Artificial Intelligence: History and Features

Artificial intelligence

Artificial intelligence, or computational intelligence, refers to the intelligence that machines or computer systems can display. The term is controversial in the sense that it is difficult to define exactly what intelligence is, although it is usually understood as the ability of a computer system to rationally perceive its environment and adapt its strategies to achieve its objectives.

Commonly, artificial intelligence is limited to machines imitating human intelligence, giving the user the impression of interacting with another being endowed with individuality.

However, as technology and computing advance, the emergence of truly intelligent computational beings, commonly referred to as AIs (Artificial Intelligences), is expected.

Features of artificial intelligence

Origin of the term

Although the idea of the intelligent machine has been with us for a long time, with examples such as the Golem or the robots of science fiction, the term “Artificial Intelligence” was first used in 1956 by John McCarthy, an eminent American computer scientist who contributed enormously to this field of study.

Concept

The concept of Artificial Intelligence is still diffuse. In general terms, it refers to the attempt to build a computer system that reproduces, and even transcends, the thinking tasks of the human brain, with the same margin of autonomy, individuality, and creativity, while exploiting the speed and scale of computation that computers offer.

This concept usually encompasses the rational and logical aspects of thought, but it runs into difficulty with concepts of another nature, such as love, commitment, or morality.

Schools of thought

The study of Artificial Intelligence covers two different schools:

  • Symbolic-deductive Artificial Intelligence. Also known as conventional Artificial Intelligence, it tries to understand and replicate human behavior from a formal and statistical analysis perspective.
  • Subsymbolic-inductive Artificial Intelligence. Also called Computational Artificial Intelligence, it pursues interactive learning and development based on empirical data and the adjustment of connection parameters.
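To make the contrast concrete, here is a minimal Python sketch, assuming a toy task (the logical AND function) and illustrative parameter choices not drawn from any particular system: a hand-written symbolic rule alongside a single perceptron that induces the same behavior from empirical examples by adjusting its connection weights.

```python
# Toy contrast between the two schools; every name and parameter here
# is an illustrative assumption, not a reference implementation.

# Symbolic-deductive: the behavior is captured as an explicit rule.
def symbolic_and(x, y):
    return 1 if (x == 1 and y == 1) else 0

# Subsymbolic-inductive: the same behavior is learned from empirical
# data by adjusting connection parameters (a single perceptron).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1 = w2 = bias = 0.0
for _ in range(10):                       # a few passes over the examples
    for (x, y), target in data:
        out = 1 if w1 * x + w2 * y + bias > 0 else 0
        error = target - out
        w1 += 0.1 * error * x             # perceptron learning rule
        w2 += 0.1 * error * y
        bias += 0.1 * error

# After training, the learned unit agrees with the hand-written rule.
assert all(symbolic_and(x, y) == (1 if w1 * x + w2 * y + bias > 0 else 0)
           for (x, y), _ in data)
```

The symbolic version encodes its rule explicitly and can be inspected directly; the subsymbolic version arrives at equivalent behavior only through repeated parameter adjustment, which is the sense in which it learns from data.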

Pillars

Four pillars are considered in the study and development of Artificial Intelligence (a minimal sketch of the second pillar follows the list):

  1. The search for a desired state among those reachable through the actions available at a given moment, that is, free choice.
  2. Genetic algorithms inspired by the human genetic code (DNA).
  3. Artificial neural networks, which mimic the functioning of organic brains.
  4. Formal logic reasoning, similar to the abstract thinking of humans.
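As an illustration of the second pillar, the following Python sketch is a minimal genetic algorithm that evolves a bit string toward an all-ones target; the problem, the parameters, and all names are assumptions chosen for the example rather than a canonical algorithm.

```python
import random

GENOME_LENGTH = 16   # bits per individual (illustrative choice)
POPULATION = 30
GENERATIONS = 50

def fitness(genome):
    # Fitness: the number of 1-bits (the classic "OneMax" toy problem).
    return sum(genome)

def crossover(a, b):
    # Single-point crossover, loosely analogous to DNA recombination.
    point = random.randrange(1, GENOME_LENGTH)
    return a[:point] + b[point:]

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability, analogous to mutation.
    return [1 - bit if random.random() < rate else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(POPULATION)]

for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENOME_LENGTH:
        break                                 # perfect individual found
    parents = population[:POPULATION // 2]    # keep the fitter half
    population = [mutate(crossover(random.choice(parents),
                                   random.choice(parents)))
                  for _ in range(POPULATION)]

best = max(population, key=fitness)
print(f"generation {generation}: best fitness {fitness(best)}/{GENOME_LENGTH}")
```

The same loop structure, with a different genome encoding and fitness function, underlies most applications of genetic algorithms.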

Applications

Contemporary applications of Artificial Intelligence, across its various prototypes and stages of development, can be summarized as:

  • Video games and smart entertainment software.
  • Digital support for online services and computer programs.
  • Massive data and information processing systems.
  • Robotics and complex automation systems.

Turing test

One problem with AI is the real difficulty of distinguishing between a truly intelligent artificial system and one merely programmed to give that impression to the user (to fake it).

To address this, the English mathematician and computer scientist Alan Turing designed a test, later named in his honor, which consisted of having a person read a conversation between another individual and a computer programmed to imitate human intelligence in its responses.

If after 5 minutes the observer was unable to distinguish the machine from the person, the system would have passed the test.
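The structure of the test can be sketched in Python; the respondent functions, the random judge, and the replacement of the 5-minute limit with a fixed list of questions are all simplifying assumptions made for illustration.

```python
import random

# Schematic of the imitation-game protocol, not a real implementation.
def human_respondent(question):
    return "a human answer to: " + question

def machine_respondent(question):
    # By assumption, a perfect imitator: its answers match the human's.
    return "a human answer to: " + question

def run_test(questions, judge):
    # Hide which respondent is which behind anonymous labels A and B.
    respondents = {"A": human_respondent, "B": machine_respondent}
    transcripts = {label: [reply(q) for q in questions]
                   for label, reply in respondents.items()}
    guess = judge(transcripts)   # the judge names the label it thinks is human
    return guess == "A"          # True if the judge identified the human

# A judge facing indistinguishable transcripts can only guess at random,
# so the machine escapes detection about half the time.
judge = lambda transcripts: random.choice(["A", "B"])
escapes = sum(not run_test(["What is love?"], judge) for _ in range(1000))
print(f"machine escaped detection in {escapes} of 1000 trials")
```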

History

The first work properly belonging to the field of Artificial Intelligence is the 1943 model of artificial neurons by Warren McCulloch and Walter Pitts, although the term itself had not yet been coined.
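A McCulloch-Pitts neuron is essentially a threshold unit: it fires (outputs 1) when the weighted sum of its binary inputs reaches a threshold. The Python sketch below is a simplified rendering; the original 1943 model distinguished excitatory and inhibitory inputs, which is glossed over here.

```python
def mcculloch_pitts(inputs, weights, threshold):
    # Fire when the weighted sum of binary inputs reaches the threshold.
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With suitable thresholds, the same unit realizes different logic gates.
AND = lambda x, y: mcculloch_pitts([x, y], [1, 1], threshold=2)
OR = lambda x, y: mcculloch_pitts([x, y], [1, 1], threshold=1)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
```

Networks of such units can compute any boolean function, which is why this work is regarded as a founding result for artificial neural networks.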

Turing's arrival and his work in the area from 1950 onward inaugurated a computational branch that grew by leaps and bounds during the 1960s and 1970s, first with expert systems supporting the solution of mathematical equations and, later, with computer scripts.

An important milestone in Artificial Intelligence came in 1997, when the chess champion Garry Kasparov lost to Deep Blue, a computer specialized in the game. Many saw in this an announcement of the intelligent computers to come.

Fears

Artificial Intelligence is not always greeted with enthusiasm. Many see in it a technological tool capable of displacing many individuals from their jobs, since an AI could do the same work in less time, without taking breaks or claiming rights. Robotization thus represents a hope for industrialists and a threat for workers.

On the other hand, there have been many warnings about the dangers of a rational logic devoid of empathy, emotionality, and affective commitment, capable of making decisions that are cold and painful for humanity in pursuit of some abstract goal.

Fiction

Intelligent computers have been a constant presence throughout science fiction and fantasy, both in film and in literature, sometimes acting as helpers and at other times as antagonists of the story.

The American writer Isaac Asimov was prolific on the subject through his robot stories. The idea of Artificial Intelligence is often accompanied by dystopian scenarios and technological nightmares such as those depicted in the films The Terminator (1984) or The Matrix (1999).

Future Advances

The possible applications of a truly intelligent computer system are endless, but they point toward the full automation of vehicle driving, industrial work, and assistance to people in various fields: at home, in research, in the workplace, or in the handling of telecommunications, for example.
