Infographic: Artificial Intelligence

The history of Artificial Intelligence

Artificial Intelligence was born in the 1950s, when several research projects were carried out almost simultaneously, especially in the United States. Let's retrace the history of Artificial Intelligence through the key points of its evolution.

1950s: several milestones reached

• 1950 – Turing's work (the Turing machine): Alan Turing, a mathematician, develops and conceptualizes the premises of Artificial Intelligence. Through his "imitation game", he gave a mathematical framing to the question of machine intelligence. This gave rise to the famous Turing test, which evaluates a machine's ability to hold a conversation indistinguishable from a human's.

At the same time, Claude Shannon, who developed information theory at Bell Labs, applied it to formalize the game of chess mathematically and to calculate the best moves to win a game.


• 1956 – A key moment in the history of Artificial Intelligence. John McCarthy organizes the famous conference at Dartmouth College, together with colleagues including Minsky, Shannon, Newell, Samuel and Simon. It was there that McCarthy coined the name "Artificial Intelligence" and defined the goal of the field: to model human intelligence. Artificial Intelligence then became an academic discipline in its own right.


• 1958 – John McCarthy creates the LISP language at MIT, a language specialized in Artificial Intelligence programming.

 

• 1959 – The researcher and computer scientist Arthur Samuel coins the term "Machine Learning". Scientists realize that a machine must be given all the information it needs to "understand" its environment; in other words, an AI must be taught the basic common-sense facts that all humans know. For example: "if you cut a piece of butter in half, you get two pieces of butter, but if you cut a table in half, you don't get two tables". This is the beginning of Machine Learning.

Advances in Artificial Intelligence

• 1964 – Creation of ELIZA, a computer program developed at MIT by Joseph Weizenbaum, the ancestor of the chatbots we know today.


• 1967 – Richard Greenblatt, a researcher at MIT, develops the first automatic chess program capable of defeating average players.


• 1971 – Development of SHRDLU by Terry Winograd at MIT, a program that allows dialogue and question-and-answer exchanges between an AI and a human.


• 1975 – "AI Winter": AI research slows down. The lack of computing power makes it impossible to build models capable of reaching the expected results.


• 1997 – Deep Blue, created by IBM, defeats Garry Kasparov, then the reigning world chess champion. This is considered a historic moment in the development of Artificial Intelligence: for the first time, a machine outperforms a human champion at a task long seen as requiring intelligence.


• The 2000s – The beginning of the digital era. The democratization of computers, the arrival of smartphones and other disruptive technologies allow Artificial Intelligence research and applications to develop and reach more and more sectors of activity.

Artificial Intelligence: where are we today?

Today, we are witnessing a real boom in Deep Learning technology. This dazzling evolution has been possible in recent years thanks to several notable factors:

  • Technological progress: Deep Learning has been made practical by the computing power of modern graphics cards, notably those of the manufacturer NVIDIA.
  • Open-source and public access to Machine Learning (especially for and by the scientific and academic community, which gains access to Machine Learning models and can thus advance Artificial Intelligence research more quickly).
  • The development of specialized Deep Learning frameworks. Today, many frameworks are available to Artificial Intelligence practitioners. They aim to support the development of Artificial Intelligence based on predictive analysis, in other words Deep Learning. Among the best known are TensorFlow and Caffe, two frameworks that have helped democratize the use and development of Deep Learning (a minimal example follows this list).
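To make this concrete, here is a minimal sketch of what such a framework provides: defining and training a tiny neural network in a few lines with TensorFlow's Keras API. The layer sizes and the synthetic data below are arbitrary assumptions chosen for illustration, not taken from the article or any real project.

```python
# Minimal illustrative sketch: a tiny binary classifier in TensorFlow/Keras.
# The data is random and serves only to show the workflow (assumption).
import numpy as np
import tensorflow as tf

# Synthetic dataset: 100 samples, 4 features each, binary labels.
x = np.random.rand(100, 4).astype("float32")
y = np.random.randint(0, 2, size=(100, 1))

# A small feed-forward network: one hidden layer, one sigmoid output.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# The framework absorbs the hard parts: differentiation, optimization,
# GPU execution, and the training loop are handled behind these calls.
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=5, verbose=0)
```

The point is less the model itself than how much machinery the framework hides: gradient computation and the training loop sit behind a handful of calls, which is precisely what opened Deep Learning up to a much wider audience.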
 

• The 2010s – Rush to Artificial Intelligence
o Development of new Artificial Intelligence processes: Deep Learning.
o Data is at the center of these new technologies.
o The biggest companies open their own research centers (Facebook with FAIR, directed by Yann LeCun, as well as Amazon, Microsoft, Apple…).

  

• 2011 – Watson, an Artificial Intelligence developed by IBM, wins the American game show Jeopardy! against two of the show's former champions.

 

• 2016 – AlphaGo, developed by DeepMind (a company acquired by Google): an AI beats the best human players at the game of Go.

  

• 2018 – AlphaStar, a Deep Learning program also developed by DeepMind, takes on video games, notably StarCraft II, and wins 10 consecutive games against professional players. This success is considered a milestone because the game requires managing, simultaneously and in real time, dozens of units of different types.

 

• 2019 – IBM's debating AI, Project Debater, takes part in a debating contest against a champion debater. Although it did not win the competition, the scientific community is delighted with the AI's performance, which remains very impressive: the system is able to process natural language and use it in a live debate. In the same year, Pluribus, an AI developed by Facebook, achieves a world first by beating five of the world's best poker players simultaneously in a multi-player game.

   

Artificial Intelligence has gone through a complex and rocky development over the last decades. Today, the technology is generating real enthusiasm, and the advances of Deep Learning are driving a genuine revival.

A word from the expert

Today, we are witnessing an exponential development of AI technology. Many people question the limits of this technology, and it is easy to imagine catastrophic scenarios for the future of AI.

So, when will we see killer robots and evil AIs?

"

We are both far from and close to seeing AIs with the intelligence needed to dominate the human species.

Far, because even though progress in AI is remarkable, we still have no way to give an AI a deductive intelligence that would provide it with common sense. That breakthrough could take decades.

But we are also very close, because computer science is a very young discipline and yet already revolutionary.

We must also distinguish between the ability to produce killer robots and the act of actually using them to kill. By way of comparison with this catastrophic scenario: we have had nuclear weapons for decades, yet that has not stopped us from living alongside these weapons of mass destruction; on the contrary, we have even managed to put them to civilian use.

I think it would be the same with AI, except for one thing: not everyone can afford uranium, whereas developing a killer AI would require only three things:

– A computer

– An Internet connection

– And a little curiosity…

To put things in perspective, I would say that we are still far from obtaining destructive AIs that act independently of human will, but once the scientific discovery is made, the accessibility of such a system could have drastic consequences for our society. This is why it is crucial, in the race to develop AI, to integrate from now on, deep into the software bricks, a principle that guarantees the common good: ethics."

Christophe Renaudineau, Associate Researcher at Robank Hood.
