“Will machines replace humans?” It is 1940, and Franklin D. Roosevelt, in a debate with the president of MIT, is concerned about the impact of machines on the unemployment curve. Sixteen years later, at the Dartmouth conference, artificial intelligence (AI) officially became a scientific discipline. This time, robots would surely replace humans; it was only a matter of years, perhaps months… Sixty years on, our colleagues at work are still made of flesh and blood, and our ambitions have been scaled back. The latest example to date: the autonomous car, which four years ago was promised for 2020. A word of advice: hold on to your driving licence for at least another ten years.
Yes, it is likely that one day artificial intelligence will do everything as well as a human being, as Geoffrey Hinton, co-winner of the 2018 Turing Award, has predicted. But it is not a matter of months, or even years… Half a century? A few hundred years? More? It would be very risky to start making estimates.
Researchers face several major obstacles that keep AI at a “weak” stage. Algorithms are currently able to solve “specific” problems for which they have been trained (e.g. games), to interpret sensory data in a rudimentary way (speech and image recognition) and even to generate voices, texts or images, as Samsung, among others, recently demonstrated. Deep learning, based on artificial neural networks, has taken us a step further in recent years. But just because an AI can beat the world champion at Go, or any human being at chess, does not mean that it is “strong”. To convince yourself, try asking AlphaGo to remember your shopping list: you are unlikely to come home from the supermarket with full bags. It has even been estimated that the IQ of today’s artificial intelligence is equivalent to that of a four-year-old child (a result to be interpreted with caution, since the AI tested had been programmed specifically for the skills the test assesses). That would have reassured President Roosevelt.
Four major areas are not yet mastered by this “four-year-old child”, areas in which we humans hold a serious advantage.
– Independent planning and adaptation. Imagine a robot cleaning a park. When its battery runs low, it cannot generate a plan to recharge itself: its programmers must have built in a routine for locating and reaching the charging station. And if that station is one day out of order, the robot will not be able to adapt on its own unless its designers anticipated the case in advance, whereas a human agent would have no trouble improvising a backup plan if the sandwich shop in the park is closed. In short, in an uncertain environment, our artificial intelligences are no longer very intelligent. Admittedly, combining today’s techniques of deep learning and reinforcement learning allows our robot to learn from its environment and from changes within it, but only in closed environments with fixed, known rules, such as a Go or chess board (a minimal example is sketched after this list), not on a road network, where the unexpected can happen at any time.
– The ability to learn from few examples. Take the same robot, in the same park. To identify an approaching dog as a potential threat, it first had to ingest millions of photos with and without dogs. However “intelligent” they may be, today’s algorithms need a great many examples to recognize what a dog, a tree or a table is. The four-year-old did not need thousands or millions of examples of dogs to recognize one. One line of research, known as “transfer learning”, aims to let our robot learn to recognize whatever environment it finds itself in, however varied, from a limited number of examples (see the second sketch after this list).
– Learning from explanations. Today’s AI algorithms are trained only with examples and cannot take full advantage of a conceptualization of what they have learned. You can tell a child that a panther is a big cat, and that a boat has no legs (otherwise it would walk). The child will then be able to recognize panthers and will not expect to see a picture of a boat in shorts. A machine cannot do this: it will not identify a panther unless it has seen many examples beforehand, and it will never be troubled by a picture of a catamaran out for a stroll.
– Explainability of results. Humans are usually able, when asked, to explain at least partially why they made a particular decision. The most advanced AIs are very poor teachers when it comes to explaining how they solved a problem. This is worrying at a time when they assist more and more bankers, insurers and doctors. Modern deep learning algorithms are made up of millions of artificial neurons that organize themselves during training and then operate as a “black box”: even their designers cannot easily interpret how they produce their results. This is both practical (they can solve very complex problems very quickly) and deeply problematic: how do you justify to a client a loan refusal decided by an algorithm? How do you understand why an autonomous car chose an extremely risky manoeuvre, at the cost of material or even human damage? And even when it makes the right call, how do we trust an artificial intelligence whose diagnosis contradicts that of a specialist doctor? (One way to probe such a black box from the outside is sketched at the end of this list.)
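On the first point, here is a minimal sketch (not the article’s robot; the grid, rewards and hyperparameters are illustrative) of tabular Q-learning in a tiny closed grid world with fixed, known rules. The agent learns a policy for this exact grid only; change the layout and it must be retrained, which is precisely the brittleness described above.

```python
# Minimal sketch: tabular Q-learning in a fixed 5x5 grid world.
# States are cell indices 0..24; a hypothetical "charging station"
# sits in one corner and ends the episode.
import numpy as np

N = 5                       # grid side length
GOAL = 24                   # bottom-right corner
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    r, c = divmod(state, N)
    dr, dc = ACTIONS[action]
    r, c = min(max(r + dr, 0), N - 1), min(max(c + dc, 0), N - 1)
    nxt = r * N + c
    return nxt, (1.0 if nxt == GOAL else -0.01), nxt == GOAL

Q = np.zeros((N * N, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1   # learning rate, discount, exploration
rng = np.random.default_rng(0)

for episode in range(2000):
    s, done = 0, False
    while not done:
        a = rng.integers(4) if rng.random() < eps else int(Q[s].argmax())
        s2, reward, done = step(s, a)
        # Q-learning update: nudge Q(s,a) toward reward + discounted best next value
        Q[s, a] += alpha * (reward + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(Q.argmax(axis=1).reshape(N, N))  # greedy action learned for each cell
```

Nothing in the learned table transfers to a different park: the values are tied to this grid’s exact states, which is why an unexpected change defeats the agent.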
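On learning from few examples, here is a minimal transfer-learning sketch, assuming PyTorch and torchvision are available (the tiny random “dataset” is a hypothetical stand-in for real photos). A network pretrained on millions of ImageNet images is reused: its feature extractor is frozen and only a small new head is trained, so a handful of labelled examples, say “dog” versus “no dog”, can suffice.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

# Reuse features learned on ImageNet and freeze them.
model = resnet18(weights=ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a fresh 2-class head; only it will be trained.
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Hypothetical tiny dataset: 8 preprocessed 224x224 images with labels.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

model.train()
for _ in range(20):                  # a few passes over the few examples
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()                  # gradients reach only the new head
    optimizer.step()
```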
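Finally, on explainability: one common workaround, sketched here with scikit-learn, is to probe the black box from the outside. Permutation importance does not open the box; it only measures how much the model’s score degrades when each input feature is shuffled. The loan-style feature names and the synthetic data are purely illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for loan-application data.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
feature_names = ["income", "debt", "age", "tenure", "savings"]  # hypothetical

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record how much the test score drops.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda p: -p[1]):
    print(f"{name}: {score:.3f}")  # larger drop = the model leans on it more
```

Such probes give a partial answer to the client who asks “why was I refused?”, but they remain descriptions of the box’s behaviour, not of its reasoning.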
These obstacles keep today’s artificial intelligences “weak”, leaving humans with the upper hand. All the more so because researchers still lack a theoretical formalization of the algorithms they use: some theorems exist, but our expertise rests largely on empirical knowledge rather than on watertight mathematical theory. We grope, advance and adjust until we finally reach our goals. For if machines sometimes misunderstand humans, humans also struggle to understand how their machines work.