AI Bloopers

Should we be surprised by, and complain about, the mistakes made by the new artificial intelligences based on deep learning (IAAP), and in particular by the autonomous pilots?

Contrary to what can be read in poorly informed journals, deep learning is a remarkably faithful simulation of the brain's natural neural networks, especially compared with older methods. This technique lets the algorithms build their own results from the data provided. They self-organize, in the same way as a young human brain. The data are processed by successive layers of algorithms equipped with feedback loops: the outputs of the first layer are assembled into a more complex result by the second layer, and so on. The process learns by testing the different possible solutions and keeping the one that seems best adapted from the viewpoint of the existing organization. Whatever the layer at which an observer takes the result, what is collected is an organizational attempt, not a definitive solution. The results tend to become more reliable as large numbers of data of the same type are processed.

However, there is always a forced limitation: the final layer, where the IAAP is asked to produce its result. That result will always be contingent on the extent of the data provided relative to the questions posed. Limits also come from the number of layers the IAAP is given, which may be excessive or insufficient. The human brain has the peculiarity of growing extra layers when it needs them, thereby increasing its intelligence (not always enough for the hopes of its owner). Thus what is demanded of the IAAP is, in the end, a hasty and poorly argued conclusion, for lack of data and reflection.
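To make the layered mechanism concrete, here is a minimal sketch in Python (my own illustration; NumPy and every name in it are assumptions, none of this appears in the original text) of a two-layer network whose feedback loop is backpropagation: the output error is fed back to correct the weights of every layer, and the intermediate results remain provisional attempts until many passes over the data have been made.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: XOR, a regularity a single layer cannot capture.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of randomly initialized weights: the network starts with
# no built-in answer and must self-organize from the data.
W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass: each layer assembles the previous layer's outputs
    # into a more complex result.
    h = sigmoid(X @ W1 + b1)     # first layer: intermediate organization
    out = sigmoid(h @ W2 + b2)   # final layer: the result the IAAP is asked for

    # Feedback loop: the output error is propagated backwards to adjust
    # the weights of both layers (backpropagation).
    grad_out = (out - y) * out * (1.0 - out)
    grad_h = (grad_out @ W2.T) * h * (1.0 - h)
    W2 -= 0.5 * h.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0)

# After many passes the predictions approach XOR; at any earlier step
# they are only an organizational attempt, not the definitive answer.
print(out.ravel().round(2))
```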

Rather human terms in that description, aren't they? Yes, we are very close to the functioning of a brain in the process of learning. Is it surprising, under these conditions, to see the IAAP get things wrong? The terms of its answer are almost those of a pupil forced by the schoolmaster to answer even though he has not learned his lesson. Without the set of data needed for a reliable answer, and the strata of intelligence required to process them, the task is impossible. Yet this is precisely what we ask of the IAAP: to detect regularities where the human mind sees none. Certainly it can do so when the problem lies in the enormous mass of data to be processed, which exceeds the capacities of neurons but not those of the added digital circuits. But the IAAP will be just as helpless if the regularities are simply too faint within the noise to be formally identified.
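A toy numerical experiment (again my own illustration in Python with NumPy; the sample size, noise levels, and detection threshold are invented for the demonstration) shows both sides of this claim: with a large mass of data a machine can certify a correlation far too faint for the eye, but past a certain noise level the regularity becomes statistically unidentifiable even for the machine.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000                       # a mass of data no eye could scan
x = rng.normal(size=n)

# A fixed regularity (y depends on x) measured under growing noise.
for noise in (1.0, 10.0, 1000.0):
    y = x + noise * rng.normal(size=n)
    r = np.corrcoef(x, y)[0, 1]
    # rough identification threshold: ~3 standard errors of a null correlation
    threshold = 3.0 / np.sqrt(n)
    verdict = "identified" if abs(r) > threshold else "lost in the noise"
    print(f"noise x{noise:>6}: r = {r:+.4f}  ({verdict})")
```

At tenfold noise the correlation of roughly 0.1 is invisible on any scatter plot yet statistically unmistakable over 100,000 points; at thousandfold noise the same underlying link sinks below any formal detection.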

An AI based on deep learning makes mistakes, inevitably. That is in its operating principle, which makes it very close to the human brain, one of whose notable characteristics… is to make mistakes. The IAAP is very different from classical, logico-deductive AI, whose computations are reliable and consistent (provided the algorithms are correct) but poorly adaptable to situations they were not designed for. Classical AIs are horizontal intelligences, able to extend the calculation proposed by the observer but not to step outside it. Powerful at sorting raw masses of data, but not at organizing them, unless told explicitly how to do so. The IAAP, on the contrary, provides results where classical AI can deduce nothing. But these results are predictions. Error is possible: frequent at the beginning, scarce after the many events encountered. Learning. Like the human brain. It is hard to blame a teenage IAAP for an error. Even as an adult, it can still make one. Like the adult human. Less often, though, because "human" error is often linked to factors outside the task: fatigue, emotion, panic, the use of psychotropic drugs. Factors that do not exist in an IAAP.
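The contrast can be caricatured in a few lines of Python (my own sketch; the spam-filter setting and both functions are invented for illustration, and the learner is deliberately reduced to a crude nearest-neighbour rather than a real deep network): the classical rule computes reliably inside its design and nothing at all outside it, while the learner answers everywhere but only ever offers fallible predictions.

```python
# Classical, logico-deductive AI: an explicit, fixed rule. Reliable and
# consistent wherever it was designed to apply, but unable to step
# outside its own calculation.
def classical_filter(subject: str) -> bool:
    return "lottery" in subject.lower()

# Learning AI, caricatured as a 1-nearest-neighbour: it organizes itself
# from examples and answers anywhere, but each answer is a prediction
# that may be wrong, especially while the examples are few.
def learned_filter(subject: str, examples) -> bool:
    def overlap(a: str, b: str) -> int:
        return len(set(a.lower().split()) & set(b.lower().split()))
    nearest = max(examples, key=lambda ex: overlap(subject, ex[0]))
    return nearest[1]

examples = [("win the lottery now", True), ("meeting at noon", False)]
print(classical_filter("Free vacation prize"))           # False: blind outside its rule
print(learned_filter("win a free prize now", examples))  # True: a fallible prediction
```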

Pragmatic conclusion: an IAAP, in order to approach its maximum efficiency quickly, should mature in a simulator rather than under real conditions whenever its errors could prove dangerous. One immediately thinks of the automatic drivers of vehicles, already blamed for the deaths of their passengers. An accusation both legitimate and unfair, since replacing all human drivers with IAAPs would dramatically reduce the number of deaths on the road. Modern humans have not hesitated to entrust their administration to computer algorithms, despite the bugs. Will they entrust their transport to them, despite the never-banished specter of a computational error?
