How to park a car with 12 neurons

It takes tens of thousands of artificial neurons and colossal computing power to learn to recognize a photo of a cat. Researchers have nevertheless succeeded in pulling off a parallel-parking maneuver with a network of barely 12 neurons, drawing inspiration from the nervous system of a small worm one millimeter long.

And to think that car manufacturers have poured decades of work and a flood of technology into cars that park themselves. Even then, in most cases it is merely an assistance feature: the driver must still operate the clutch, the accelerator and the brake. Yet a simple worm one millimeter long can manage a parallel-parking maneuver with fewer than 12 neurons!

Researchers at the University of Vienna (Austria), in collaboration with the Massachusetts Institute of Technology (MIT), have built an ultra-simple neural network inspired by the worm C. elegans, the only living organism whose nervous system has been fully mapped, a feat completed in 2019. This small worm has between 302 and 385 neurons, linked together by about 8,000 synapses. In a study posted on the preprint server arXiv, Ramin Hasani and his colleagues reproduced an electronic version of this nervous system comprising only 12 neurons, and succeeded in teaching it to carry out simple tasks, such as steering a small robotic car along a predefined path.

An evolving neural network whose connections vary over time

The point of this work is obviously not to teach a worm to drive a car (although, why not?). It is above all to compare the workings of artificial neural networks with those of the biological brain. "In our new architecture, the link between a neuron A and a neuron B in the layer below is not constant, but varies over time according to a nonlinear function," explains Ramin Hasani.
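The idea of a connection that varies over time can be sketched in a few lines. The function below is purely illustrative (the tanh-and-sine form is an assumption, not the authors' actual model): instead of a fixed number, the effective weight between two neurons depends on the presynaptic activity and on time through a nonlinearity.

```python
import math

def time_varying_weight(pre_activity, t, base_weight=0.5):
    """Hypothetical time-dependent synaptic strength.

    In a conventional network, the weight between neuron A and
    neuron B is a fixed number. Here it is modulated both by the
    presynaptic activity and by time, through a nonlinear function.
    The specific tanh/sine form is illustrative only.
    """
    return base_weight * math.tanh(pre_activity) * (1 + 0.5 * math.sin(t))

# The same input yields a different effective connection at
# different moments: the network's wiring changes as time passes.
w0 = time_varying_weight(1.0, 0.0)
w1 = time_varying_weight(1.0, 1.0)
```

With a constant weight, `w0` and `w1` would be equal; here they differ because the connection itself evolves in time.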

This architecture is suited to processing information that arrives progressively, for example when following live speech or when behavior must adapt to movement in the environment. Such tasks call for so-called "recurrent" neural networks (RNNs), in which information can loop back on itself, allowing the network to "remember" earlier inputs. But the memory of these RNNs is relatively limited (they "forget" information that is too far in the past), and they are unable to anticipate future information.