Offbeat science: how to parallel park a car with 12 neurons

It takes tens of thousands of artificial neurons and massive computing power to learn to recognize a cat photo. Yet researchers have managed to parallel park a car using a network of just 12 neurons, inspired by the nervous system of a millimeter-long worm.

And to think that car manufacturers have deployed a wealth of technology over decades to build cars that park themselves. Even then, in most cases it is only an assistance feature: the driver must still handle the clutch, the accelerator, and the brake. Now imagine that a network inspired by a millimeter-long worm can perform a parallel-parking maneuver with fewer than 12 neurons!

Researchers at the University of Vienna (Austria), in collaboration with the Massachusetts Institute of Technology (MIT), have built an ultra-simple neural network inspired by the C. elegans worm, the only living organism whose nervous system has been fully modeled, a feat completed in 2019.

This small worm has between 302 and 385 neurons, connected to each other by about 8,000 synapses. In a study published on the arXiv preprint server, Ramin Hasani and his colleagues reproduced an electronic version of this nervous system comprising only 12 neurons, and they managed to teach it to perform simple tasks, such as maneuvering a small robotic car along a predefined path.

An evolving neural network whose connections vary over time

The point of this work is obviously not to teach a worm to drive a car (although, who knows?). It is above all to bring the functioning of artificial neural networks closer to that of the biological brain. "In our new architecture, the link between a neuron A and a neuron B in the layer below is not constant, but varies over time according to a nonlinear function," says Ramin Hasani.
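To make the idea concrete, here is a minimal toy sketch of a connection whose strength is a nonlinear function of time rather than a fixed number. The particular formula (`tanh` of a sine of time) and the function names are illustrative assumptions, not the equations from the paper:

```python
import math

def synapse_weight(t, w_base=0.5, amp=0.3, tau=2.0):
    """Illustrative time-varying link between neuron A and neuron B:
    instead of a constant weight, the strength is modulated by a
    nonlinear function of time t. (Hypothetical form, for intuition
    only -- not the paper's actual model.)"""
    return w_base + amp * math.tanh(math.sin(t / tau))

def neuron_b_output(a_activity, t):
    """Neuron B's response to neuron A's activity at time t."""
    return math.tanh(synapse_weight(t) * a_activity)

# The same input from neuron A yields different outputs at
# different moments, because the connection itself changes:
for t in (0.0, 1.0, 2.0):
    print(round(neuron_b_output(1.0, t), 3))
```

The key contrast with a classical network is visible in the loop: identical input, different output, simply because time has passed.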

This architecture makes it possible to process information that arrives progressively, for example when a person is delivering a live speech or when we must adapt our behavior to a changing environment.

This kind of task calls for so-called recurrent neural networks (RNNs), in which information can propagate in both directions, making it possible to "remember" earlier information. But the memory of these RNNs is relatively limited (they "forget" information that is too far in the past), and they are unable to anticipate future information.
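This limited memory can be seen in a minimal recurrent unit, sketched below under simple assumptions (a single neuron, hand-picked weights, `tanh` activation): the hidden state carries a trace of past inputs, but that trace fades with every step.

```python
import math

def rnn_step(h, x, w_h=0.5, w_x=1.0):
    """One step of a toy recurrent unit: the new hidden state mixes
    the previous state (the network's memory) with the current input.
    Illustrative only -- not the 12-neuron network from the study."""
    return math.tanh(w_h * h + w_x * x)

# Feed one impulse followed by silence: the trace of the old input
# shrinks at every step -- the network gradually "forgets" it.
h = 0.0
trace = []
for x in [1.0, 0.0, 0.0, 0.0, 0.0]:
    h = rnn_step(h, x)
    trace.append(round(h, 3))
print(trace)
```

After the initial impulse, each state is a damped function of the previous one, which is exactly why plain RNNs struggle with information that lies too far in the past.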