A supercomputer simulates a network of 77,000 neurons in real time

Researchers have succeeded in simulating, in real time, a neural network representing 1 mm² of cerebral cortex, processing information at the same speed as biology thanks to a new architecture. The achievement illustrates the tremendous progress artificial intelligence has made in recent years at mimicking the human brain.

Reproducing the biological functioning of the human brain is the dream of every researcher in artificial intelligence. A team from the University of Manchester claims to have succeeded in mimicking the functioning of a small part of the primary motor cortex, comprising 77,000 neurons and 300 million synapses.

“This is the first time that we can replicate the human brain in real time, at a speed equivalent to that of biology,” says Oliver Rhodes, author of the study published on the arXiv website. The feat was achieved on the SpiNNaker machine at the University of Manchester, a neuromorphic supercomputer equipped with 57,000 chips totalling one million processor cores. Oliver Rhodes and his colleagues managed to run their model for 12 hours at a constant, real-time speed.

A division of tasks between processor cores

In this model representing 1 mm² of cortex, the artificial neurons are arranged in layers and interconnected by multiple connections. Each neuron receives electrical impulses, either directly from the neurons to which it is connected or as delayed signals from other synapses. When the accumulated signals exceed a certain threshold, the neuron fires a response in the form of an electrical pulse.
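To make this mechanism concrete, here is a minimal sketch of the threshold behaviour described above, written as a simple leaky integrate-and-fire neuron. The class name, parameter values and leak factor are illustrative assumptions, not those of the Manchester model.

```python
class LIFNeuron:
    """Toy leaky integrate-and-fire neuron (illustrative parameters)."""

    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0        # accumulated membrane potential
        self.threshold = threshold  # firing threshold
        self.leak = leak            # decay applied at each time step

    def step(self, incoming_current):
        """Integrate an incoming signal; emit a spike when the threshold is crossed."""
        self.potential = self.potential * self.leak + incoming_current
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return True             # spike emitted
        return False

# Example: a burst of inputs pushes the neuron over its threshold on the third step.
neuron = LIFNeuron()
for current in [0.3, 0.4, 0.5, 0.1]:
    print(neuron.step(current))     # False, False, True, False
```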

“The main problem with brain simulations is the peaks of activity that a processing core has to handle in order to decide which neurons should be stimulated in turn,” Oliver Rhodes says. This overload increases processing time and reduces efficiency.

To circumvent this difficulty, the researchers developed a strategy called “heterogeneous parallelization”, in which the cores work simultaneously and cooperatively: some are responsible for updating neuron state, others handle signals arriving directly, and still others deal with “packets” of delayed signals. Not only does this reduce the load assigned to each core, but the signal packets are now routed to each core according to their type (inhibitory or excitatory). “Cores handling inhibitory synapses therefore only process spikes flagged as inhibitory,” and not all signals, Oliver Rhodes says. A hypothetical sketch of this division of labour follows below.
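In the sketch below, spike packets are tagged as excitatory or inhibitory and dispatched only to the worker responsible for that type, while a separate worker updates neuron state. The class names and data layout are assumptions made for illustration; they are not the SpiNNaker software interface.

```python
from collections import defaultdict

class SynapseWorker:
    """Handles only spike packets of one type (excitatory or inhibitory)."""

    def __init__(self, kind):
        self.kind = kind
        self.accumulated = defaultdict(float)   # target neuron id -> summed weight

    def process(self, packet):
        # packet is a list of (target_neuron_id, weight) pairs
        for target, weight in packet:
            self.accumulated[target] += weight

class NeuronWorker:
    """Updates neuron state from the contributions gathered by the synapse workers."""

    def __init__(self, n_neurons, threshold=1.0):
        self.potentials = [0.0] * n_neurons
        self.threshold = threshold

    def update(self, excitatory, inhibitory):
        spikes = []
        for i in range(len(self.potentials)):
            self.potentials[i] += excitatory.get(i, 0.0) - inhibitory.get(i, 0.0)
            if self.potentials[i] >= self.threshold:
                self.potentials[i] = 0.0         # reset after firing
                spikes.append(i)
        return spikes

# Route each packet only to the worker that handles its type.
exc_worker, inh_worker = SynapseWorker("exc"), SynapseWorker("inh")
neurons = NeuronWorker(n_neurons=4)

packets = [("exc", [(0, 0.6), (2, 0.7)]), ("inh", [(2, 0.3)]), ("exc", [(0, 0.5)])]
for kind, packet in packets:
    (exc_worker if kind == "exc" else inh_worker).process(packet)

print(neurons.update(exc_worker.accumulated, inh_worker.accumulated))  # -> [0]
```

The point of the split is that an inhibitory worker never even sees excitatory packets (and vice versa), which mirrors the article's description of cores processing only the spike type assigned to them.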

When artificial neural networks reach their limits

If neuromorphic chips are of interest to researchers, it is because artificial neural networks running on conventional silicon are real energy sinks when it comes to performing millions of calculations in parallel. “The traditional computers on which they run are based on an architecture dating back to the 1950s, which separates memory and the processing unit into two distinct blocks,” says Damien Querlioz, a researcher at the Centre for Nanosciences and Nanotechnology of the CNRS.

As a result, “every neuron has to fetch data that is sometimes stored far away at the microelectronic scale.” As the number of calculations grows, this creates terrible “traffic jams”, and accessing memory consumes a great deal of energy.

The AlphaGo program, the Google AI that recently beat the greatest Go champions, consumes 10,000 times more energy than a human player. In the brain, computation and memory are combined: schematically, neurons play the role of processor and synapses that of memory. This architecture is less good at raw calculation, but far more efficient when it comes to, say, recognizing a cat in an image.

According to Markus Diesmann, a researcher at the Jülich Research Centre in Germany who designed the model simulated on SpiNNaker, a neuromorphic artificial brain like the one developed by Oliver Rhodes could one day equip robots, making them (almost) as capable as humans. We are still a long way off: the human brain contains no fewer than 100 billion neurons and a million billion synapses, far from the 77,000 neurons of this artificial mini-brain.