Neuromorphic computers: a solution to energy-hungry artificial intelligence?

This article is part of a two-part series about so-called neuromorphic computers, whose design resembles our own brain. These computers are theoretically powerful and energy efficient, and can even communicate directly with our bodies. Researchers are looking not only at silicon systems, as described in this article, but also at computers made from organic materials. NEMO Kennislink editor Esther Thole visited Yoeri van de Burgt’s laboratory at Eindhoven University of Technology, where the building blocks for organic computers are made.

Unlocking your phone with your face, finding your favorite music, talking to a digital assistant, or driving a car with steering assistance – all thanks to artificial intelligence, which loosely resembles the information processing in our brain and which, in the space of about twenty years, has become important in countless computer applications.

Russian chess grandmaster Garry Kasparov in 2005. The world champion was defeated in 1997 by the computer Deep Blue – the first time a computer proved stronger than a human chess champion.

One disadvantage is that artificial intelligence consumes a lot of energy. Researchers have calculated that fully self-driving electric cars will probably spend ten to thirty percent of their energy on the computer that controls the car. Or take the computer program AlphaGo, which defeated a human Go champion for the first time in 2016. It ran on about 2,000 processors, which reportedly consumed a million watts of electricity – quite a lot compared to the roughly 20 watts that its human opponent needed.

Many applications of artificial intelligence run in large data centers, where plenty of energy is available (one to two percent of total global electricity consumption currently goes to data centers). But this makes us dependent on fast and reliable connections. Wouldn’t it be much smarter to do this kind of computation where it is needed? A camera that interprets what it sees by itself, for example, no longer has to share all its data with a computer located somewhere else.

Computing power is currently concentrating in the hands of large technology companies. Researchers are working on computer components that could reverse this trend: computers that are ultra-efficient and still powerful enough for many of the applications above. They draw inspiration from our own brain, which differs fundamentally from the classical computer architecture. A pile of small gold particles, it turns out, can behave like a group of artificial brain cells.

A brain in silicon

At first glance, a classical computer seems fundamentally unsuited to simulating a brain. It is true that an average computer chip contains billions of transistors (which you can think of as tiny switches), but they work differently from neurons, the cells that make up our brain.

A transistor receives a signal and can either transmit it or block it. It is like a floodgate that is completely open or completely closed: the transistor passes a ‘1’ or a ‘0’, all or nothing. A neuron also sends a signal, but it only does so when its input reaches a so-called threshold value. Think of it more as a kind of cannon that only ‘fires’ when it receives a signal from a number of inputs at the same time.

In addition, neurons retain a ‘state’: the connections between neurons (called synapses) are adaptable and transmit signals more or less easily depending on all the previous signals that have passed through them. In this sense, a transistor has no memory.
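To make the difference concrete, here is a minimal sketch in Python – an illustration for this article, not code from the research described here; the function names, weights and threshold are invented for the example – of a transistor-like switch next to a threshold neuron:

```python
# Illustrative sketch: a transistor-like switch versus a threshold neuron.

def transistor(gate_open: bool, signal: int) -> int:
    """All or nothing: pass the signal through, or block it completely."""
    return signal if gate_open else 0

def neuron(inputs: list[float], weights: list[float], threshold: float) -> int:
    """Fire (output 1) only if the weighted sum of the inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# One input alone is not enough to make the neuron fire...
print(neuron([1, 0, 0], [0.4, 0.4, 0.4], threshold=1.0))  # -> 0
# ...but several simultaneous inputs push it over the threshold.
print(neuron([1, 1, 1], [0.4, 0.4, 0.4], threshold=1.0))  # -> 1
```

In a real network, artificial or biological, the weights would additionally change with use – that is the adaptability and memory described above.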

Strong together

A simplified schematic representation of a neural network as found in the brain. The red ‘brain cells’ receive a signal (left) and transmit it to a greater or lesser extent to the blue and then the yellow cells, which eventually produce a result (right).

Artificial intelligence costs a lot of energy because it rests on an enormous number of calculations over a large pile of data. Professor of nanoelectronics Wilfred van der Wiel of the University of Twente explains that it essentially comes down to multiplying huge series of numbers (so-called vector-matrix multiplications). Each multiplication requires a number to be fetched from memory, and the result to be sent back to memory afterwards. “There are many processing steps, and the only reason it works reasonably well at the moment is that a computer performs these steps in quick succession,” Van der Wiel says.
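As a small illustration (ours, with invented numbers), this is what such a vector-matrix multiplication looks like in Python: every output value is a weighted sum of all the inputs, and on a classical computer every multiply-and-add step involves a round trip to memory.

```python
# Illustrative vector-matrix multiplication, the core operation of a
# neural-network layer: each output is a weighted sum of all inputs.

inputs = [0.5, -1.0, 2.0]          # one input vector (3 values)
weights = [[0.1, 0.4],             # 3x2 weight matrix:
           [-0.2, 0.3],            # 3 inputs, 2 outputs
           [0.7, -0.5]]

# On a classical computer, every multiply-accumulate below means fetching
# numbers from memory, computing, and writing the result back.
outputs = [sum(x * w for x, w in zip(inputs, column))
           for column in zip(*weights)]
print(outputs)  # -> [1.65, -1.1] (up to floating-point rounding)
```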

Now compare that with our brain. In raw computational speed, a handful of neurons is no match for the billions of computational steps a modern processor goes through per second: neurons operate on the order of a few hundred ‘operations’ per second. The brain’s big trump card, however, is that it performs enormously many computational steps at the same time. It does not need to move numbers to and from memory (see also the box ‘A brain in silicon’), because information is processed and stored in the same place. That turns out to be very effective.

Researchers are now taking a cue from this and making computers more ‘parallel’, so that they can perform multiple computational steps at the same time. This can be done, for example, with so-called graphics processing units (GPUs), processors that originated in the gaming world and specialize in many parallel computing steps. GPUs sit in the back of Teslas, and AlphaGo also made use of them. But if computers are to become as efficient as the brain, a further step towards the brain’s architecture is needed. The transistor has to go overboard.

Schematic representation of a programmable ‘brain cell’ of gold nanoparticles (center), driven by eight surrounding electrodes. For now, the circuit operates at a temperature of -196 degrees Celsius.

Gold particle brain cells

The computer in Van der Wiel’s laboratory contains no transistors. ‘Computer’ is perhaps a big word anyway, because so far the circuit has at most twelve inputs and outputs, connected to a so-called nanomaterial of gold particles. With a little imagination, you can see it as a small collection of brain cells.

The gold particles are twenty nanometers in diameter and lie on an insulating substrate of silicon oxide. Electrodes run to the gold particles from different sides. Some of these are inputs that deliver a signal; based on these inputs, the network ultimately produces an output, a signal that can be passed on to a next network.

The circuits can be programmed by applying a certain voltage to the control electrodes. This changes the way the current flows through the gold particles: both its path and its resistance. Van der Wiel and colleagues have already shown that in this way it is possible to create ‘logic’ circuits which, in conventional computers, would require many more classical transistors.
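The idea can be sketched in code, with the heavy caveat that the toy model below is our invention and bears no relation to the physics of the real device: treat the circuit as a black box, then search for control voltages under which its response happens to reproduce a logic function such as XOR.

```python
# Toy model (invented for illustration): 'program' a black-box circuit by
# searching for control voltages that make it behave like an XOR gate.
import random

def device(inputs, controls):
    """Stand-in for the nanoparticle network: some fixed, nonlinear response
    to input and control voltages. The real device is a physical system."""
    x1, x2 = inputs
    c1, c2, c3 = controls
    return 1 if (x1 * c1 + x2 * c2 + x1 * x2 * c3) > 0.5 else 0

target = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}  # XOR truth table

found = None
for _ in range(10_000):  # random search; real work uses smarter optimization
    controls = [random.uniform(-2, 2) for _ in range(3)]
    if all(device(pattern, controls) == wanted for pattern, wanted in target.items()):
        found = controls
        break

print("control voltages found:", found)
```

Once suitable control voltages are found, the same physical structure computes a different function as soon as the voltages change – that is the programmability the group demonstrated.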

In the end, though, the point is not to mimic the classical computer. “It was a proof of principle of the programmability of the circuit. The ultimate application will not consist of these kinds of circuits,” says Van der Wiel.

Wilfred van der Wiel expects energy-efficient neural networks to make a difference especially in places where a lot of computing power is needed but only limited energy is available, such as the cameras in self-driving cars that have to recognize other road users.

Forgetful chips

One of the challenges is to make the system larger, so that it can take on more complex functions, says Van der Wiel. One brain cell does not make a brain, and the power of these kinds of systems lies precisely in the large number of units processing information simultaneously. The few hundred nanoparticles of Van der Wiel and colleagues look a bit meager next to the roughly hundred billion brain cells in a human brain. “We want to scale up, and that is a challenge. The question is how we give our network more connections, and how we connect several of those networks,” says Van der Wiel. “The signals from such a network also threaten to become immeasurably small.”

Another issue is the networks’ memory, which is currently lacking. First, the researchers ‘program’ the heap of gold nanoparticles by applying different voltages to the electrodes until the nanoparticles give the desired response. But after this learning phase, the material may gradually lose the desired program again. “We are now looking at how we can pin down that functionality, for example with so-called phase-change materials, which are also found in rewritable CDs.”

Applications, in other words, are still a long way off, and Van der Wiel does not dare to say how long they will take. “Five years, fifty years, who can tell? In any case, we have made significant progress over the last ten years. Back then, it seemed a remote prospect to me that we could teach chips with a random collection of nanoparticles a certain behavior. And that is exactly what is happening now.”

A resistor with memory

Researchers at the University of Groningen are also working on components for neuromorphic computer chips. They have now succeeded in creating a connection in a material with the same learning properties as a connection between brain cells in our brain. The special thing about such a connection is that it adapts and remembers its state: if many signals pass through it, for example, the connection automatically becomes stronger. In the absence of signals, it becomes weaker again.

In the computer world, such a component is called a memristor, a contraction of ‘memory’ and ‘resistor’. In Groningen, researchers make such a memristor from a piece of nickel on a substrate of strontium titanate. The resistance of this material turns out to depend on the current that has previously passed through it. The material can not only ‘learn’ but also ‘forget’, just as connections in the brain do. The Groningen researchers hope to shrink their memristors for use in a neuromorphic chip.
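As a cartoon of this behavior – our toy model, not the Groningen device; all numbers are invented – a memristor can be pictured as a resistor whose value drifts down a little every time current passes through it, and creeps back up when it is left alone:

```python
# Toy memristor model (illustrative): resistance drops a little with every
# signal ('learning') and relaxes back toward a resting value when idle
# ('forgetting').

class ToyMemristor:
    def __init__(self, resistance=100.0, floor=10.0, rest=100.0):
        self.resistance = resistance   # current state (ohms)
        self.floor = floor             # strongest possible connection
        self.rest = rest               # fully 'forgotten' state

    def pulse(self):
        """Each signal strengthens the connection: resistance goes down."""
        self.resistance = max(self.floor, self.resistance * 0.9)

    def idle(self, steps=1):
        """Without signals the connection weakens: resistance creeps back up."""
        for _ in range(steps):
            self.resistance += 0.05 * (self.rest - self.resistance)

m = ToyMemristor()
for _ in range(10):
    m.pulse()
print(f"after training: {m.resistance:.1f} ohm")   # low resistance: 'learned'
m.idle(steps=50)
print(f"after resting:  {m.resistance:.1f} ohm")   # drifted back: 'forgotten'
```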

Sources:

  • C. Kaspar, B.J. Ravoo, W.G. van der Wiel, S.V. Wegner & W.H.P. Pernice, The rise of intelligent matter, Nature (2021), doi: 10.1038/s41586-021-03453-y
  • T.F. Tiotto, A.S. Goossens, J.P. Borst, T. Banerjee & N.A. Taatgen, Learning to approximate functions using Nb-doped SrTiO3 memristors, Frontiers in Neuroscience (2021), doi: 10.3389/fnins.2020.627276
