Mathematics and computation

Experts debate the possible paths to human-like AI

01 Mar 2022 Sponsored by IOP Publishing

A stellar line-up of computer designers and bioengineers will go head-to-head in a virtual debate that will explore whether the biological brain should influence the design of future AI systems

It's all in the mind: Neuromorphic engineering takes inspiration from the human brain in an attempt to design more efficient computer architectures. (Courtesy: Shutterstock/Jackie Niam)

Scientists and engineers continue to push the boundaries of what can be achieved with artificial intelligence (AI), with the last few years seeing impressive gains in areas such as speech recognition and natural language processing. But experts agree that the current state-of-the-art still falls some way short of the thinking machines that are widely depicted in science fiction.

“AI is very good for solving very specific problems, as long as there is enough data to train the system,” says Yann LeCun, chief AI scientist at Meta, professor at New York University, and a Turing Award laureate for his research on deep learning. “But current systems do not understand how the world works, and that’s what is needed to realize transformative applications such as domestic robots, virtual assistants and fully autonomous self-driving cars.”

Scientists at the forefront of AI research are still figuring out how to make the paradigm shift from data-driven number crunching to more intuitive human-like thinking. Most researchers believe there is a role for computer algorithms that mimic the biological brain, with artificial neural networks becoming a mainstream approach for solving problems through trial and error rather than rules-based programming.
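The trial-and-error learning that distinguishes neural networks from rules-based programming can be sketched in a few lines. The example below is purely illustrative (not any system mentioned in the article): a single artificial neuron learns the logical AND function from examples alone, with no explicit rule ever written down.

```python
# A single perceptron learns logical AND by trial and error:
# it guesses, compares with the desired output, and nudges its weights.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # all input pairs
y = np.array([0, 0, 0, 1])                      # target: logical AND

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # "synaptic" weights, randomly initialized
b = 0.0                  # bias
lr = 0.1                 # learning rate

for epoch in range(20):
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)   # fire if weighted sum crosses zero
        err = yi - pred              # mismatch with the desired output
        w += lr * err * xi           # nudge weights toward the target
        b += lr * err

print([int(w @ xi + b > 0) for xi in X])  # → [0, 0, 0, 1]
```

No rule for AND appears anywhere in the code; the behaviour emerges from repeated correction, which is the essence of the approach described above.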

“Over the last few years there has been a lot of progress in self-supervised learning, where a system can learn to represent the data for a specific task without being trained,” says LeCun. “Once self-supervised learning can work more generally, machines will be able to learn how the world works by watching videos – opening the door to solving problems much more simply than we can today.”
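The key idea in self-supervised learning is that the training signal comes from the data itself rather than from human-provided labels. The toy sketch below (an assumption-laden stand-in for the video-prediction systems LeCun describes) trains a linear model to predict the next sample of a time series from its recent past; the "labels" are simply future values of the same signal.

```python
# Self-supervised learning in miniature: predict the next sample of a
# signal from its recent past. No external labels are involved -- the
# targets are extracted from the data itself.
import numpy as np

t = np.arange(0, 20, 0.1)
signal = np.sin(t)                      # stand-in for raw sensory data

k = 5                                   # context window length
X = np.array([signal[i:i + k] for i in range(len(signal) - k)])
y = signal[k:]                          # "labels" are just future samples

w, *_ = np.linalg.lstsq(X, y, rcond=None)   # fit a next-step predictor

pred = X @ w
print(f"mean squared prediction error: {np.mean((pred - y) ** 2):.2e}")
```

A real system would replace the linear model with a deep network and the sine wave with video frames, but the self-supervision recipe (mask or withhold part of the data, predict it from the rest) is the same.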

While LeCun believes that neural networks running on standard computer chips offer the best route to building next-generation AI systems, others contend that building processors that mimic the functionality of the biological brain would yield more powerful AI systems that also consume less energy.

Such “neuromorphic” processing systems typically exploit analogue circuits to create artificial silicon neurons, and mixed-signal analogue/digital circuits to implement spiking neural networks. These neuromorphic circuits are designed to replicate the way that synapses and neurons in the brain light up during neurological activity.
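The spiking neurons these circuits implement are often described by the textbook leaky integrate-and-fire (LIF) model: the membrane potential leaks toward rest, integrates incoming current, and emits a spike when it crosses a threshold. The sketch below simulates one such neuron in software; all parameter values are arbitrary illustrative choices, not those of any real chip.

```python
# A minimal leaky integrate-and-fire (LIF) neuron, the standard
# abstraction behind many spiking neuromorphic circuits.
dt = 1e-4          # simulation time step (s)
tau = 0.02         # membrane time constant (s)
v_thresh = 1.0     # spike threshold
v_reset = 0.0      # potential after a spike

v = 0.0            # membrane potential
current = 60.0     # constant input current (arbitrary units)
spikes = []

for step in range(5000):               # simulate 0.5 s
    # potential leaks toward rest while integrating the input:
    # dv/dt = -v/tau + I
    v += dt * (-v / tau + current)
    if v >= v_thresh:                  # threshold crossing -> emit a spike
        spikes.append(step * dt)
        v = v_reset

print(f"{len(spikes)} spikes in 0.5 s")
```

In an analogue neuromorphic circuit, this differential equation is not computed step by step in software; it is realized directly by the physics of the silicon, which is the hardware/algorithm merging Indiveri describes below.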

“In our brain there is no distinction between an abstract algorithm and the computing substrate,” explains Giacomo Indiveri, director of the Institute of Neuroinformatics at the University of Zurich and ETH Zurich, and editor-in-chief of a new open-access journal, Neuromorphic Computing and Engineering. “We separate the software and hardware in computer science, but in neuromorphic computing there is a merging of the two. To achieve the optimal solution we need to co-design the architecture and the computing substrate.”

Advocates for neuromorphic computing believe that in certain situations these bio-inspired systems have the potential to outperform standard digital technologies. “Neuromorphic systems can be implemented as massively parallel architectures in which artificial neurons are in different states and ready to fire within a few microseconds, providing a quicker response than can typically be achieved with conventional digital computation,” says Indiveri. “Analogue/digital spiking neuromorphic architectures have also been shown to consume power in the microwatt range, orders of magnitude lower than conventional digital processors.”
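Part of the claimed efficiency comes from event-driven operation: a conventional dense layer touches every weight on every time step, whereas a spiking system only does work when a neuron actually fires. A back-of-the-envelope operation count (with arbitrary illustrative numbers) shows the scale of the saving:

```python
# Why sparse, event-driven processing can save work: count the
# multiply-accumulate operations a dense layer performs versus an
# event-driven layer where only ~2% of neurons fire per step.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out, steps = 1000, 1000, 100
activity = 0.02                         # fraction of inputs spiking per step

dense_ops = n_in * n_out * steps        # every input drives every output

spiking_ops = 0
for _ in range(steps):
    fired = rng.random(n_in) < activity     # which inputs spiked this step
    spiking_ops += int(fired.sum()) * n_out  # only spikes propagate

print(f"dense: {dense_ops:.1e} ops, event-driven: {spiking_ops:.1e} ops")
print(f"saving: roughly {dense_ops / spiking_ops:.0f}x fewer operations")
```

This counts arithmetic only, not memory traffic or analogue-circuit effects, so it understates some costs and overstates others; but it illustrates why sparse spiking activity maps to low power.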

Indiveri believes that the sweet spot for neuromorphic systems lies in applications that require low latency and low power, such as localized processing of sensor data. “It’s more suited to processing continuous streaming data in real time,” he says. “That might be a vision system for gesture recognition, or detecting whether someone has fallen in the home. For biomedical signal monitoring, a low-power neuromorphic system implemented on a ‘wear-and-forget’ wristband could detect any anomalies and raise an alert, without needing to be connected to a mobile phone.”
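The always-on monitoring Indiveri describes can be illustrated with a conventional streaming algorithm (here plain Python rather than neuromorphic hardware, and with an invented test signal): maintain running statistics of the signal and raise an alert when a sample deviates far from them.

```python
# Sketch of low-latency streaming anomaly detection: flag samples that
# deviate strongly from exponentially weighted running estimates of the
# signal's mean and variance. Thresholds and signal are illustrative.
import math

def stream_alerts(samples, threshold=4.0, alpha=0.02):
    """Yield indices where a sample deviates more than `threshold`
    standard deviations from the running estimate."""
    mean, var = 0.0, 1.0
    for i, x in enumerate(samples):
        sigma = math.sqrt(var)
        if i > 50 and abs(x - mean) > threshold * sigma:
            yield i                    # raise an alert
        # update running statistics (exponential moving estimates)
        mean = (1 - alpha) * mean + alpha * x
        var = (1 - alpha) * var + alpha * (x - mean) ** 2

# a steady periodic signal with one injected anomaly at index 300
signal = [math.sin(0.2 * i) for i in range(500)]
signal[300] += 10.0
print(list(stream_alerts(signal)))   # the spike at index 300 is flagged
```

Each sample is processed once and then discarded, so memory and latency stay constant regardless of how long the stream runs; a neuromorphic implementation aims for the same always-on behaviour at microwatt power.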

But LeCun is not convinced that neural computing needs to be neuromorphic to be effective. “I am interested, but I am sceptical,” he says. “The question is whether you are better off exploiting the progress of digital technology or trying to follow the neuromorphic philosophy.”

While LeCun agrees that neuromorphic systems could play a role in processing sensor data at the edge of the network, he believes that current analogue technologies have fundamental disadvantages for building larger neural nets. “There may be a good reason for the brain to produce spikes, but it might not translate to electronics and software,” he says.

These differing points of view will be aired in a virtual debate entitled “The future of high-performance computing: are neuromorphic systems the answer?”, which can be watched live on Monday 7 March 2022 at 4.00 pm GMT and then subsequently on-demand. LeCun will be in the sceptics’ corner, along with Bill Dally, a professor at Stanford University and chief scientist at NVIDIA – a company that designs the graphics processors that underpin many AI systems.

On the other side of the debate will be Kwabena Boahen, founder and director of the Brains in Silicon laboratory at Stanford University. Boahen and his team are developing silicon-based artificial neurons to emulate the way the brain works, and have demonstrated Neurogrid – a circuit board composed of 16 chips that each includes analogue circuitry for more than 65,000 artificial neurons.

“We’ve built hardware models of neural systems to learn about how the brain functions, which lets us test ideas about how cognition could come from the properties of neurons,” Boahen explained in an article for Stanford University. “Informed by what these models have taught us, we’re now working on developing a computer that works more efficiently, like the brain does.”

He will be joined by Ralph Etienne-Cummings, director of the Computational Sensory-Motor Systems Laboratory at Johns Hopkins University. Etienne-Cummings has studied bio-inspired vision sensors and their use in robots, and more recently brain–machine interfaces and neural prosthetics that are designed to restore function after injury or to overcome disease. His wide-ranging research has convinced him of the need for neuromorphic computing to “perform recognition tasks as effortlessly as living organisms, create legged robots that are as efficient and elegant as humans, and design prosthetics that can seamlessly interface with the body.”

The discussion will be chaired by Regina Dittmann, an expert in memristive devices who is currently at the Peter Grünberg Institute of the Forschungszentrum Jülich in Germany. With Boahen and Etienne-Cummings attempting to convince LeCun and Dally of the benefits of neuromorphic computing over mainstream neural approaches, the session promises to be a friendly, but fiercely contested, debate. You can register for free now.

While the idea of neuromorphic engineering is rooted in concepts first proposed in the 1980s by, among others, microelectronics pioneer Carver Mead, the field has expanded over the years as new approaches have emerged. The term “neuromorphic” now describes any type of analogue, digital or mixed-mode implementation of a neural system, which includes physical devices that are designed to replicate the synapses in the brain, as well as digital chips – such as those demonstrated by Intel and IBM – that provide an electronic implementation of a spiking neural network.

The new journal, Neuromorphic Computing and Engineering, aims to represent this diversity of thought. “It’s the first journal that has tried to encompass all the different aspects of the field, from basic research about the brain to nanoscale technologies and high-performance computing that can hopefully exploit some of the concepts from spiking neural networks,” says Indiveri.

“We have received a high level of submissions and the articles published so far have had a good number of citations. I am happy to see that there is a lot of interest in the area in general and in bringing the different communities together through the journal.”

Register now for the online debate, which will be live on Monday 7 March at 16.00–17.30 GMT and subsequently available to view on-demand.

Copyright © 2024 by IOP Publishing Ltd and individual contributors