By studying an artificial neural network, researchers in the US may have gained a better understanding of how and why our memories fade over time. Led by Ulises Pereira-Obilinovic at New York University, the team has found evidence that the stable, repeating neural patterns associated with newer memories transform into more chaotic patterns over time, and eventually fade to random noise. This could be a mechanism used by our brains to clear space for new memories.
In some models of the brain, memories are stored in repeating patterns of information exchange called “attractor networks”. These form within webs of interconnected nodes that are used to represent the neurons in our brains.
These nodes convey information by emitting signals at specific firing rates. Nodes that receive signals then generate signals of their own, thereby exchanging information with their neighbours. The strength of each exchange is weighted by the degree of synchronization between the pair of nodes involved.
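In simplified rate models of this kind, each node’s firing rate can be written as a nonlinear function of the weighted sum of the signals it receives. The sketch below is a generic toy illustration in Python; the network size, random weights and update rule are assumptions chosen for demonstration, not the model used in the study.

```python
import numpy as np

# Toy rate network: each node's firing rate is a nonlinear function of the
# weighted sum of the signals arriving from its neighbours.
# (Illustrative sketch only; not the model used in the study.)
rng = np.random.default_rng(0)
n_nodes = 100

# Random connection weights between every pair of nodes (no self-connections)
weights = rng.normal(0.0, 1.0 / np.sqrt(n_nodes), size=(n_nodes, n_nodes))
np.fill_diagonal(weights, 0.0)

rates = rng.uniform(0.0, 1.0, n_nodes)     # initial firing rates

def step(rates, weights):
    """One round of information exchange: weighted input -> new firing rate."""
    total_input = weights @ rates           # signals arriving at each node
    return np.tanh(total_input)             # nonlinear response of each node

for _ in range(50):
    rates = step(rates, weights)
```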
Stable patterns
Attractor networks form when an external input is applied to a neural network, assigning an initial firing rate to each of its nodes. These firing rates evolve as the weights between pairs of nodes readjust, and eventually settle into stable, repeating patterns.
To retrieve a memory, researchers can then apply an external cue that is similar to the original input, which kicks the neural network into the relevant attractor network. Multiple memories can be imprinted onto a single neural network, which naturally switches between stable attractor networks over time – until an external cue is provided.
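A minimal way to see both steps, imprinting a memory and then recalling it from a similar cue, is a textbook Hopfield-style network, sketched below. This is a generic illustration rather than the researchers’ actual simulation; the pattern sizes and update rule are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, n_memories = 200, 3

# Memories represented as +/-1 activity patterns (Hopfield-style toy model).
patterns = rng.choice([-1, 1], size=(n_memories, n_nodes))

# Imprinting: strengthen the weight between every pair of co-active nodes.
weights = (patterns.T @ patterns) / n_nodes
np.fill_diagonal(weights, 0.0)

def retrieve(cue, weights, steps=20):
    """Let the network settle from an external cue into an attractor state."""
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(weights @ state)
        state[state == 0] = 1               # break ties consistently
    return state

# Cue the network with a corrupted copy of the first memory (20% of nodes flipped).
cue = patterns[0].copy()
flipped = rng.choice(n_nodes, size=n_nodes // 5, replace=False)
cue[flipped] *= -1

recalled = retrieve(cue, weights)
print("overlap with stored memory:", (recalled @ patterns[0]) / n_nodes)  # ~1.0
```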
These systems have their limits, however. If too many attractor networks are stored on the same neural network, it may suddenly become too noisy for any of them to be retrieved, and all its memories will be forgotten at once.
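In the standard Hopfield model, for example, this breakdown sets in once the number of stored patterns exceeds roughly 0.14 times the number of nodes. The toy sketch below illustrates the effect; the numbers are arbitrary and not taken from the new work.

```python
import numpy as np

rng = np.random.default_rng(2)
n_nodes = 200

def recall_quality(n_memories):
    """Store n_memories patterns, then test whether the first one is still stable."""
    patterns = rng.choice([-1, 1], size=(n_memories, n_nodes))
    weights = (patterns.T @ patterns) / n_nodes
    np.fill_diagonal(weights, 0.0)

    state = patterns[0].copy()              # cue with the memory itself
    for _ in range(20):
        state = np.sign(weights @ state)
        state[state == 0] = 1
    return (state @ patterns[0]) / n_nodes  # overlap of 1.0 means perfect recall

# Far below capacity recall is near-perfect; far above it, retrieval breaks down.
for n_memories in (5, 20, 60):
    print(n_memories, "memories -> overlap", round(recall_quality(n_memories), 2))
```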
Losing memories
To prevent this from happening, Pereira-Obilinovic’s team suggests that our brains must have evolved a mechanism for losing memories over time. To test this theory, the trio, which also included Johnatan Aljadeff at the University of Chicago and Nicolas Brunel at Duke University, simulated neural networks in which the weights between connected nodes in an attractor network gradually diminish as new memories are imprinted.
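One simple way to sketch this kind of forgetting is a “palimpsest” scheme, in which the existing weights are multiplied by a factor slightly less than one each time a new memory is imprinted, so the contributions of older memories gradually wash out. The decay factor and network size below are arbitrary assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n_nodes = 200
decay = 0.9          # each new memory weakens the old weights (arbitrary value)

weights = np.zeros((n_nodes, n_nodes))
stored = []

# Imprint memories one after another, letting older contributions fade.
for _ in range(30):
    pattern = rng.choice([-1, 1], size=n_nodes)
    weights = decay * weights + np.outer(pattern, pattern) / n_nodes
    np.fill_diagonal(weights, 0.0)
    stored.append(pattern)

def still_remembered(pattern):
    """Cue with the memory itself and check whether it remains an attractor."""
    state = pattern.copy()
    for _ in range(20):
        state = np.sign(weights @ state)
        state[state == 0] = 1
    return (state @ pattern) / n_nodes      # overlap near 1.0 = still retrievable

print("oldest memory :", round(still_remembered(stored[0]), 2))   # fades toward noise
print("newest memory :", round(still_remembered(stored[-1]), 2))  # remains retrievable
```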
They found that this caused older attractor networks to shift into increasingly chaotic states over time, marked by rapidly fluctuating firing patterns that never quite repeat and that coexist far more easily with newer, stable attractor networks. Eventually, this growing randomness causes older attractor networks to dissolve into random noise, and the memories they carry are forgotten.
Altogether, the researchers hope their theory could help to explain how our minds are able to constantly take in new information, at the price of losing older memories. Their insights could help neurologists to better understand how our brains store and retrieve memories, and why they ultimately fade over time.
The research is described in Physical Review X.