Matrix factorization algorithms help track neuronal activity

07 May 2020 Isabelle Dumé
Neurons. (Courtesy: iStock/ktsimage)

A new technique that efficiently retrieves scattered light from fluorescent sources can be used to record neuronal signals coming from deep within the brain. The technique, developed by physicists at Sorbonne University in Paris, France, uses matrix factorization algorithms to overcome the fact that opaque biological tissues are strong scatterers of visible light, and thus hard to image except at shallow depths.

Brain imaging has traditionally relied on non-optical techniques such as X-ray computed tomography and magnetic resonance angiography. Thanks to its unprecedented combination of contrast, resolution and specificity, fluorescence-based imaging in the visible and near-infrared regions of the electromagnetic spectrum (400–900 nm) is an attractive alternative – especially for studying information processing by neurons. There is, however, a serious drawback: neuronal tissues are opaque at these wavelengths, quickly scattering any incident light. This opacity limits optical imaging techniques to depths of a few hundred microns, which corresponds to a few scattering lengths.

While researchers have developed techniques to focus light and image at greater depths, most of these methods rely on complex wavefront-shaping techniques. These techniques are also time-consuming, which means that they cannot be used to monitor real-time neuronal activity in the brain.

Analysing the activity of deeply buried neurons

The new approach, developed by Claudio Moretti and Sylvain Gigan in the Kastler-Brossel Laboratory, is different in that it does not aim to retrieve an image of a fluorescent object, nor indeed to localize its position. Instead, it relies on analysing the activity of deeply buried fluctuating sources – in this case, the functional activity of a set of neurons – by recording their fluorescence.

The researchers exploit the fact that each source will generate an extended, low-contrast but well-defined pattern of light after scattering through a thick opaque medium such as brain tissue. This so-called speckle pattern can be imaged at a detector or camera.

Moretti and Gigan have shown that they can use these fluctuating speckle patterns to extract functional signals from fluorescence sources, even when the light has passed through highly scattering tissue. They did this by making use of an advanced signal-processing algorithm known as non-negative low-rank matrix factorization.

Characteristic fingerprint

This algorithm is fairly widely employed in image and music analysis, as well as in other areas of machine learning and big data, Gigan explains. It essentially tries to factor a large matrix – the sequence of images recorded – into a product of two “thin” rectangular matrices.

“The ‘non-negative’ in this context means that we are looking for a solution in which both smaller matrices have positive coefficients,” he says. “This approach enormously simplifies the computations and the algorithm finds a solution even if there is a lot of noise in the data (as in our case).”

The main reason why such a factorization technique works, Gigan adds, is that the huge matrix they are trying to factor can be assumed to come from a limited number of sources (the neurons), each of which has its own characteristic “fingerprint”.

“While such an algorithm has been used in neuroscience applications before, it has never been applied to such a stringent scenario,” he tells Physics World. “In our experiment, the signals are completely mixed by the propagation of the functional light signals through the scattering medium – something that precludes direct imaging. We have shown that the algorithm effectively ‘de-mixes’ these signals so they can be efficiently retrieved.”
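The de-mixing idea can be sketched in a toy simulation – this is an illustration, not the authors’ code, and every name and size in it is an assumption. Each simulated source gets a fixed, random, non-negative speckle “fingerprint” and a fluctuating activity trace; the camera frames are the noisy mixture of all fingerprints; a plain Lee–Seung non-negative matrix factorization then recovers a low-rank model of the recording:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the paper): 3 sources, 400-pixel
# speckle patterns, 200 camera frames.
n_sources, n_pixels, n_frames = 3, 400, 200

# Each source has a fixed random non-negative speckle fingerprint
# and a fluctuating non-negative activity trace.
fingerprints = rng.random((n_pixels, n_sources))   # spatial patterns
activity = rng.random((n_sources, n_frames)) ** 2  # temporal traces

# The camera records only the mixture of all fingerprints, weighted
# by each source's instantaneous activity, plus a little noise.
frames = fingerprints @ activity + 0.01 * rng.random((n_pixels, n_frames))

def nmf(V, rank, n_iter=500, eps=1e-9):
    """Lee-Seung multiplicative updates for V ≈ W @ H, all entries >= 0."""
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        # Multiplicative updates keep W and H non-negative by construction.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = nmf(frames, rank=n_sources)
rel_err = np.linalg.norm(frames - W @ H) / np.linalg.norm(frames)
print(f"relative reconstruction error: {rel_err:.3f}")
```

Because the recording is (approximately) a sum of only a few fingerprint-times-activity terms, the low-rank factorization can pull the per-source temporal traces back out even though no pixel sees any single source in isolation.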

Proof-of-principle experiment

To test their approach, the researchers designed a proof-of-principle experiment in which they simulated the activity of a small network of synthetic neurons made from fluorescent beads 10 microns in size – roughly the same size as most neuronal cell bodies. They placed these beads in an ex vivo mouse skull that was around 300 microns thick and had a scattering length of about 40 microns.

They then excited the beads using blue laser light and collected the resulting fluorescence speckles using first a microscope objective and then a camera. Finally, they used their algorithm to extract information about the light emission and how it varied with time.

The technique proves that even strong light scattering does not fully destroy the information carried by the light, and that this information can be retrieved using computational means, Gigan says. “Using fluorescent beads that we could excite at will allows us to understand the physics at hand and determine the limitations of the technique,” he adds. “The most obvious application for the technique is in neuroscience optogenetic studies, but we hope that it will be used in fields outside of biomedical imaging too, such as in sensing, for example.”

The Sorbonne team, who report their research in Nature Photonics, are now working with biologists to apply the technique to real neurons in living systems. An important caveat is that they did not actually image the neurons or localize them, but only recorded their activity. “We don’t really know where the neurons are or what they look like, and this is definitely a problem that we hope we can crack in the future,” Gigan says.

Copyright © 2020 by IOP Publishing Ltd and individual contributors