Detectors and sensors


Perovskite sensor sees more like the human eye

18 Jan 2021 Isabelle Dumé
Bio-inspired design for photosensitive perovskite-based capacitors could enable light sensors that respond only to movement. (Courtesy: Pixabay)

A new type of sensor that closely mimics how the human eye responds to changing visual stimuli could become the foundation for next-generation computer processors used in image recognition, autonomous vehicles and robotics. The so-called “retinomorphic” device is made from a class of semiconducting materials known as perovskites, and unlike a conventional camera, it is sensitive to changes in levels of illumination rather than the intensity of the input light.

The eyes of humans and other mammals are incredibly complex organs. Our retinas, for example, contain roughly 10⁸ photoreceptors, yet our optic nerves transmit only about 10⁶ signals to the primary visual cortex – meaning that the retina does a lot of pre-processing before it transmits information.

Part of this pre-processing relates to how the eye treats moving objects. When our field of view is static, our retinal cells are relatively quiet. Expose them to spatially or temporally varying signals, however, and their activity shoots up. This selective response – transmitting signals only in response to change – enables the retina to substantially compress the information it passes on.

Mimicking mammalian visual processing

In recent years, this mammalian optical sensing process has caught the attention of computer scientists. Traditional computer processors – known as von Neumann machines after the mathematical physicist John von Neumann, who pioneered their development in the mid-20th century – deal with input instructions in a sequential fashion. In contrast, the mammalian brain processes inputs via massively parallel networks, and studies have shown that computers that follow suit – neuromorphic computers – should outperform von Neumann machines for certain machine-learning tasks in terms of both speed and power consumption.

Retinomorphic sensors – optical devices that attempt to mimic mammalian visual processing – are a potential building block for such computers, and thin-film semiconductors such as metal halide perovskites are considered good candidates for making them. Materials of this type are attractive because they can be tuned to absorb light over a wide range of wavelengths. They have also already proved themselves in artificial synapses that react to light, albeit in structures that are generally designed for transmitting and processing information rather than for optical sensing. However, while researchers have previously used perovskites to make optical sensors that mimic the geometry of the eye, the fundamental operating mode of these sensors still requires sequential processing.

Spiking sensor

A team led by John Labram at Oregon State University in the US has now shown that a simple photosensitive capacitor can reproduce some characteristics of mammalian retinas. The new device is made from a double-layer dielectric: the bottom layer, silicon dioxide, is highly insulating and hardly responds to light, while the top layer is the light-sensitive perovskite methylammonium lead iodide (MAPbI3).

The team found that the capacitance of this MAPbI3-silicon dioxide bilayer changes dramatically when exposed to light. When Labram and his student Cinthya Trujillo Herrera placed it in series with an external resistor and exposed it to a light source, they observed a large voltage spike across the resistor. Unlike in a normal camera or photodiode, however, this voltage spike quickly decayed away even though the intensity of the light remained constant. The result is a sensor that responds, like the retina, to changes in light levels rather than the intensity of the light.
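This spike-and-decay behaviour falls out of basic circuit analysis: if the capacitance jumps when the light switches on, the voltage across the series resistor spikes and then relaxes back towards zero even under constant illumination. The sketch below illustrates the idea with a simple Euler integration; the component values and the step change in capacitance are illustrative assumptions, not the authors' measured parameters.

```python
# Illustrative sketch (not the authors' model): a constant bias across a
# resistor in series with a capacitor whose capacitance jumps when the
# light turns on, loosely mimicking the MAPbI3-SiO2 bilayer.
# All component values below are arbitrary assumptions.

def simulate(v_bias=1.0, r=1e6, c_dark=1e-9, c_light=5e-9,
             t_on=0.005, t_total=0.02, dt=1e-6):
    """Euler integration of dq/dt = (v_bias - q/C(t)) / R.
    Returns lists of times and resistor voltages v_R = v_bias - q/C."""
    q = c_dark * v_bias          # start at the dark-state equilibrium
    times, v_r = [], []
    for i in range(int(t_total / dt)):
        t = i * dt
        c = c_light if t >= t_on else c_dark   # light switches on at t_on
        v_cap = q / c
        q += dt * (v_bias - v_cap) / r
        times.append(t)
        v_r.append(v_bias - v_cap)             # voltage across the resistor
    return times, v_r

times, v_r = simulate()
# Before t_on the output is ~0; when the light turns on, the capacitance
# jump produces a voltage spike that then decays with time constant R*C,
# even though the illumination stays constant.
```

In this toy model the output is zero in both steady states (dark and bright); only the transition produces a signal, which is the retina-like property the article describes.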

Filtering out unimportant information

After measuring the light response of several such devices, the team developed a numerical model based on Kirchhoff’s laws to show how the devices would behave if they were arranged in arrays. This model enabled them to simulate an array of retinomorphic sensors and predict how a retinomorphic video camera would react to different types of input stimuli. One of their tests involved analysing footage of a bird flying into view (see https://aip.scitation.org/doi/10.1063/10.0002944 for image). When the bird stopped at a (static, and therefore invisible to the sensor) bird feeder, it all but disappeared. Once the bird took off, it reappeared – and, in the process, revealed the presence of the feeder, which became visible to the sensor only when the bird’s take-off set it swaying.
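The qualitative behaviour of such an array can be captured with a per-pixel temporal high-pass filter: each pixel responds to the change in its illumination and that response decays over time. This is a hedged, much-simplified stand-in for the authors' Kirchhoff-law circuit model, with an arbitrary decay factor, but it reproduces the bird-feeder effect of static objects fading from view.

```python
# Simplified stand-in for the array simulation (not the authors' actual
# Kirchhoff-law model): each pixel outputs the frame-to-frame change in
# intensity plus a decaying memory of past changes. The decay factor is
# an arbitrary assumption.

def retinomorphic_frames(frames, decay=0.5):
    """frames: list of 2D lists of pixel intensities.
    Returns one output frame per input frame, responding only to change."""
    rows, cols = len(frames[0]), len(frames[0][0])
    state = [[0.0] * cols for _ in range(rows)]   # decaying pixel response
    prev = frames[0]
    outputs = []
    for frame in frames:
        out = [[0.0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                state[r][c] = state[r][c] * decay + (frame[r][c] - prev[r][c])
                out[r][c] = state[r][c]
        prev = frame
        outputs.append(out)
    return outputs

# A "bird" (bright pixel) appears in frame 1 and then sits still: the
# output spikes when it appears, then decays toward zero - the static
# bird fades from view, just like the feeder in the article's test.
dark = [[0.0, 0.0], [0.0, 0.0]]
bird = [[1.0, 0.0], [0.0, 0.0]]
outs = retinomorphic_frames([dark, bird, bird, bird, bird])
```

Running this, the bird pixel's output spikes to 1.0 in the frame where it appears and halves every frame thereafter, while pixels whose illumination never changes stay at zero throughout.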

“The new design thus inherently filters out unimportant information, such as static images, providing a voltage solely in response to movement,” Labram explains. “This behaviour reasonably reflects optical sensing in mammals.”

The researchers, who report their work in Appl. Phys. Lett., say they now plan to better understand the fundamental physics of these devices and how their signals would be interpreted by image-recognition algorithms. They also hope to address some of the challenges associated with scaling these devices up to sensor arrays. “Going from a brand-new device paradigm to a working array is almost certainly going to expose challenges we haven’t yet considered,” says Labram. “There are also quite a few operation-related questions we will have to answer – in particular as regards performance limits, stability, predictability and device-to-device variability,” he tells Physics World.


Copyright © 2024 by IOP Publishing Ltd and individual contributors