
Microchip produces laser light of different colours

Years of research and development work have made compact, high-performance lasers ubiquitous at the near-infrared wavelengths used in telecommunications. However, lasers with an output in the visible range of the spectrum rely mainly on atomic or solid-state systems, which often require bulky table-top set-ups or a series of different semiconductor materials to function.

A team of researchers at NIST in the US has now addressed this problem by making a microchip-based laser that converts near-infrared laser light into any of a range of visible light colours – including red, orange, yellow and green – using a technique called third-order optical parametric oscillation (OPO). The device could allow for a host of applications in areas such as spectroscopy, precision timekeeping and quantum information science.

Nonlinear material converts incident light into two different frequencies

The principle behind the third-order OPO technique is to use a nonlinear material – in this case, silicon nitride (Si3N4) – to convert incident light in the near-infrared into two different frequencies. One of these output frequencies is higher than the frequency of the incident light, in the visible range, while the other is lower and lies deeper in the infrared.

In the nonlinear regime, light with a high enough intensity exits an optical material with a wavelength that does not necessarily match the wavelength of the light that entered it. This is because bound electrons in the material re-radiate the light at frequencies that differ from those of the incident light. This behaviour is very different from that of an ordinary (linear) optical material, which radiates the same colour (think of light bouncing off a mirror or refracting through a lens).

Microresonators produce output light of different colours

In their work, researchers led by Kartik Srinivasan directed a beam of near-infrared laser light into a microresonator. This is a ring-shaped structure with a radius of 50 microns fabricated on a silicon chip. The light inside this device circulates roughly 5000 times before dissipating, building enough intensity as it does so to access the nonlinear regime – where it is converted to the two different output frequencies.

The researchers fabricated several of these microresonators on a single chip, each with slightly different dimensions. By carefully choosing these dimensions, they were able to ensure that different microresonators would produce different-coloured output light. This approach allowed a single near-infrared laser operating over a narrow range of wavelengths (from 780 nm to 790 nm) to generate visible-light colours ranging from green to red (560 nm to 760 nm), and infrared wavelengths ranging from 800 nm to 1200 nm. Any one of these output wavelengths can be accessed simply by selecting a resonator with the right dimensions, Srinivasan explains.
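This tunability follows from energy conservation in the third-order process: two pump photons convert into one signal (visible) photon and one idler (infrared) photon, so 2/λpump = 1/λsignal + 1/λidler. A quick check against the quoted wavelengths (a back-of-the-envelope illustration, not the NIST team’s design code):

```python
# Energy conservation in third-order OPO (four-wave mixing with a
# degenerate pump): 2/lambda_pump = 1/lambda_signal + 1/lambda_idler.
# All wavelengths in nm; values are taken from the article.
def idler_wavelength(pump_nm, signal_nm):
    return 1 / (2 / pump_nm - 1 / signal_nm)

# A 780 nm pump paired with a 760 nm (red) signal implies an idler just
# above 800 nm, matching the near-infrared end of the reported range
print(round(idler_wavelength(780, 760)))  # -> 801
```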

While the work is currently still at the proof-of-principle stage, the researchers hope to combine their nonlinear optics technique with well-established near-infrared laser technology to create new types of on-chip light sources for a variety of applications. They report their work in Optica.

Machine learning spots topological phase transitions in experimental data

A machine-learning tool called diffusion maps has been used to identify topological phase transitions in experimental data. The research was done by teams led by Mordechai Segev and Ronen Talmon at the Technion-Israel Institute of Technology, who report their results in Physical Review Letters. Their method requires no prior knowledge about the system, and it found a phase transition in the data that was not predicted by current theory. Their method can potentially help analyse data from complex quantum many-body experiments and improve our understanding of topological phases.

A recent addition to the familiar solid, liquid or gaseous phases of matter, topological phases have a non-local nature that makes them both interesting and challenging to detect. Technion’s Eran Lustig, one of the lead researchers on this work, uses the analogy of a tornado: one cannot tell that a tornado is a huge swirling vortex from just a tiny patch of it. Topological phases are typically identified by studying the unique evolution of edge states of the system, which requires access to a significant part of the material.

Or Yair, the other lead researcher on the project, explains that the diffusion-maps algorithm is well suited to detecting non-local signatures of topological phases. Given a set of experimental data points, it checks the local neighbourhood of a point to find nearby points, then gradually zooms out to find relationships with points that appear far apart.
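To make this concrete, here is a minimal diffusion-maps sketch (a generic textbook implementation, not the Technion groups’ code; the kernel width eps, the diffusion time t and the toy data are arbitrary choices):

```python
import numpy as np

# Diffusion maps: Gaussian affinities define a random walk on the data;
# the leading eigenvectors of that walk give a low-dimensional embedding.
def diffusion_map(X, eps, n_components=2, t=1):
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # pairwise d^2
    K = np.exp(-d2 / eps)                  # affinity to nearby points only
    P = K / K.sum(axis=1, keepdims=True)   # row-stochastic diffusion operator
    evals, evecs = np.linalg.eig(P)        # spectrum is real for this kernel
    order = np.argsort(-evals.real)
    evals, evecs = evals.real[order], evecs.real[:, order]
    # Skip the trivial constant eigenvector (eigenvalue 1). Raising the
    # eigenvalues to the power t is the "zooming out": larger t suppresses
    # fine local detail and keeps only the slowest, most global directions.
    return evecs[:, 1:n_components + 1] * evals[1:n_components + 1] ** t

# Toy usage: a noisy circle embedded in 3D is recognised as a ~1D loop
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
X = np.c_[np.cos(theta), np.sin(theta), 0.1 * np.random.randn(200)]
print(diffusion_map(X, eps=0.5).shape)  # (200, 2)
```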

Haldane model

The researchers first tested their method on simulated data from a system with a known phase transition, called the Haldane model. This is a hexagonal lattice system with interactions between nearest neighbours and next-nearest neighbours, and its two triangular sublattices can be staggered in energy. As the amount of staggering passes a critical value, the system undergoes a phase transition from a topological phase to a normal phase.
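To see how such a transition shows up numerically, here is a minimal sketch (our illustration, not the Technion groups’ code): it builds the standard two-band Bloch Hamiltonian of the Haldane model and computes the Chern number of the lower band on a discretized Brillouin zone using the Fukui–Hatsugai lattice method. The invariant jumps from ±1 to 0 as the sublattice staggering M crosses the critical value 3√3·t2·sin φ; the hopping strengths and grid size are arbitrary choices.

```python
import numpy as np

# Haldane model: honeycomb lattice with nearest-neighbour hopping t1,
# complex next-nearest-neighbour hopping t2*exp(i*phi) and staggering M.
a1 = np.array([1.5, -np.sqrt(3) / 2])   # Bravais lattice vectors
a2 = np.array([1.5,  np.sqrt(3) / 2])
bs = [a1, a2 - a1, -a2]                 # next-nearest-neighbour loop vectors

def bloch_h(k, t1=1.0, t2=0.1, phi=np.pi / 2, M=0.0):
    """2x2 Bloch Hamiltonian in a gauge that is exactly BZ-periodic."""
    f = t1 * (1 + np.exp(-1j * (k @ a1)) + np.exp(-1j * (k @ a2)))
    h0 = 2 * t2 * np.cos(phi) * sum(np.cos(k @ b) for b in bs)
    hz = M - 2 * t2 * np.sin(phi) * sum(np.sin(k @ b) for b in bs)
    return np.array([[h0 + hz, f], [np.conj(f), h0 - hz]])

def link(a, b):
    """Normalised overlap between neighbouring Bloch eigenvectors."""
    z = np.vdot(a, b)
    return z / abs(z)

def chern_number(N=60, **kw):
    """Chern number of the lower band (Fukui-Hatsugai lattice method)."""
    G1 = (2 * np.pi / 3) * np.array([1, -np.sqrt(3)])  # reciprocal vectors
    G2 = (2 * np.pi / 3) * np.array([1,  np.sqrt(3)])
    u = np.empty((N, N, 2), dtype=complex)
    for m1 in range(N):
        for m2 in range(N):
            k = (m1 / N) * G1 + (m2 / N) * G2
            u[m1, m2] = np.linalg.eigh(bloch_h(k, **kw))[1][:, 0]
    F = 0.0                              # total Berry flux over the zone
    for m1 in range(N):
        for m2 in range(N):
            p1, p2 = (m1 + 1) % N, (m2 + 1) % N
            F += np.angle(link(u[m1, m2], u[p1, m2])
                          * link(u[p1, m2], u[p1, p2])
                          * link(u[p1, p2], u[m1, p2])
                          * link(u[m1, p2], u[m1, m2]))
    return round(F / (2 * np.pi))

for M in (0.0, 0.3, 0.6):               # phase boundary near M = 0.52 here
    print(f"M = {M}: C = {chern_number(M=M)}")
# -> |C| = 1 while M < 3*sqrt(3)*t2*sin(phi), and C = 0 beyond it
```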

To test the capabilities of diffusion maps, they used a bulk state excitation instead of one on the edge. The phase-transition location in the parameter space found by diffusion maps matched the theoretical position. Using bulk states for detection can be useful for systems with no well-defined edges, such as cold-atom clouds, or systems where edges are hard to access or control. The researchers also probed the system using only one of its eigenstates, to show that the procedure works even with limited data.

Following the run on simulated data, the researchers used real data from a coupled-waveguide experiment. With no information about the system or its noise sources, their algorithm was able to find a “cusp” in the data – a signature that a phase transition had taken place. They also observed a weaker cusp related to another phase transition unaccounted for in the theoretical model. With this insight, the researchers were able to identify a higher-order process that produced that transition.

Machine-learning tools also help researchers with the interpretation of the large amounts of data generated from modern experiments with many degrees of freedom. The diffusion-maps method is a kind of manifold learning algorithm, which tries to find a lower-dimensional representation of data. A simple example of dimensionality reduction is that of a line drawn on a 2D plane – while it appears that one needs two co-ordinates to specify points on a line, in reality it is just a 1D object or “manifold” embedded in a 2D space.

The road ahead

Topological phases are receiving increasing attention in condensed matter and optical systems because of their association with robust physical phenomena. In these phases, some properties like edge transport or conductance remain unaffected even if one changes the shape of the system or introduces imperfections and other kinds of disorder.

In the past, scientists have tried using neural networks or unsupervised learning methods to identify phase transitions in experimental data, but these methods need training data or knowledge of the experimental set-up. The diffusion-maps method, on the other hand, does not need any prior knowledge, and is inherently designed to extract non-local features in the data.

The researchers find their results very promising. “I think it’s really important to discuss the vision and the surprises”, Segev comments, “[we want] to be able to use this method to identify phase transitions in quantum many-body systems – where the theory is incomputable, and it is virtually impossible to make real predictions for the experiments.”

Deep-learning-based algorithm helps radiologists detect cerebral aneurysms

Researchers in China have developed a deep-learning-based algorithm that could help radiologists detect potentially life-threatening cerebral aneurysms on CT angiography images.

Cerebral aneurysms are weak spots in blood vessels in the brain, which can balloon out and fill with blood. If such a bulging aneurysm leaks or ruptures, it can cause serious symptoms and sometimes be fatal. The risk of rupture depends on the size, shape and location of the aneurysm, making detection and characterization of cerebral aneurysms vital.

CT angiography, which uses X-ray CT to visualize blood vessels following injection of contrast into the bloodstream, is usually the first-line imaging exam for detecting cerebral aneurysms. But this can be a challenging task: the complexity of intracranial vessels and the small size of cerebral aneurysms mean that some may be missed during an initial assessment.

As such, the researchers propose that deep learning – a type of machine learning that’s increasingly used to develop algorithms for image recognition – could enhance radiologists’ performance and reduce the number of aneurysms that are initially overlooked.

“In our daily work we are always faced with cases in which some important lesions have been missed by the human eye,” says senior author Xi Long from Tongji Medical College’s Union Hospital, in a press statement. “Cerebral aneurysms are among those small lesions that may be overlooked on the routine assessment of radiological images.”

Reporting their work in Radiology, Long and colleagues developed a detection algorithm based on a convolutional neural network. To train and assess their deep-learning algorithm, they used 1068 head CT angiography images with reported cerebral aneurysms, acquired by four different scanners at two hospitals. Half of these CT angiograms, which included 688 aneurysms ranging in size from 1.2 to 45.6 mm, were used to train the algorithm. The other half, including 649 aneurysms of 1.2 to 30.8 mm in size, formed the validation dataset.


After training the algorithm, the team used it to detect cerebral aneurysms in the validation dataset. The algorithm demonstrated a maximum sensitivity of 97.5% for aneurysm detection, with a rate of 13.8 false-positive findings per case. These false positives were observed in areas with bony structures, vessel bifurcations and curvatures, and calcified plaques. The authors note that most of these could easily be identified by radiologists.
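For readers unfamiliar with the metrics: sensitivity is the fraction of true lesions the algorithm finds, while the false-positive rate is the number of spurious detections divided by the number of cases. A trivial sketch (the raw counts below are illustrative, back-calculated from the percentages quoted above rather than taken from the paper):

```python
# Detection metrics as used above (illustrative counts only)
def sensitivity(found_lesions, total_lesions):
    return found_lesions / total_lesions

def false_positives_per_case(total_false_positives, n_cases):
    return total_false_positives / n_cases

# e.g. finding 633 of the 649 validation aneurysms gives ~97.5%,
# and ~7370 spurious detections across 534 cases gives ~13.8 per case
print(f"{sensitivity(633, 649):.1%}")                 # -> 97.5%
print(f"{false_positives_per_case(7370, 534):.1f}")   # -> 13.8
```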

The algorithm’s sensitivity gradually improved with aneurysm size, reaching 100% for aneurysms of 10 mm or larger. The lowest sensitivity was observed for aneurysms located on the anterior and posterior cerebral arteries, most of which were smaller than 3 mm. The algorithm also found eight new aneurysms, six of which were smaller than 3 mm, that had been overlooked by the human readers in the initial radiological reports.

The researchers next performed an external validation, using an additional 400 CT angiograms. Of these, 188 contained aneurysms (206 in total, of 1 to 22 mm in size) and 212 were negative. The validation was performed by four radiologists with between one and seven years of experience in head CT angiography.

Each radiologist was randomly assigned 200 CT angiograms, interpreting each either with or without help from the algorithm. For each image, they recorded the type, number, location and size of any aneurysms, as well as the diagnosis time. Two weeks later, they interpreted the same samples again, this time with the assistance condition reversed.

Assistance from the deep-learning algorithm improved the radiologists’ performance in detecting the cerebral aneurysms, increasing their overall sensitivity per lesion from 79.09% to 88.94%. The sensitivity per case was 81.63% and 91.86%, without and with the algorithm, respectively. In particular, performance was most improved for the two least experienced radiologists.

The researchers emphasize that the algorithm is intended to act as a supportive tool, providing the radiologist with a second opinion to improve their diagnostic accuracy, but not replacing the human reader.

“The deep-learning algorithm needs to be further validated on external imaging data,” Long tells Physics World. “Some improvements, such as decreasing the false positive rate, may be needed before it can be used clinically.”

Inside the quantum bubble

What happens when the irresistible force of quantum entrepreneurship meets the immovable object of a global pandemic?

For the past few years, the UK’s annual Quantum Technology Showcase has taken place at the QE2 conference centre in London, a stone’s throw from the Houses of Parliament, with attendees packing the exhibition hall to hear about the latest academic and industrial research into this growing field. This year, of course, it didn’t look like that. Instead, the presenters appeared by video link from home offices and deserted labs, sharing their experiences with unseen audience members scattered across the UK and beyond.

Under the circumstances, then, it was strange – and strangely comforting – to hear the leaders of four brand-new quantum-tech firms discussing much the same problems as entrepreneurs always do. For Richard Murray, the chief executive of London-based ORCA Computing, the biggest challenge he faces is finding and hiring people with the right skills and experience. “There are lots of people with PhDs,” he said. “Where the UK does need to boost its competitiveness is the more industrial-experience skill sets.” Optical engineers, he added, are “hard to come by” even for a firm like his that has won awards (and nearly £3m in venture capital funding) for its optical-fibre-based quantum computing technology.

Another panellist, Max Sich, said that he is most concerned about the relative immaturity of quantum technologies – including the semiconductor platform that his Sheffield-based firm, AegiQ, is developing for quantum cryptography systems. “Quantum is still a very risky technology compared to other things people invest in,” he observed. “We sit here in our quantum bubble and we all believe in it. Not everybody does.” The standard logic of a start-up is to grow fast and get big, but Sich noted that quantum technologies are mostly too undeveloped to achieve that yet. “Seed” funding and grants from the UK government via the funding agency Innovate UK, he added, help to bridge the gap.

The third panellist, Anthony Laing, was something of an outlier in the group. Although he co-founded his photonic quantum computing firm, Duality Quantum Photonics, in February 2020, he remains part of the academic world, as a physicist at the University of Bristol. Yet even for academics like him, he said, “entrepreneurialism in quantum technologies is in the air”. His current goal is to design and fabricate photonic quantum processors that can simulate phenomena relevant to drug design in the pharmaceutical industry. When asked what he hoped to bring to next year’s showcase, he suggested that Duality Quantum Photonics might have a prototype in 2021 – one year into the company’s five-year plan.

The session’s final speaker, Ramy Aboushelbaya, is the co-founder and chief exec of a University of Oxford spin-out called Quantum Dice. As the company’s name implies, the main product Aboushelbaya and his colleagues are building is a quantum random number generator (RNG) for encryption systems. Like AegiQ, Quantum Dice received support from Innovate UK, and Aboushelbaya described such grants as “a validation” that helped them attract additional private funding. He and his team of five are now hoping to hire more electronic and photonic engineers, with the aim of bringing their first commercial RNG to next year’s showcase.

The coronavirus-shaped elephant in the room finally made an appearance at the end of the session, in remarks by Roger McKinlay, the challenge director for quantum technologies at UK Research and Innovation. “It kind of hurts to have seen so many friends and not be able to loiter and chat to find out how life is going, not just how business is going,” McKinlay lamented. Nevertheless, he added, there has been “much progress in 12 months” towards developing “a vibrant quantum start-up culture” in the UK – a point corroborated by session moderator Anke Lohmann, who noted that her consultancy, Anchored In, has identified six quantum start-ups that incorporated in the UK in 2020. “We live in uncertain times, but in the midst of a crisis, we’ve seen so many good and healthy signs,” McKinlay concluded.

The hunt for another Earth: a love story


It’s not often I cry when reading a book about physics. Massachusetts Institute of Technology (MIT) planetary scientist Sara Seager’s memoir is a love story, where love comes in multiple forms: romantic love, parental love, friendship, plus love of nature, of the universe and of maths. Those last two are hardly a surprise, coming from a celebrated scientist, but her enthusiasm for her chosen career is especially warm and welcoming. Though it wasn’t her excitement about stargazing that provoked my tears.

The Smallest Lights in the Universe: a Memoir opens with Seager recounting a particularly bad day six months after the death of her husband. She was 40 years old, with two young sons, and describes her grief that day as “ugly”. She compares the dark and empty feelings she was going through to the rogue planet PSO J318.5-22 – an exoplanet that isn’t part of a star system, and so is perpetually shrouded in darkness, with a surface covered in molten-iron rain. It’s a powerful description: difficult to visualize and yet brilliantly evoked by Seager.

This narrative pattern of looking at her personal life and her work in parallel to elucidate both is an effective tool that Seager uses repeatedly. Some readers may think this book contains a little too much of the personal, slightly outweighing the physics, but I strongly suspect this is deliberate. Physics is studied by people, and their personal lives have an impact on their work lives. It is valuable to have a renowned scientist acknowledge those links. Between her unconventional childhood and late-in-life diagnosis of autism, Seager knows that she sees the world differently from a lot of people. This has at times made it difficult for her to connect with people, but it has helped her to imagine ways to find and study exoplanets that others didn’t think of, or rejected.


Choosing to study exoplanets in the mid-1990s as a graduate student was an unusual and potentially foolhardy move, as it was still an emerging topic scorned by many. Seager trusted her own gut instinct and that of her academic adviser Dimitar Sasselov, ignored the naysayers and set to work on writing computer code to study Hot Jupiters – so-called because these planets are about the same size as Jupiter, but orbit their host stars much more closely. When she defended her PhD thesis in 1999 the 100-seat auditorium at Harvard was packed – exoplanets were still a fringe topic but a fascinating one that was gathering pace.

One of the many ways in which this memoir is unusually honest is that it gets into the nitty gritty of how physics research progresses. Seager talks about the importance of finding the right mentors and collaborators, as well as the different work environments at different institutions. Early in her career, she and some of her close colleagues were beaten to publication or to positions in working groups by older, better-known scientists (invariably male), and she expresses her anger as well as the lessons she learned. But for the most part she is generous about her colleagues.

According to Seager, astrophysicists need large egos to think themselves capable of understanding the universe, but they also need to accept that their role might be to plant the seed that another future scientist follows to make an important breakthrough. It sounds idealistic, but those are traits that she attributes to her own mentors – particularly John Bahcall, who gave her a job at the Institute for Advanced Study in Princeton early in her career – and tries to cultivate in herself. She was brave and/or lucky to choose a specialism as fast-moving as exoplanets, but she knows that much of what she hopes to see and learn about exoplanets won’t happen in her lifetime.

Perhaps that is why Seager, like many scientists, works on several overlapping projects at a time. Some of them fail or stop getting funding; some move to other institutions or change their focus; some she gets to see to completion. She compares her choice of projects to an investment portfolio: there are the safe bets that can be a little dull; the medium-risk that are more interesting; and the “big swings” that are high in risk and reward.


Just as a successful career can rest on finding colleagues with the right skills who can be trusted, Seager discovers that the same is true outside of work. Until her husband became seriously ill, he was the only person she considered to be her friend. The people she subsequently found through necessity and accident, to help her with practicalities, often became friends in time.

As someone who had always worked long hours of intense concentration, Seager found that grief and sole parenting made her re-evaluate her work–life balance. Her research was still of great importance – indeed, having meaty problems to tackle at work was one of the ways in which she coped with loss – but it could no longer consume all her time and energy. Work travel had to be limited and often combined with family holidays. For the first time in her career, she missed out on being part of a project she had helped to design because she missed a deadline. She had to let that project go and move on, just as she was eventually able to move on from grieving and open herself up to new possibilities.

At the end of The Smallest Lights in the Universe Seager remains enthusiastic and eager to learn more about exoplanets. They are no longer fringe, and her dream of finding a truly Earth-like planet seems entirely possible. But that dream was based in a wish to find extraterrestrial life, which she long ago calculated might be very different from life on Earth, and therefore might not require a planet that is anything like Earth. How do you narrow down what to look for if you don’t know what you’re looking for? It’s a thorny problem that Seager is well placed to solve. Or perhaps she will plant the seed for someone else to solve it many years from now. Maybe she already has.

  • 2020 Penguin Random House 320pp $28/£16.99hb

Electrically tuneable network learns fast


A team of researchers at the University of Twente in the Netherlands has employed deep learning – a form of artificial intelligence, or AI – to optimize the functionality of nanoelectronic devices for the first time. According to team leader Wilfred Van der Wiel, the same method could also be used to tune quantum dot systems for quantum computing and should be generally applicable to other large-scale physical systems whose many control parameters cannot likewise be determined directly.

Earlier in 2020 Van der Wiel and colleagues created an electrically tuneable network of boron dopants in silicon. This device contained eight terminals, or electrodes: seven that act as voltage inputs and one that serves as a current output. The signal of the output comes from electrons “hopping” from one boron atom to another in a way that somewhat resembles the way neurons in our brain “fire” when they perform a task. So, while the network is not ordered, it has an output signal, and it is possible to “steer” this signal in the desired direction by changing the voltages on the control electrodes.

This process is also called artificial evolution, Van der Wiel explains, and although successful it proved cumbersome in practice. “While not as slow as Darwinian evolution, it is still quite time-consuming to have the network do what you would like it to do,” he tells Physics World.

Making use of deep-learning neural networks

For this latest work, the Twente team turned instead to deep neural networks (DNNs). These networks have become increasingly common tools in scientific research, where they are typically used to model complex physical phenomena after an initial period of “training” on examples drawn from experimental data.

To generate a DNN model of their nanoelectronic device, Van der Wiel and colleagues began by measuring the device’s output signal for many distinct input voltage configurations. They then used these data and standard deep-learning techniques to train their DNN model to “understand” how the real device behaves. The resulting model predicts the output current of the device, given the input voltage configuration. Van der Wiel explains that this approach is very similar to the way that standard AI tasks are solved using DNNs. “In standard deep learning, we have to find the parameters of the model itself,” he says. “These are the weight factors between neurons and the threshold values of the neurons. This is what we learn in the first phase: we find the parameters of the DNN model itself, so that it mimics the physical device.”

100 times faster than artificial evolution technique

In the second phase of their work, the researchers keep the parameters of the DNN model constant so that they can learn the optimum control parameters. “With this model, we can search for a desired functionality,” Van der Wiel says. “For this, we choose some of the inputs as control parameters and others as data inputs. We search for the control values on the DNN model again using deep learning. Mathematically this boils down to the same thing: again, we have to optimize parameters to find the desired output.”
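A minimal PyTorch sketch of this two-phase scheme (our illustration, with made-up layer sizes, placeholder data and an XOR-style target functionality – not the Twente group’s code):

```python
import torch
import torch.nn as nn

# The surrogate DNN maps the device's 7 terminal voltages to an output current
surrogate = nn.Sequential(
    nn.Linear(7, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

# Phase 1: train the surrogate on measured (voltages -> current) pairs.
# Random tensors stand in here for the real device measurements.
V = 2 * torch.rand(10000, 7) - 1   # sampled input-voltage configurations
I = torch.randn(10000, 1)          # corresponding measured output currents
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    nn.functional.mse_loss(surrogate(V), I).backward()
    opt.step()

# Phase 2: freeze the surrogate's weights and learn control voltages instead
for p in surrogate.parameters():
    p.requires_grad_(False)

# Treat terminals 0-1 as data inputs and 2-6 as controls, and look for
# control values that make the device compute XOR on the data inputs
data = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
target = torch.tensor([[0.], [1.], [1.], [0.]])
controls = torch.zeros(5, requires_grad=True)
opt = torch.optim.Adam([controls], lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    inputs = torch.cat([data, controls.expand(4, 5)], dim=1)
    nn.functional.mse_loss(surrogate(inputs), target).backward()
    opt.step()

print(controls.detach())  # voltages to apply to the physical device
```

Because the surrogate is differentiable end-to-end, the search for control voltages is itself just gradient descent, which is why it runs so much faster than trial-and-error artificial evolution on the physical device.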


Once they have used the DNN to find the best values for the control parameters, the researchers can apply these values as voltages to the corresponding terminals in the real nanoelectronic device. Since the DNN is a model of the physical device, its output current should then behave in the desired way.

The group found that this new approach works about 100 times faster than the artificial evolution they used before. “With this new method we can optimize nanoelectronics devices with many terminals and even optimize systems in which many complex nanoelectronics circuits are coupled,” Van der Wiel explains. “Such systems are expected to increase in complexity in the coming years in, for example, novel information processing technologies like quantum computing and neuromorphic computing.”

Scalable and applicable to complex tasks

Van der Wiel and colleagues also found that their approach worked for tasks of increasing complexity. After successfully demonstrating Boolean gates on their nanoelectronics device, they went on to show that the device can perform binary classification and a “feature map” task in which 2 × 2 patches of pixels are mapped to a current value (a subtask of a higher-level image classification task). These last two tasks would have been very challenging to complete without this new method, the researchers say.

The dopant network devices described in this study, which is published in Nature Nanotechnology, could be used for neuromorphic computing in the future. As a next step, the team plans to build more energy-efficient large-scale systems of interconnected dopant network devices for state-of-the-art AI performance.

Ferroelectricity: 100 years since its discovery

Today, ferroelectric materials have some amazing applications. You will find them in many technologies including telescope optics, medical ultrasound devices and energy storage devices. But when ferroelectricity was first observed 100 years ago by the physicist Joseph Valasek, the discovery was greeted with little fanfare among the physics community.

Watch this video to discover how ferroelectricity transformed from academic obscurity to a key part of several transformative technologies. For a deep dive into the history of ferroelectricity take a look at this feature from the November 2020 issue of Physics World, written by Amar S Bhalla at the University of Texas at San Antonio and Avadh Saxena at Los Alamos National Laboratory.

Radiative cooling boosts solar cell voltage by as much as 25%

Cheap and simple radiative cooling technologies can significantly increase the performance and lifespan of concentrated photovoltaic systems, according to researchers in the US. They found that a simple radiative cooling structure can increase the voltage produced by the solar cells by around 25%. It also reduced operating temperatures by as much as 36 °C and the scientists claim this could dramatically extend the lifetime of photovoltaic systems.

Commercial silicon-based photovoltaic cells convert around 20% of solar irradiation that falls on them into electricity. Much of the rest is turned into heat, which must be effectively managed. “Photovoltaic efficiency and lifetimes both decrease as temperature goes up – especially in humid environments,” explains Peter Bermel, an engineer at Purdue University. “The loss in efficiency is fundamental to how photovoltaics work.”

This is a particular issue for concentrated photovoltaic systems. While using mirrors or lenses to focus sunlight on solar cells can boost efficiency, it also increases heating. This can offset efficiency improvements and damage the photovoltaic cells, reducing their lifespan.

Complicated and expensive cooling

Current approaches to cooling such systems include forced air or liquid cooling, or the use of heat sinks for conductive heat transfer. But active cooling systems use energy and therefore reduce the net efficiency of the system. They can also be complicated and expensive, increasing system costs and reducing overall reliability.

Another potential option is radiative cooling. Using thermal radiation to dissipate heat requires no additional power and the materials that enable it are often low cost.
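The power available from this mechanism is set by the Stefan–Boltzmann law. As a rough order-of-magnitude sketch (a grey-body estimate with guessed emissivity and temperatures, not the Purdue team’s model):

```python
# Net radiative flux from a hot surface to a colder effective sky
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_radiative_flux(T_surface_K, T_sky_K, emissivity=0.9):
    return emissivity * SIGMA * (T_surface_K**4 - T_sky_K**4)

# A concentrated cell at 80 degC radiating to an effective sky at 10 degC
print(net_radiative_flux(353.0, 283.0))  # ~465 W per square metre, for free
```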

To test the performance of radiative cooling, Bermel and his colleagues created a simple concentrated photovoltaic system. In their set-up, a mirror reflects sunlight upwards and through a lens to focus it on a solar cell. Using this design, they tested four cooling set-ups: natural convective cooling with a heatsink, no cooling, radiative cooling, and radiative cooling combined with convective cooling. Radiative cooling was achieved by sandwiching the solar cell between two layers of soda-lime glass, which is known to be a good broadband radiative cooler.

These set-ups were tested outside, with multiple experiments conducted on different days in various conditions covering a wide range of heat loads. The results are reported in the journal Joule.

Temperature drop

The researchers found that radiative cooling resulted in a 5–36 °C drop in the temperature of the system, depending on weather conditions, compared with the set-ups without radiative cooling. Bermel told Physics World that the largest temperature difference was recorded with radiative cooling on its own, but the lowest absolute temperatures occurred when it was used in tandem with convective cooling.

These temperature drops produced a relative increase in the solar cells’ open-circuit voltage of between 8% and 27%. This is “roughly proportional to efficiency,” Bermel says. Using temperature data from the experiments, the scientists also simulated the impact of cooling on the lifespan of the solar cells. This suggests that radiative cooling could extend the lifetime of concentrated photovoltaic cells by a factor of 4–15.

According to the researchers, the results demonstrate that radiative cooling provides benefits in all weather conditions. But Bermel’s colleague, graduate student Ze Wang, also at Purdue University, cautions that radiative cooling probably will not be suitable for cooling concentrated photovoltaic systems on its own. Other systems would be needed to ensure cooling in all conditions.

Auxiliary cooling mechanism

“Radiative cooling is a very good auxiliary cooling mechanism, which requires no extra energy, performs well at high temperatures, and adds little weight to the whole system,” Wang says. “However, in most cases, radiative cooling serves as an add-on to the existing cooling system utilizing convection or conduction, in order to improve the overall performance.”

However, radiative cooling does not perform well in low-temperature conditions, Wang explains. This is because the temperature difference between the solar cell and its surroundings is too small to fully exploit the potential of radiative cooling. It is a particular issue when there is no low-temperature absorber, such as a clear sky, to radiate to.

Radiative cooling materials are also not limited to soda-lime glass. “We could work on the materials or structures of the coolers in the future to further improve the emittance profile,” Wang says.

Ferroelectricity: 100 years on

Great discoveries are sometimes made without anyone realizing quite how important they will be. C V Raman, for example, won the Nobel Prize for Physics in 1930 for discovering that light can change energy when it scatters, yet Raman spectroscopy did not become a valuable research tool until well after the laser was invented in 1960. Similarly, few could have imagined that Paul Dirac’s far-fetched yet bold proposal of antiparticles – for which he won the 1933 Nobel prize – would lead to positron emission tomography half a century later.

But there is a lesser known – yet important – discovery that also went largely unrecognized at the time. It was made 100 years ago in 1920 by Joseph Valasek (1897–1993), who was then a graduate student working under the supervision of William Swann at the University of Minnesota, Minneapolis, US. Seeking to develop a seismograph to measure the vibrations from earthquakes, Valasek wondered if this could be done with piezoelectric crystals, which create an electric signal when squeezed.

The most readily available piezoelectric he had at hand was a single-crystalline substance first synthesized in the 17th century by Pierre Seignette, a pharmacist from the French seaport of La Rochelle. Extracted from wine, it became known as Rochelle salt or Seignette salt and has the chemical formula potassium sodium tartrate tetrahydrate (KNaC4H4O6·4H2O). When Valasek placed a sample of this material in an electric field, E, he noticed that its resulting electric polarization, P, did something unusual.

As he turned up the field, the polarization increased, with the graph of P versus E following an S-shaped curve. However, when the field was lowered again, the polarization was always higher than before, albeit following the same kind of curve. In other words, the precise value of the polarization depended on whether the field was rising or falling: it was showing hysteresis (figure 1). So unusual was this observation that Swann presented it at the April 1920 meeting of the American Physical Society in Washington, DC, in a paper entitled “Piezoelectric and allied phenomena in Rochelle salt”. (As a lowly PhD student, Valasek did not even attend the meeting.)

Swann and Valasek did not know what caused the hysteresis, but there were parallels with a discovery that had been made three decades earlier by the Scottish physicist James Alfred Ewing. He had seen a similar kind of behaviour in certain ferromagnets, noticing that the magnetic moment depends on how the magnetic field has changed. Valasek’s discovery therefore pointed to an entirely new class of materials, in which the electric dipole moment – and hence the polarization – depends on how the electric field has changed.

(Figure 1: hysteresis in the polarization P of Rochelle salt as the applied field E is cycled.)

Steady success

Now called “ferroelectrics”, these materials have some amazing applications in modern life (see “Applications of ferroelectrics: five of the best”). However, neither Swann nor Valasek had heard of the term, which had been coined in 1912 by Erwin Schrödinger after predicting that certain liquids can spontaneously polarize when they solidify. What’s more, Valasek’s discovery went largely unnoticed. Although he wrote four papers about his observations in Physical Review between 1921 and 1924, plus a further note in Science in 1927, no attempt was made to establish the theoretical basis for the phenomenon throughout the entire 1920s.

Most physicists, it seems, were more interested in quantum physics and other fundamental phenomena like Bragg diffraction and Raman spectroscopy. Indeed, it was not until the late 1930s that anyone actually used the word “ferroelectricity” again in the literature. Research only really took off after the future Nobel-prize-winning physicist Vitaly Ginzburg wrote a classic paper on the subject in 1946, though even he called it the “Seignettoelectric” effect given that it had been first observed in Seignette salt.

The field was also boosted by the discovery during the Second World War of another ferroelectric material: barium titanate (BaTiO3). Unlike Rochelle salt, it is insoluble in water, chemically stable at room temperature, and has much better electrical and mechanical properties. Barium titanate was therefore a perfect material for high-energy-density capacitors, although it was only after the war that researchers realized it was ferroelectric with a tell-tale hysteresis in its electrical properties.

Theorists now began to develop a proper understanding of the behaviour of ferroelectrics, helped by experimentalists who started carrying out careful crystallographic analyses of the structure of these materials. By the end of the 1950s, several hundred different oxide-based ferroelectric materials – belonging to about 30 different structural families – had been discovered, with physicists testing their electrical properties and weighing up their potential for novel device applications.

One consequence of this systematic study of ferroelectrics came in 1968 when researchers such as Keitsiro Aizu from the Hitachi Central Research Laboratory in Tokyo, Japan, predicted that there could be a similar hysteresis-like relationship between a material’s elastic strain and its applied stress. Dubbed “ferroelastics”, some of these materials are unusual in that if you cool them below a specific temperature and then mechanically distort them, they’ll recover their original shape if you heat them back up again.

These ferroelastics, in other words, “remember” their original physical and geometric shape. They include “shape-memory alloys” such as nickel-titanium, which is widely used for actuating and positioning devices, while others are used in everything from electric cables on the ocean floor to bendable spectacle frames. Ferroelastics are even used in space to form antennas and other gadgets that can be folded up and then unfurled when heated up.

Meet the family

By the late 1960s, physicists therefore knew of three families of materials that all showed hysteresis: ferroelectrics, ferromagnets and ferroelastics. What they all have in common is that neighbouring crystalline domains have a particular property “pointing” in opposite directions (electric dipole for ferroelectrics, magnetism for ferromagnets, and strain for ferroelastics) that can be “switched” with an external field so they all point in the same direction. Indeed, Ginzburg – and another future Nobel laureate, Lev Landau – were able to explain the behaviour of all three types by a single, simple, phenomenological theory.
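In its simplest form, that shared phenomenology fits in one line. For a uniaxial ferroelectric in a field E, the Landau free-energy density can be written as (a standard textbook sketch, with α0 and β positive material constants):

$$F(P) = \tfrac{1}{2}\,\alpha_0\,(T - T_C)\,P^{2} + \tfrac{1}{4}\,\beta\,P^{4} - E\,P$$

Below the transition temperature TC the quadratic coefficient turns negative, so F develops two symmetric wells at the spontaneous polarization Ps = ±√(α0(TC − T)/β). A cycling field tips the system between the wells at different field strengths on the way up and down, producing exactly the hysteresis loop Valasek measured; swapping P for magnetization or strain gives the ferromagnetic and ferroelastic versions of the same story.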

Some scientists even started grouping the materials under the common banner of “ferroics” – a name that stuck in the literature despite many of the substances not actually containing any iron. Indeed, in the 1970s a fourth family of ferroic materials, known as “ferrotoroidics”, was also discovered, which have a hysteresis in the toroidic field (the cross product of the electric and magnetic field). Including materials such as lithium cobalt phosphate (LiCoPO4), these have magnetic vortices in neighbouring domains that can be made to line up.

And if that was not enough, researchers have also found materials that combine more than one ferroic property either in a single phase or as a composite structure. Known as “multiferroics”, they include “magnetoelectric” materials in which the magnetization can be controlled by an electric field and the polarization can be manipulated by a magnetic field (something that Pierre Curie had suggested as far back as 1894). Such materials can, for example, measure the picotesla-sized magnetic fields from human neurons at room temperature.

What’s most interesting about ferroelectrics is that such materials are also piezoelectric (generating electricity when stressed) and pyroelectric (generating electricity when subject to a variation in temperature). These unique properties have led to ferroelectrics being used in many applications from high-energy-density capacitors and night-vision devices to ultrasound medical equipment, smart technologies for energy harvesting, and actuators and translators. You’ll even find ferroelectrics in burglar alarms, lighters, and heart-rate and blood-pressure monitors.

The future is ferroelectric

A century after the discovery of ferroelectricity, what started as a niche field of research has grown enormously, with more than 20,000 research papers published on the topic to date, driven by a myriad of applications from the nano- to the macroscopic scale. It has even expanded into biology, with ferroelectric behaviour found to occur, for example, in amino acids and in the walls of aortic blood vessels in pigs. Ferroelectrics could even be used to make sensors that replicate many human “multifunctional sensory systems”.

Other interesting developments include exotic materials such as “relaxors” (in which the dielectric response depends on the frequency of the applied field) and “quantum paraelectrics” (in which quantum fluctuations suppress the onset of ferroelectric order). Researchers have also started to study 2D ferroelectrics, with atom-by-atom deposition and first-principles calculations pointing to new kinds of nanoscale devices and sensors that could be particularly useful for studying the human body. After all, skin, hair, nails and many other biological tissues behave as piezoelectrics and ferroelectrics when exposed to an electric field, with piezoresponse-force microscopes already providing quantitative data on human biofunctionality.

Even fundamental physics has not been immune from the power of ferroelectrics, with researchers recently observing exotic topological defects called “polar skyrmions” and “polar hopfions” in ferroelectric materials for the first time. What started out as an innocuous experimental observation by a graduate student a century ago will, we believe, continue to benefit science, technology and life for another 100 years and beyond.

Applications of ferroelectrics: five of the best

High-energy capacitors and efficient energy storage devices


One big benefit of ferroelectric materials is that they have a very high dielectric constant, which means they can store lots of energy. Most capacitors in high-energy-density applications, such as compact batteries, therefore contain ferroelectric materials. And despite behaving as insulators with very high electrical resistance, ferroelectrics also played a key role in the discovery of a new class of materials with zero resistance. Working at IBM’s Zurich research lab in the mid-1980s, the future Nobel laureate physicist Alex Müller was studying perovskites – a group of materials that includes ferroelectrics. By tweaking the composition but maintaining their basic structure, he found that these materials carried current without resistance at about 40 K, while others found similar behaviour at liquid-nitrogen temperatures. So for high-temperature superconductors, we can thank ferroelectrics.
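Returning to the energy-storage point: for a linear dielectric the stored energy density is u = ½ε0εrE², so raising the relative permittivity by a factor of a few hundred raises the stored energy by the same factor. A toy comparison (illustrative permittivity and field values; real ferroelectrics are nonlinear, so treat this as indicative only):

```python
# Energy density of a dielectric capacitor: u = 0.5 * eps0 * eps_r * E^2
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def energy_density(eps_r, E_V_per_m):
    return 0.5 * EPS0 * eps_r * E_V_per_m**2  # J/m^3

print(energy_density(1000, 1e7))  # ferroelectric ceramic: ~4.4e5 J/m^3
print(energy_density(5, 1e7))     # ordinary polymer film at the same field
```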

Night-vision technology


Cameras that can “see” at night require materials that generate electric charge in response to variations in temperature. Pyroelectrics, which generate a voltage when heated or cooled, can do the job, but it is better to use ferroelectrics such as triglycine sulphate. They have a much higher “pyroelectric coefficient” and can resolve temperature differences as small as 0.01 K. Infrared radiation from, say, a human body can be focused onto arrays of ferroelectric materials, which absorb the light and turn it into a voltage that can be used to create an image corresponding to the person’s temperature profile. Such cameras are also used in medicine, security and night vision. Zoologists have even used night-vision devices to see animals that they previously thought were extinct, including wild dogs in New Guinea.

Medical ultrasound and underwater acoustics


All ferroelectric materials are piezoelectric, which means they generate an electrical voltage when put under pressure by an object. The voltage can then be used to create an image of the object. However, the pressure doesn’t have to come through direct physical contact: it can also come from sound waves reflected off an object that itself is under stress. Ferroelectrics are therefore widely used in medicine for imaging unborn babies to check how they are growing and developing inside the mother’s womb. A similar principle lies behind the hydrophone: a device that can collect sound waves bouncing off underwater objects, such as schools of fish. Ferroelectrics have also been used to map the topography of the ocean floor – such as in 2014, when they were used in the search for Malaysia Airlines flight MH370, which disappeared somewhere in the southern Indian Ocean on a flight from Kuala Lumpur to Beijing.

Actuators and translators


Given that all ferroelectrics are piezoelectric, if you apply an electric field, the material will change dimension along one or more permitted directions as determined by its basic crystal structure. The change in size can be barely a few picometres per volt – but that can still be invaluable. Ferroelectrics such as lead zirconate titanate, for example, are used in atomic force microscopes to see individual atoms in materials and also in scanning tunnelling microscopes, for which Gerd Binnig and Heinrich Rohrer won the 1986 Nobel Prize for Physics. Similar materials can also be found in piezoforce microscopes and magnetoforce microscopes. Indeed, another ferroelectric – lead magnesium niobate/lead titanate – was part of the device that NASA used in 1993 to correct flaws in the mirror on the Hubble Space Telescope. Previously washed-out images, such as those of the core of the galaxy M100, became much clearer.

Energy harvesting


Ferroelectric materials can generate electricity under the influence of an input thrust, meaning that some – such as lead zirconate titanate embedded in a polymer – could be used to harvest the energy from cars and lorries that is otherwise lost as heat or noise. The power that can be generated from such devices is currently relatively small – typically a few milliwatts – based as it is on sheets of polyvinylidene difluoride (PVDF) and their polymer composites. But if we can find cheap ways to scale up the production of devices, we could be on to a winner. Another promising application of energy-harvesting devices is in medicine and biology, where only very small energies are involved. They could be a boon for patients who have been fitted with battery-powered mechanical pacemakers to keep their hearts pumping. If the batteries run out, the only way to replace them is for a surgeon to operate on the patient. But if the batteries could be recharged by the voltage generated in a ferroelectric material directly from the thrust of the heartbeat, such operations would be a thing of the past.

Mid-IR spectrometer provides non-invasive skin cancer detection

Researchers in Israel have developed and tested a fibre-optic evanescent wave spectroscopy (FEWS) system that can non-invasively identify and characterize skin cancers. They successfully identified cancerous lesions on patients’ skin by touching the suspicious regions for 30 s with an optical fibre connected to a mid-infrared (IR) spectrometer.

An estimated 300,000 cases of melanoma and more than one million cases of non-melanoma skin cancer were diagnosed in 2018, according to the World Cancer Research Fund and the American Institute for Cancer Research. Suspicious lesions are usually identified by dermatologists using a dermascope, a handheld optical magnifier, and are subsequently diagnosed based on pathological analysis of biopsied tissue. This process, however, is invasive, costly, time-consuming and dependent upon the skill of the physician.

To address these shortfalls, Abraham Katzir of Tel Aviv University and co-researchers are working to create an accurate, affordable clinical system that can identify skin cancers in near-real-time and could be used by dermatologists for reliable screening of suspicious lesions. Writing in Medical Physics, the researchers describe their achievements to date.

The FEWS system is based on a mid-IR spectrometer and a long, U-shaped, mid-IR transmitting AgClBr fibre. The fibre, developed by Tel Aviv University’s applied physics group, is flexible, non-toxic, non-hygroscopic and highly transparent in the mid-IR. The system operates in the 3–30 μm spectral range to measure the mid-IR absorption spectra of tissues. To record an absorption spectrum, the centre of the U-shaped fibre is simply brought into contact with the skin.
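Mid-IR absorption features like the ones discussed below are conventionally quoted in wavenumbers rather than wavelengths; converting the system’s working range is a one-liner (a generic unit conversion, not part of the authors’ software):

```python
# Wavelength (um) to wavenumber (cm^-1): nu = 1e4 / lambda
def wavenumber_cm1(wavelength_um):
    return 1e4 / wavelength_um

print(wavenumber_cm1(3.0), wavenumber_cm1(30.0))  # ~3333 and ~333 cm^-1
```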

Patient studies

The researchers aimed to use the FEWS system to detect and identify three skin cancers: melanoma, basal cell carcinoma (BCC) and squamous cell carcinoma (SCC). For 90 patients in the dermatology department of the Tel Aviv Sourasky Medical Center, they measured the absorption spectra of each suspicious skin lesion and the surrounding healthy tissue, a process that took about 30 s per measurement.

The absorption spectra of the measured samples exhibited several peaks in the mid-IR region. Unique peaks at similar wavenumbers and within specific spectral ranges were easily identified by the naked eye as signatures of melanoma, BCC or SCC. The team created “biochemical fingerprints” for samples of melanoma, BCC, SCC and healthy tissue, based on differences in the absorption spectra. These spectra corresponded with the pathology of the patients’ tissue biopsies, which included five melanomas, seven BCCs and three SCCs.

In a recent refinement of their diagnostic method, the researchers developed an algorithm to analyse the spectra and deliver the results. For melanoma, this algorithm achieved 100% sensitivity, specificity and accuracy.

One potential obstacle with this approach is that mid-IR radiation penetrates only a few microns into the stratum corneum, the upper layer of the skin. The researchers suggest that this thin layer would include cells that have migrated upwards from deeper malignant lesions.

“This study confirmed that the melanoma-affected cells migrate from the layers of the skin to the thin top layer,” explains Katzir. “We were able to use mid-IR spectroscopy on this very thin layer of skin to detect these cells, without breaking the skin, and from that to determine the type of cancer present. The differences among the cancer types were large and were immediately noticed by simple comparison of the spectra. This non-invasive ‘spectroscopic pathology’ may, in the future, replace the standard invasive biopsy.”


Katzir tells Physics World that the team is planning to conduct a larger number of experiments in a collaboration with the Sheba Medical Center, which has a large medical clinic dedicated to skin cancer.

“Our main interest is to develop a system that will automatically diagnose lesions, independent of the skill of the physician,” Katzir explains. “It will determine if the lesions are cancerous, and conclude whether they are melanoma or less lethal cancers, all in real-time and in the clinic. The dermatologist community and the medical authorities are looking for a method that will diagnose, with a success rate of more than 95%, both malignant and benign skin lesions. If we succeed, we will try to get an approval for the method and then we will be ready for commercialization.”

The next step will be to collect more data to improve the data analysis. The system also needs to be refined. The researchers plan to replace the commercial mid-IR spectrometer with a quantum cascade laser (QCL)-based system that will only cover the spectral range of interest, have higher intensity and be more compact. They also plan to improve their software algorithms to better detect melanoma and other pathologies, and to generate these findings automatically.

“We hope to develop a system that is small, lightweight, rugged, very easy to operate and, most importantly, inexpensive,” says Katzir. “We want to reduce the cost to be less than £4000 (or $5000), so that small clinics can afford to purchase it. This system has the potential to make a sea change in the diagnosis of skin cancers and possibly other types of cancer. This is our goal.”
