
A glimpse of Einstein in Zurich

Zurich is one of my favourite cities. Located in the north of Switzerland, it sits at the tip of a glistening, clear lake, with snow-topped mountains in the distance. The buildings are beautiful, the people are friendly and public transport is incredibly efficient. It is also home to multiple world-class science and technology institutes.

So when Physics World was invited to visit two of these facilities, I jumped at the chance to go. Our hosts for the trip were the international organization IBM Research and ETH Zurich, a STEM-focused university, and the event was a showcase for some of their medical, computer-science and quantum-computing research.

With a day at each institution, I and about 25 other journalists from around the world were treated to a packed schedule of talks, lab tours and demonstrations, as well as an almost endless supply of incredibly interesting science. Topics ranged from diagnosing breast cancer with artificial intelligence and treating diseases with nano-robots, to using blockchain to automatically pay for parking and applying machine learning to see how galaxies evolve.

A prototype electric car that uses blockchain technology to securely pay for tolls and parking (Courtesy: Sarah Tesh)

In among the fast-paced timetable, we got to venture into the depths of ETH Zurich’s historic main building. After following a maze of passages that I’m pretty sure I would not have been able to escape from, we ended up in a surprisingly bright, white and modern reading room connected to the university’s archives. And on the table in front of us were treasures from perhaps the most famous ETH alumnus, Albert Einstein.

The incredible artefacts included photographs of Einstein with his family, postcards between him and his friends and his university exercise book. Among the selection, two pieces stood out. One was a report book listing Einstein’s grades from when he was a student at ETH, which was then the Swiss Polytechnic Institute. Marks were given from 1 to 6 and among Einstein’s 4s and 5s, there were, as expected, 6s. But there was also a dark, bold 1. This was for “Physics practical course for beginners”. Obviously, it wasn’t that Einstein was bad at physics; he simply didn’t go to class and was reprimanded as a result. Indeed, it turns out that Einstein often missed lectures, preferring to study at home, and instead relied upon his friends’ notes in the build-up to exams.

Albert Einstein's grades for 1898 to 1899

My other favourite document was a letter Einstein wrote to Conrad Habicht in 1905. This year is often referred to as his annus mirabilis or miracle year. In 1905, while working at the patent office, Einstein wrote five significant works on photons, special relativity and the reality and size of the atom. And in this letter, in rather illegible handwriting, he discussed four of his five momentous ideas. During two days filled with state-of-the-art research and technology, it was fascinating also to see these artefacts that relate to key moments in the history of physics.

A letter from Einstein to Habicht in 1905

Climate change to give planes a bumpy ride

Clear-air turbulence in some of our most congested aviation routes is set to double if current greenhouse-gas emissions continue, according to a new climate simulation. Unless action is taken to reduce these emissions, airlines will need better turbulence forecasts and passengers must prepare for a rougher ride.

Today, turbulence accounts for 64% of weather-related accidents in the aviation industry and is the leading cause of serious injuries to flight attendants.

Luke Storer of the University of Reading, UK, and collaborators compared computer simulations of turbulence under pre-industrial conditions and for a possible future climate of 2050–2080 if present-day emissions continue. If that’s the case, the team found, the frequency of severe turbulence is set to increase by 110% over North America and 160% over Europe, some of the world’s busiest international airspace.
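As a quick sanity check (a trivial sketch, not part of the study itself), the quoted percentage increases translate directly into the headline claim that turbulence is “set to double”:

```python
# Convert the study's quoted percentage increases in severe clear-air
# turbulence into multiplicative factors: a 110% increase means 2.1x
# as frequent, and a 160% increase means 2.6x as frequent.
increases_pct = {"North America": 110, "Europe": 160}
factors = {region: 1 + pct / 100 for region, pct in increases_pct.items()}
print(factors)  # {'North America': 2.1, 'Europe': 2.6}
```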

This global approach builds on previous work that suggested a worsening of turbulence in specific places or times of year. The new study not only corroborates these findings, but demonstrates the much broader scope of the effect. All severity categories of turbulence are set to increase in volume, meaning aeroplanes will encounter them more frequently. What is now considered severe turbulence is predicted to become as common as today’s moderate turbulence, placing airline staff and passengers at greater risk.

Clear-air turbulence, the type studied here, is invisible and cannot be detected by onboard radar. Currently it’s tackled primarily with predictive forecasting, which shows genuine skill but leaves significant room for improvement. Storer and colleagues’ results emphasize the importance of improving this system to reduce the risk of injury to passengers and crew.

The study also indicates that designers working on the next generation of commercial airliners must prepare for a more turbulent flying environment. The worst-case emissions scenario this study uses may not come to pass if the world acts on greenhouse-gas emissions, but it could be wise to prepare for a bumpy ride.

Using the HadGEM2-ES atmospheric model developed by the UK’s Met Office, the researchers compared the turbulence present in two simulated atmospheres. One had pre-industrial greenhouse-gas levels. The other assumed greenhouse-gas emissions followed Representative Concentration Pathway 8.5 of the Intergovernmental Panel on Climate Change (IPCC). This implies total greenhouse-gas concentrations (including carbon dioxide and other gases such as methane) equivalent to 1370 ppm of carbon dioxide by 2050, and global warming of around 2 °C over pre-industrial temperatures.

Storer and colleagues published their work in Geophysical Research Letters.

Experiments shed new light on how life-giving carbon is forged in stars

Two teams of nuclear physicists have carried out the most sensitive measurements yet of how an excited form of carbon-12 known as the Hoyle state breaks down into its three constituent alpha particles. The results provide a better picture of the structure of excited states in carbon and other nuclei and improve our understanding of the fusion processes that forge new elements inside stars, say the researchers.

Carbon is essential for life on Earth and is created via nuclear fusion within red-giant stars. In what is known as the triple-alpha reaction, one helium nucleus (alpha particle) fuses with another to create beryllium-8, which then combines with a third helium nucleus to form carbon-12. But there is a problem with this basic picture. Beryllium-8 appears too short-lived to account for the large amounts of carbon in the solar system: it decays within a mere 8 × 10⁻¹⁷ s, giving it only a minuscule chance of merging with an alpha particle before it disappears.

In 1953 the British astronomer Fred Hoyle proposed that this problem could be overcome if the fusion of beryllium-8 with an alpha particle generates an excited state of carbon-12 that very quickly decays to carbon’s ground state via the emission of a pair of gamma rays. Known as the Hoyle state, this was subsequently observed in the energy spectra of nuclear collisions carried out in the laboratory. To this day, however, the structure of the Hoyle state remains a puzzle.
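Written out in standard nuclear notation, the chain described above runs:

```latex
\begin{align*}
{}^{4}\mathrm{He} + {}^{4}\mathrm{He} &\rightarrow {}^{8}\mathrm{Be} \\
{}^{8}\mathrm{Be} + {}^{4}\mathrm{He} &\rightarrow {}^{12}\mathrm{C}^{*}
  \quad \text{(Hoyle state)} \\
{}^{12}\mathrm{C}^{*} &\rightarrow {}^{12}\mathrm{C} + 2\gamma
\end{align*}
```

The second step is the bottleneck: it must happen within beryllium-8’s fleeting lifetime, which is why the resonant Hoyle state is needed to make the reaction fast enough.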

Clusters or shells?

The conventional model of nuclear structure tells us that protons and neutrons act as individual particles that fill up “shells” like electrons in an atom. But this model comes nowhere close to correctly predicting the Hoyle-state energy level. An alternative scheme in which protons and neutrons cluster together to form three alpha particles within the carbon nucleus gets much closer to the correct excitation energy. But it remains unclear whether carbon-12 is made up entirely of clusters or whether it is partly cluster-like and partly a collection of individual protons and neutrons. There is also uncertainty over how the clusters interact with one another.

One way of trying to clear the fog is to dismantle the Hoyle state into its three constituent alpha particles to find out the relative frequency – or “branching ratio” – of two competing decay routes. One route is the (much more common) two-step route in which the excited carbon-12 nucleus first gives off a single alpha particle to create beryllium-8, which then breaks up into two further alphas. The other route is a single step in which the original nucleus breaks apart into three alpha particles simultaneously. The reason for doing this is that different models of the Hoyle state predict different values for the branching ratio.

Measuring the branching ratio involves creating multiple Hoyle states by firing a beam of nuclei at a suitable target and measuring the alpha particles given off. Distinguishing between the one-step and two-step processes relies on being able to determine the relative energy and orientation of the various alpha particles. Groups carrying out these measurements in the past, however, have had to contend with background effects caused by two particles of a similar energy hitting the same silicon detector.

Careful detection

The latest research overcomes this problem through a carefully chosen arrangement of detectors that ensures each alpha particle from the break-up of carbon-12 hits a separate piece of silicon. The two groups involved did so using different nuclear reactions – Robin Smith and colleagues at the University of Birmingham in the UK fired helium nuclei at a carbon target, while Daniele Dell’Aquila of the University of Naples and team employed a reaction involving nitrogen and deuterium. Both arrived at very similar results, establishing upper limits on the single-step process of 0.047% and 0.043% of decays, respectively. That represents a more than fourfold improvement in sensitivity compared with the previous best result, which put the upper limit on the single-step occurrence at 0.2%.
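The quoted improvement is easy to verify with simple arithmetic on the numbers in the text:

```python
# Upper limits on the single-step (direct) three-alpha decay branch,
# expressed as percentages of Hoyle-state decays.
previous_limit = 0.2            # previous best experimental result
new_limits = [0.047, 0.043]     # Birmingham and Naples groups
improvements = [previous_limit / limit for limit in new_limits]
print(improvements)  # roughly 4.3-fold and 4.7-fold better
assert all(factor > 4 for factor in improvements)
```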

“It is certainly encouraging that they get basically the same result,” says David Jenkins of the University of York in the UK. “It refutes some recent (presumably mistaken) experimental work which identified a substantial branch for the three-alpha decay mode.”

Smith and his colleagues argue that the new results begin to put pressure on the idea that the Hoyle state consists entirely of alpha-particle clusters. Smith notes that theorists who carried out full three-body calculations of the Hoyle state decay in 2014 arrived at a value for the single-step process of 0.1%, while a model assuming that the three alpha particles condense into the nuclear equivalent of a Bose–Einstein condensate arrives at a similar value: around 0.06%. “We believe that the data provide good evidence that the alpha-condensate interpretation of the Hoyle state is problematic,” he says.

Dell’Aquila adds that the improved constraints on the one-step decay process can help refine our understanding of how elements are made in stars. He points out that in stars that burn helium at low temperatures, the existence of the one-step process significantly affects production of carbon-12.

Detection limit

Both groups say that they have pushed the sensitivity of conventional solid-state detectors to their limit, and that further improvement will require developing gas target detectors. Smith explains that the new detectors would image the paths of the alpha particles as they separate during the decay, so allowing “the unambiguous identification of a direct decay based on their relative directions”.

Both sets of results are described in separate papers in Physical Review Letters.

Optimizing the cryopreservation of cells

Medical research experiments using cells may take place weeks, months or even years after the cells have been extracted. It would be unreasonable to keep growing the cells for that entire time, so instead their growth is paused. This process, called cryopreservation, cools cells and tissues to temperatures at which biological activity effectively stops, allowing for indefinite storage.

Freezing of cells is commonly performed in a liquid suspension with a cryoprotectant, but advances in tissue engineering have enabled the growth of cells in 3D scaffolds called hydrogels, from which cells are difficult to extract and cryopreserve. This has led to the freezing of cells within a hydrogel. Now, researchers from the University of Trento in Italy have assessed the effects of a range of cryoprotectants on cells within a hydrogel (Tissue Engineering Part C: Methods doi: 10.1089/ten.TEC.2017.0258).

The group found that cells that were encapsulated within the alginate hydrogel had similar levels of cell death to the control (cells within a liquid suspension), but also had higher levels of metabolic and proliferative capabilities when thawed and cultured.

The protocol involved mixing different cryoprotectants – dimethyl sulfoxide (DMSO), glycerol and trehalose – at different concentrations, with cell-laden alginate hydrogels and the control cells. The effect of the different cryoprotectants on the cells within the hydrogel post-thawing had not been investigated previously.

The freezing and thawing of cells is potentially lethal, introducing a lot of stress when not performed under appropriate conditions. Adding cryoprotectants before freezing is crucial, as they lower the freezing temperature of the cell solution and thereby limit the damaging formation of intracellular ice crystals that can puncture and kill cells. Issues arise when preparing cell solutions for freezing and after thawing, because the concentrations used for cryoprotection are toxic to unfrozen cells.

In this study, the researchers identified the best cryoprotectants and optimal conditions for use with the alginate hydrogel: DMSO and glycerol provided the best response in cells after thawing.

Cells used within the lab have a limited lifespan, with only a certain number of cell doublings before they succumb to senescence or a loss of morphology, highlighting the necessity for safe storage of cells over an extended time-frame. The authors suggest that their approach could be used to evaluate the effects of a range of cryopreservation methods. In addition, they believe the cell encapsulation and cryopreservation techniques could be introduced in tissue engineering as a method for cell banking, cell expansion and for bottom-up development of tissues.

Illuminating a radio icon

2017 marks a couple of important anniversaries for the astrophysics community at Jodrell Bank. First, it is the 60th anniversary of the first light of the Lovell Telescope, which was at the time the largest steerable dish telescope in the world (it is still the third largest). Second, it is the 50th anniversary of the first detection of pulsars, made by Dame Jocelyn Bell Burnell, who was then a PhD student at the University of Cambridge.

The telescope takes its name from Sir Bernard Lovell who founded the Jodrell Bank Observatory in 1945. Over the decades, this astrophysics hub has been a valuable tool for studying various astrophysical objects and it even played a role in tracking events during the Space Race. Today it is the HQ of the Square Kilometre Array, a distributed telescope array that promises to usher in a new era in radio astronomy.

In recent years, Jodrell Bank has also developed a significant science-outreach programme, including the Jodrell Bank Discovery Centre, which opened in 2011 and now attracts thousands of visitors every year. The Blue Dot Festival is an extension of this and has been graced by music acts including Elbow, Sigur Rós and the Flaming Lips. The festival also features the winning work of the COSMOS art–science project, a collaboration between Jodrell Bank, Blue Dot Festival and the arts organization Abandon Normal Devices.

This year’s winning artist was Daito Manabe from Tokyo, an audiovisual artist whose specialisms include the visualization of data. Among his previous collaborators is the Icelandic popstar Björk. In this latest project, Manabe collaborated with astrophysicists at Jodrell Bank to transform pulsar data (live and archival) into sounds and images projected onto the Lovell dish. In this podcast, Glester experiences the event and discusses the project with Manabe. You can also hear what the rest of the festival-goers thought of this otherworldly experience.

Do topological waves occur in the oceans?

Topological water currents could exist in Earth’s oceans, according to physicists in France and the US. The team has made a connection between the physics that gives topological insulators their unusual properties and Kelvin and Yanai waves – which exist in the oceans and atmosphere and are involved in regular variations of the Earth’s climate system.

Topological insulators are insulating materials that support electrical currents on their surfaces. They have captivated the imaginations of condensed-matter physicists because they offer ways of creating materials with new and potentially useful properties.

In topological insulators, the topology of the electron bands causes the electrons of opposite spin to move in opposite directions – resulting in circular motions. The effect is closely related to the so-called quantum Hall effect, which occurs in 2D conducting materials such as thin films in the presence of a magnetic field. The resulting circular motion permits no net flow of current through the material – except at the edges, where the circular orbits are truncated so that electrons can move around the surface in a series of arcs.

Topological protection

Because these motions arise from purely topological features of the electronic structure, they cannot easily be disrupted by defects: they are said to be “topologically protected”. Topological insulators are one of the most studied examples of so-called quantum materials, whose electronic or magnetic behaviour is strongly governed by quantum-mechanical effects. Some of these materials might find applications in, for example, quantum computing – but their chief attraction for physicists is that they unify a whole bunch of previously disparate concepts.

That unity reaches beyond materials science, according to Pierre Delplace and Antoine Venaille of the École Normale Supérieure de Lyon in France, working with Brad Marston of Brown University in Rhode Island. They say that two types of long-recognized wavelike flow in the atmosphere and oceans, called Kelvin and Yanai waves, also have a topological origin, which is mathematically analogous to the surface-conducting states of topological insulators.

This equivalence shows up in the mathematics of the problem, the researchers say. In condensed-matter physics, electron states are described by the wavelike Schrödinger equation. Orbits are created by the fact that an applied magnetic field breaks time-reversal symmetry: the solutions to the Schrödinger equation change when time t is replaced by –t. The surface electron flows then arise from a breakdown of translational symmetry that occurs at the surface.

Coriolis force

All these features, say Delplace and colleagues, are mimicked in the wave equations for flows in the atmosphere and oceans, where the Coriolis force – an effective force due to the Earth’s rotation that displaces flows to the right and left in the northern and southern hemispheres, respectively – plays the role of a magnetic field. These equations produce waves trapped close to the equator, always compelled to travel eastward, which are known as equatorial Kelvin and mixed Rossby-gravity (Yanai) waves.
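The article does not reproduce the equations, but a standard textbook starting point (the rotating shallow-water equations on the equatorial beta-plane, given here for context rather than taken from the paper itself) makes the analogy concrete, with the Coriolis parameter f = βy playing the role of the magnetic field:

```latex
% Rotating shallow-water equations on the equatorial beta-plane (f = \beta y)
\begin{align*}
\partial_t u - \beta y\, v &= -g\, \partial_x \eta \\
\partial_t v + \beta y\, u &= -g\, \partial_y \eta \\
\partial_t \eta + H \left( \partial_x u + \partial_y v \right) &= 0
\end{align*}
% u, v are the eastward and northward velocities, \eta the surface
% displacement and H the mean depth. The equatorial Kelvin wave is the
% solution with v = 0: it propagates eastward at c = \sqrt{gH}, trapped
% near the equator with amplitude \propto e^{-\beta y^{2}/2c}.
```

The sign change of βy across the equator mirrors the way a magnetic field reverses under time reversal, which is why the equator acts like the edge of a topological insulator, hosting protected eastward-moving waves.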

Other such waves also exist, such as the long-period pure Rossby waves, but these are not “topologically protected” in the same way. Kelvin and Rossby waves can act as precursors to the quasi-periodic ocean–atmosphere oscillation called the El Niño–Southern Oscillation, which produces significant climatic effects such as drought or high rainfall in some equatorial regions.

“A bit mysterious”

“The [theory of] equatorial waves was worked out in the 1960s but their topological origin went unnoticed until now,” says Marston. Instead, he says, the waves were seen simply as solutions to the shallow water equations near the equator – “but still they seemed a bit mysterious”.

The researchers spotted the connection, says Marston, through “intuition about the underlying physics” – particularly in the way that both magnetic fields and planetary rotation break time-reversal symmetry. It’s obvious once you’re attuned to the concepts of topological insulators, he says – all the recent candidates for a condensed-matter physics position at Brown immediately saw the link when shown the equations for Kelvin waves in an old geophysics textbook.

The resilience of such equatorial waves has been understood as a consequence of the mismatch in the dispersion (wavelength-frequency) relation between the equatorial waves, and other waves, Marston explains – “but now we can see that the mismatch is guaranteed by topology.” He adds: “I think the real consequences of topology remain to be discovered. Also there may be new types of waves with topological origin waiting to be found.”

Fresh perspective

Atmospheric scientist Isaac Held of the US National Oceanic and Atmospheric Administration’s Geophysical Fluid Dynamics Laboratory at Princeton University agrees that this is a new interpretation of these equatorial waves. He thinks that the fresh perspective might help to understand why they are so robust, for example in the face of random noise in atmospheric or oceanic flows.

“This is really exciting work”, says physicist Sebastian Huber of the Swiss Federal Institute of Technology (ETH) in Zürich, who has previously demonstrated a mechanical analogue of topological insulators using an array of small pendulums. He says it reflects the “topological mindset” that physicists have acquired in recent years. “Many people tell me that our own mechanical set-up opened their eyes to the fact that the [vibration] band topology is not related to quantum mechanics but rather wave physics in general.”

US nuclear-physics facility completes $338m upgrade

The Jefferson National Accelerator Facility has completed a $338m upgrade to the lab’s Continuous Electron Beam Accelerator Facility (CEBAF), making it the world’s most powerful microscope for studying the nucleus of the atom. CEBAF will now be able to produce a continuous beam of electrons with an energy of 12 GeV – more than double the energy of its previous incarnation – and with a much greater intensity. The Department of Energy, which operates the lab, approved the completion of the upgrade late last month.

CEBAF is a 1500 m-long oval-shaped track built 7.5 m underground. It accelerates electrons using superconducting radio-frequency modules and then smashes the beam into experimental targets. Huge detectors then collect the fragments. By studying the momenta and directions of the scattered fragments, physicists can probe the inner structure of protons and neutrons to test the theory of quantum chromodynamics and the Standard Model of particle physics.

Exotic meson spectroscopy

As well as improvements to the accelerator, the upgrade also involves the construction of a new experimental hall, taking the total to four. Known as “Hall D”, it will use the full 12 GeV beam to perform exotic-meson spectroscopy, enabling researchers to map the spectrum of exotic mesons that could provide clues about why quarks – the fundamental building blocks of matter – do not exist on their own. The three existing halls, which will use 11 GeV beams, have also been upgraded and will let researchers explore the quark–gluon structure of hadrons.

“The team is thrilled to reach the successful conclusion of this complex project,” says Allison Lung at the Jefferson Lab, who is chief planning officer and director of the upgrade project. “This moment is a culmination of the dedication and the hard work of hundreds of Jefferson Lab staff members, users and subcontractors.”

David Ireland, head of nuclear physics at the University of Glasgow in the UK, who regularly uses the facility, says that CEBAF will be integral to determining where the proton gets its mass and spin from. “Previously we have only been able to touch on the properties of the proton and neutron,” Ireland told Physics World. “The upgrade to 12 GeV will now allow us to look at scales smaller than the proton and in a lot further detail.”

Important milestone

That view is backed up by nuclear physicist Daniel Watts from Edinburgh University in the UK, who is also a regular user of the facility. “The completion of the upgrade is a big milestone for the international nuclear-physics community,” he says.

Planning for CEBAF’s upgrade began in 2004 with a design completed in 2007. Construction started in 2008 with initial operations beginning in 2014.

Scanning tunnelling microscope creates all-graphene p–n junctions

Using the electric field of a scanning tunnelling microscope (STM), scientists in the US, Europe and Japan have created nanoscale circular p–n junctions in graphene. By controlling the voltage between the probe tip and the substrate, the researchers altered the size of the junction, forming deep, narrow electron traps, or broad optical wave guides. The work overcomes one of the main challenges in exploiting graphene’s unique electronic structure, and could be used to create all-in-one, multifunctional optoelectronic devices with unprecedented capabilities.

p–n junctions are formed at the interface between two materials, or between differently doped regions of the same material. The positive, p-doped side corresponds to an excess of holes, and the negative, n-doped side an excess of electrons. Individually, both sides of the junction are good conductors, but when brought together an external bias is required for a current to flow. By reversing this bias, the flow can be stopped. This principle underlies the binary “on/off” states on which digital logic commands are built.

One of the fundamental difficulties in using such structures in graphene-based electronics arises from the linear (Dirac-like) energy dispersion of the material’s electrons. This means that electrons can travel through potential barriers of infinite height, making the “off” state very hard to achieve. Now, Eva Andrei at Rutgers University in the US, and colleagues in the US, Belgium and Japan, have found a way to bring these electrons under control.

Andrei and her team created nanoscale circular p–n junctions using the electric field of a nearby STM tip. Positively biasing the tip attracts negative charges to a circular region in the graphene directly below. The interface between this n-doped region and the p-doped graphene around it forms the p–n junction.

Doping levels in each of these regions can be tuned easily by changing the voltage applied to the tip or the substrate. The size of the junction can be controlled to some extent by the doping, but producing a junction of the smallest lateral extent requires an atomically sharp STM tip. The researchers achieved this by pressing the probe tip repeatedly into gold before slowly retracting it. Each iteration caused gold atoms to be dragged out at the apex of the tip, eventually forming a narrow gold nanorod on the end.

The team discovered that when the circular junctions are very small, they act as deep, narrow potential wells that trap the graphene’s Dirac electrons. When a smaller voltage is applied to the underlying silicon, the potential well becomes broader and can no longer trap electrons. Instead it becomes host to a set of electronic modes that can be used as optical guides.

Creating these junctions near a reflecting plane – a gold contact in this case – can enhance their optical guiding capabilities. Electronic modes from the junction travel outwards like ripples in a pond and reflect from the contact. They form a standing wave pattern that spans an area much larger than the lone p–n junction.

The p–n junctions created by Andrei and colleagues provide a mechanism to confine electrons in graphene and, with simple tuning, can act as optical guides, providing a bridge between electronics and optics. Now that the dynamics of p–n junction formation have been described, more sophisticated architectures can be achieved by depositing nanowire or nanodot arrays on gated graphene samples, avoiding the complexity and cost of using an STM probe. The technique promises unique and interesting future applications.

Full details of the research are reported in Nature Nanotechnology.

A tenner in space, why just 0.3% of LIGO bagged the Nobel, sounding-off in Havana

By Hamish Johnston and Matin Durrani

Primary school children in Scotland have celebrated the launch of a new £10 note by launching it into space. Well, sort of. The Royal Bank of Scotland note was actually sent aloft on a high-altitude balloon with a camera to capture the event for posterity.

If you know your astronomers, you will recognize Mary Somerville on the tenner. She was active in the early 19th century and famously predicted the existence of Neptune by its influence on the orbit of Uranus. She and Caroline Herschel were the first women to be members of the Royal Astronomical Society and she also wrote the bestselling science book On the Connexion of the Physical Sciences.

There is much more about Somerville in our podcast “Mary, Queen of Scottish banknotes”.

“The Nobel Prize for Physics went to three men, but the paper describing their discovery lists 1000 authors. What gives?” So asks Nicole Kobie in Wired. Her article delves into the machinations that ensured that the right 0.3% of the LIGO collaboration are going to Stockholm. See “You can’t give a Nobel prize to a thousand people. Here’s why”.

There was a fascinating tale by Carl Zimmer in the New York Times this week about the curious medical symptoms that recently struck nearly two dozen diplomats at the American embassy in Havana, Cuba. Ranging from hearing loss to cognitive difficulties, the problems were so serious that the US State Department concluded that the staff were the victims of a stealth attack. It also withdrew non-essential personnel from Havana, and even advised Americans not to visit. According to Zimmer’s report, US government officials have suggested anonymously that the diplomats may have been assaulted with some sort of sonic weapon, possibly in their homes.

Well, we know that the Pentagon has funded the development of loudspeakers to ward off pirates with long-range blasts of sound, but presumably if sound had been used in Havana, we’d have heard it. In that case, could infrasound, which lies below the frequency of human hearing, or ultrasound, which lies above it, have been deployed? Both are unlikely, according to acoustical physicists and others quoted by Zimmer, but ultrasound is the less implausible option of the two. For example, Steven Garrett, who taught acoustics at Penn State University before retiring last year, says he used to demonstrate ultrasound beams to his students. Garrett says he’d get nauseous and develop a headache; eventually he took to wearing earmuffs and earplugs. It’s “anecdata” for sure, but Zimmer’s article makes intriguing reading.

Light polarization modulated rapidly by gold nanorods

A new and quicker way of modulating the polarization of light pulses has been unveiled by researchers in the UK. The technique uses a second laser pulse to control polarization and could be used in a range of practical applications including pharmaceutical development.

Devices that modulate the polarization of an optical signal play important roles in a range of technologies. The data capacity of an optical fibre can be increased, for example, by encoding information into different polarization states of light that are then transmitted simultaneously. Today, the speed at which this multiplexing can be achieved is restricted by how quickly the polarization can be modulated.

Most active modulators available today are hybrid optical and electronic devices, which means that they are inherently slow. Some ultrathin optical metasurfaces can significantly alter polarization, but these have been passive devices that are not suitable for making an active polarization modulator.

Gold nanorods

In the new research, Luke Nicholls and colleagues at King’s College London used a metasurface comprising a grid of 400 nm-long gold nanorods in an aluminium-oxide matrix. This structure supports both an ordinary light wave, in which the electric field oscillates perpendicular to the axes of the nanorods, and an extraordinary wave, in which it oscillates along the nanorods. The material’s refractive indices for these two waves can be very different.

For the ordinary wave, the material’s refractive index remains almost unaffected by frequency. For the extraordinary wave, however, the refractive index changes not only in magnitude but also in sign. When the frequency of light drops below the natural oscillation frequency of the electrons in the gold nanorods (the effective plasma frequency), the refractive index becomes negative. This means that the metamaterial inverts the phase of the extraordinary wave: “You’re giving [the extraordinary wave] a phase delay relative to the ordinary wave,” says Nicholls, “which in turn flips the polarization state.”
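The sign change described above can be sketched with a simple lossless Drude-type model for the extraordinary wave. All numbers here are made up for illustration and are not the King’s College device parameters:

```python
import numpy as np

# Illustrative Drude-type permittivity for the extraordinary wave of a
# nanorod metamaterial (hypothetical, normalized units -- not real device data).
eps_b = 2.0   # assumed background permittivity of the nanorod/alumina composite
wp = 1.0      # assumed effective plasma frequency

def eps_ext(w):
    """Lossless Drude permittivity: negative below the effective plasma frequency."""
    return eps_b * (1.0 - wp**2 / w**2)

# Below wp the permittivity (and so the refractive index squared) is negative;
# above wp it is positive, so the extraordinary wave's phase behaviour differs
# on either side of the resonance.
print(eps_ext(0.8))   # frequency below wp: negative
print(eps_ext(1.5))   # frequency above wp: positive
```

In this toy model the permittivity passes through zero exactly at the effective plasma frequency, which is where the phase response of the extraordinary wave changes character.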

It would be impracticable in telecommunications, and impossible in many sensing applications, to constantly change the signal’s frequency to achieve a different polarization. However, the team had another trick up its sleeve. “When you shine an intense laser pulse on the material, it’s absorbed by the electron gas of the material,” explains Nicholls, “and luckily for us the refractive index is closely linked with the energy of the electrons.”

Frequency shift

The effective plasma frequency can therefore be shifted to lower frequencies with a brief “control pulse” immediately before the signal pulse. The signal frequency can thus sit below the effective plasma frequency – meaning its polarization is flipped – under normal circumstances, but above it – meaning its polarization is unaffected – if the metamaterial has just been hit by the control pulse. The team also showed that one pulse can perform both control and signal functions: the leading edge of the pulse controls how the polarization of the rest of the pulse is affected, with a more intense pulse being rotated more strongly.

“More generally there is a resonance in one component of the transmitted field (the extraordinary wave) and no resonance for the perpendicular component (ordinary wave),” explains Nicholls. “Like any resonance there is a phase shift between frequencies below and above the resonance. The output polarization of a signal beam at a set wavelength is thus controlled by changing the relative position of the resonance to the signal pulse with the control pulse and inducing a phase shift in the extraordinary component of the transmitted signal pulse.”
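The effect of the phase shift on the polarization can be illustrated with Jones calculus. The sketch below is hypothetical: it simply treats the metasurface as a retarder that delays the extraordinary (here, y) component, and shows that a π phase shift sends light polarized at 45° to the orthogonal −45° state:

```python
import numpy as np

# Input: linear polarization at 45 deg, i.e. equal ordinary (x) and
# extraordinary (y) components (illustrative Jones vector).
E_in = np.array([1.0, 1.0]) / np.sqrt(2)

def transmit(E, phase):
    """Model the metasurface as a retarder: a phase delay applied only to the
    extraordinary axis, leaving the ordinary component untouched."""
    J = np.diag([1.0, np.exp(1j * phase)])
    return J @ E

# Signal below the effective plasma frequency: pi phase shift on the
# extraordinary component, flipping 45 deg polarization to -45 deg.
E_flipped = transmit(E_in, np.pi)

# After the control pulse shifts the resonance away: no extra phase,
# polarization unchanged.
E_same = transmit(E_in, 0.0)

print(np.round(E_flipped.real, 3))
```

A π shift on one component is the extreme case; intermediate phase shifts, as induced by the leading edge of a single intense pulse, rotate the polarization by intermediate amounts.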

Relaxation limit

The researchers modulated wave polarization at frequencies up to 300 GHz – the fastest achievable with current technology is 40 GHz. It may be possible to go even faster: “At the moment, our speed is limited by the relaxation of the electrons back to the ground state,” says Nicholls. “We are looking at ways of understanding this process more and speeding it up.”

In principle, rotating the polarization of light faster could increase the number of signals squeezed into a single optical fibre. However, the researchers believe the technique could find other uses. It is often crucial that pharmaceutical molecules are produced in only one of their two mirror-image (left- or right-handed) forms, because the other can be ineffective or even toxic. The proportion of molecules in left- and right-handed forms can be measured by analysing how a solution affects different polarizations of light, a technique called polarimetry.
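The polarimetry measurement works because the observed rotation of a linearly polarized beam scales with the imbalance between the two handed forms (the enantiomeric excess). A minimal sketch, with illustrative numbers that are not taken from any real drug:

```python
# Polarimetry sketch (assumed, illustrative values): observed optical rotation
# of a chiral solution scales with the enantiomeric excess (ee) of the sample.
specific_rotation = 66.5   # deg*mL/(g*dm), assumed specific rotation
path_length = 1.0          # sample cell length in dm
concentration = 0.1        # solute concentration in g/mL

def observed_rotation(ee):
    """Rotation in degrees for a given enantiomeric excess, ee in [-1, 1]:
    +1 is a pure sample of one hand, 0 a racemic (50/50) mixture."""
    return specific_rotation * path_length * concentration * ee

print(observed_rotation(1.0))   # pure single-handed sample: maximum rotation
print(observed_rotation(0.0))   # racemic mixture: no net rotation
```

A fast polarization modulator would let such a measurement be repeated quickly enough to follow the handedness of the molecules while a reaction is still in progress.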

“If they can switch the polarization on a timescale on which the chemical reactions are happening,” says Nicholls, “then hopefully drug developers can understand what processes are leading to these bad orientations and potentially cut them out.”

Significant improvement

Andrea Alù of The University of Texas at Austin, who was not involved in the research, describes it as “a significant improvement” although “not necessarily surprising”: he notes that a recent paper from US researchers in the same journal describes similar effects with reflected waves. Nicholls says that this paper was submitted after that of the King’s group.

Alù points out that the effective plasma frequency resonance is a maximum in energy dissipation and therefore signal loss. This could make the process untenable in telecommunications, but he believes the set-up is promising elsewhere: “If you want to do ultrafast chemical sensing that’s not possible at present,” he says, “loss is not necessarily the first metric you look at.”

The research is described in Nature Photonics.

Copyright © 2026 by IOP Publishing Ltd and individual contributors