Could ammonia be the secret to shipping carbon-free?

While writing my February column about environmental activist Greta Thunberg sailing rather than flying across the Atlantic, I came across information that shocked me. According to the Economist (11 March 2017), the emissions of nitrogen and sulphur oxides from 15 of the world’s largest ships match those from all the cars on the planet. Indeed, if the shipping industry were a country, it would be ranked between Germany and Japan as the world’s sixth-largest emitter of carbon dioxide.

Shipping is the lifeblood of the global economy and 90% of trade is seaborne. More than 90,000 ships crossed the oceans in 2018, burning two billion barrels of the dirtiest fuel oil. Ships belch out pollutants into the air, principally sulphur dioxide, nitrogen oxides and particulate matter. These nasties have been steadily rising and endangering human health, especially along key shipping routes. They also create 2–3% of the world’s total emissions of carbon dioxide and other greenhouse gases.

But because of its international nature, shipping – like aviation – isn’t covered by the Paris Agreement on climate change, which seeks to limit the global temperature rise to 2 °C this century by reducing emissions. Instead, it is up to the International Maritime Organization (IMO) to negotiate cuts in emissions from the shipping industry. The IMO wants to reduce emissions from the sector by 50% by 2050, but it is facing increased demand from population and economic growth. If left unchecked, maritime emissions will rise six-fold by that date.

The worst fuel?

Most ships today burn bunker fuel. It’s a “heavy” fuel oil – mostly what’s left over after crude oil is refined. Literally, it’s the dregs. It’s also toxic when burned, and currently contains 3500 times as much sulphur as diesel fuels do.

The IMO has already adopted mandatory measures to cut greenhouse-gas emissions from international shipping. As of last year, all ships over 5000 tonnes – those that emit 85% of all maritime greenhouse-gas emissions – have been required to collect fuel-oil consumption data. And from this year, the IMO has capped sulphur emissions by banning the sale of high-sulphur fuels, which must now contain no more than 0.5% sulphur – down from 3.5%. Ships that stick with high-sulphur fuel must instead fit “scrubbers” to clean up their exhaust gases.

More than $12bn (£9.7bn) has so far been spent on fitting these exhaust-gas scrubbers to ships. These devices process the sulphur, creating a liquid by-product that contains pollutants, heavy metals and carcinogens that can harm sea-life. A total of 3,756 ships had scrubbers installed by 2019, which sounds great until you realize that about four-fifths of vessels – 3,014 to be precise – have “open-loop” scrubbers that simply pump the toxic liquid by-products straight back into the sea. Worse still, scrubbers increase fuel consumption by about 2%, adding to carbon-dioxide emissions.

It is easy for the IMO to seem like the bad guys, but it’s made up of 174 member states, each fighting its own economic corner. In my view, the IMO – a UN body – is doing its best. Maersk – the world’s largest container-shipping company – estimates that the IMO’s 2020 rulings will result in a $2bn rise in its annual fuel costs as it switches from high-sulphur fuel to marine gas-oil, which is almost 50% more expensive. The company itself has invested $1bn in cleaner-technology development over the last four years.

Ammonia to the rescue?

One way of making shipping greener could lie with ammonia (NH3). A pungent gas in its natural form, ammonia is widely used by farmers as a fertilizer and might seem an odd saviour for the shipping industry, especially as the manufacturing process is far from green. Ammonia is made by reacting nitrogen at high temperatures and pressures with hydrogen obtained from methane. The latter is an energy-hungry process, known as “steam methane reforming”, that accounts for 1.8% of all carbon-dioxide emissions.

However, according to a new report from the Royal Society, ammonia has a vital role as a zero-carbon fuel and energy store. It says that the maritime industry is likely to be an early adopter of “green ammonia” – a fuel made by mixing nitrogen from the air with hydrogen obtained by electrolysing water using electricity from sustainable sources. Green ammonia could either be burnt in a ship’s engine or used in a fuel cell to produce electricity. Water and nitrogen are its only by-products and it has an energy content of 3 kWh per litre of fuel compared with just 2 kWh/l for liquid hydrogen.

Although conventional fuel oil has a higher energy content of 10 kWh/l, ammonia is easily stored in bulk as a liquid at modest pressures (10–15 bar) or refrigerated to –33 °C – making it an ideal chemical store for renewable energy. Ammonia also benefits from an existing, near-global distribution network, in which it’s stored in large refrigerated tanks and transported around the world by pipes, road tankers and ships.
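
To get a feel for what these energy densities imply for tank sizes, here is a back-of-envelope Python comparison using the figures quoted above. The energy budget is an arbitrary round number chosen purely for illustration:

```python
# Volumetric energy densities quoted above, in kWh per litre
energy_density = {"fuel oil": 10.0, "ammonia": 3.0, "liquid hydrogen": 2.0}

energy_needed = 1_000_000  # arbitrary illustrative energy budget, in kWh

for fuel, kwh_per_litre in energy_density.items():
    litres = energy_needed / kwh_per_litre
    relative = energy_density["fuel oil"] / kwh_per_litre
    print(f"{fuel:>15}: {litres:>9,.0f} litres ({relative:.1f}x the fuel-oil tank)")
```

On these numbers an ammonia tank needs roughly 3.3 times the volume of a fuel-oil tank for the same energy, while liquid hydrogen needs five times – one reason ammonia looks attractive as a hydrogen carrier.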

Earlier this year, the shipping body Lloyd’s Register along with engine maker MAN, Samsung Heavy Industries and the MISC Group announced plans to develop ammonia-fuelled tankers to deliver viable deep-sea zero-emission vessels by 2030. The UK’s Science and Technology Facilities Council has also been working on green-ammonia technology with three firms – Siemens, Engie and Ecuity – for some time. Indeed, Bill David – a chemist from the University of Oxford – told me at a recent business event that trucks and cars might one day use this fuel too.

Sure, ammonia costs more than conventional fuels for ships. But if the power to create the hydrogen and ammonia can come from carbon-free sources, such as renewables or nuclear, then surely green ammonia will be the future of carbon-free shipping and other sectors too.

Physics in the pandemic: ‘Meeting face-to-face is a luxury we can’t afford right now’

My research work involves continuously monitoring active galactic nuclei (AGN) in the optical wavelength range through different telescopes. I spend most of my time at ARIES, and the remainder of my time at the institute’s observatory in Devasthal, which is located 70 km further into the Himalayas. Some of our observations are also done remotely with robotic telescopes in other parts of the world and with space-based telescopes.

My work is broadly divided into two parts: performing observations for a few nights every month at Devasthal, and reducing, analysing and modelling data obtained from this and other telescopes to get scientific outputs. On nights when we are scheduled to observe, we have to be present at the observatory along with the technical staff stationed there to help us run everything smoothly. When not observing, we work in our institute’s office cubicles. Most often our working hours extend into the night, with occasional tea and coffee breaks. In terms of group activities, we have daily discussions over arXiv papers within our research groups, weekly discussions with other research fellows in our science club and occasional seminars during the afternoon. We have some sports activities and we usually play volleyball, badminton and table-tennis during the evening.

The world’s biggest lockdown

Owing to safety concerns, ARIES asked us to leave a few days before 25 March, when India’s central government began to enforce the world’s biggest lockdown – effectively confining 1.3 billion people to their homes. Now, there are only a few essential staff on duty at the institute and the observatory. The technical operators at the observatory have been remarkable, keeping the telescope up and running even in these difficult times. Meanwhile, I returned to my home in New Delhi and am currently working from there.

Working remotely is a familiar thing for people involved in astronomy. Whether it’s using data from the robotic telescopes, filing online observation requests or holding meetings with our collaborators over Skype, we regularly work remotely even in normal times. And of course, for space-based telescopes, working remotely is the only possibility we have! With that in mind, I expected my experience of this pandemic to be more of a “change-of-workplace” rather than “work-coming-to-a-halt”. Working from home didn’t seem challenging at first sight, especially for those of us already equipped with the essentials – namely a computer, notepads and a good Internet connection.

But what I have discovered is that it is very difficult to work as usual from home. I read somewhere that the experience is like inviting your family to the workplace during office hours. Even the act of getting up and then walking a few feet to your desk doesn’t feel encouraging most of the time. The atmosphere doesn’t help either, with everyone anxious about what happens next. There are rumours galore even in the national capital – sometimes about food shortages leading to panic buying, sometimes about free buses for migrants leading to some crazily large gatherings of people trying to get home.

Having said that, in these uncertain times it’s better to cherish what we have and not fall into the trap of negativity. I plan my day in the morning and make a list of things to do. My usual workday now involves reading relevant papers, working on my current project for a few hours, learning techniques online to solve some problems, and talking over the phone with friends. This pandemic has given us a good opportunity to catch up with old friends and family members, which eventually reduces the isolation we feel. But of course, working in a group, surrounded by like-minded people who share the same feelings as we go through the little ups and downs of research life – this is a component that suddenly disappeared. We try to make up for that by scheduling video calls occasionally, but the feeling of sitting together and having a chat can’t be replaced at all. Meeting face-to-face is a luxury we can’t afford right now.

Adjusting to a new normal

Initially, India’s national lockdown was supposed to end on 14 April, but after looking at the global and national trends, the government has extended it for another few weeks, to 3 May. I have prepared myself accordingly. Our group meetings have shifted to video conference apps like Skype or Zoom. Analysis work for the data I already have continues with occasional inputs from my supervisor. I plan to get a draft ready by the end of this month. On the observational front, I am able to submit observation requests for the robotic telescopes online, while at the Devasthal observatory my observations are performed by the technical operators with a request over a phone call. So far it has worked out, and I am hopeful it will continue this way until the crisis subsides.

Because we miss the company of friends and colleagues at ARIES, we decided to meet twice a week over Skype for an hour. The director of our institute has initiated a seminar series over Zoom that has been highly successful, with more than 40 people participating every day. I have also signed up to present a talk on this platform. Even though we miss being with our community, we can at least connect with people if we want to, thanks to modern technology. No-one can imagine what our life would have been like had the disease struck in the pre-Internet era.

Even though this crisis has reduced my usual work hours, my ability to work and communicate has not been lost. Luckily, my research doesn’t involve a lab that has been shut; although it involves a few telescopes, these have been functioning so far, albeit with reduced technical staff. The most difficult question I face these days is from my mother. Every morning while having tea, she has been asking, “When will this situation improve?” I have not been able to answer the question convincingly and hence it persists.

Predictions and hopes

Deep down, no-one among us has a convincing answer, as this is a crisis we have never experienced. It may extend for months, and at the moment there is no end in sight. Even if the virus goes away, the effects it will leave behind will be profound. Almost all the national and international conferences have been cancelled, including some as far off as November, which hampers our opportunities to interact with the research community. In the near future, work on ground-based as well as space-based telescopes will definitely slow down, and some projects may not even get started.

One final note. We have been divided along the lines of religion, race, caste and gender for a very long time. But for the time being, the talk of division along these lines has been replaced by talk of doctors, equipment and medical research. If we are seeking a positive message to come out of this crisis, I think it is that we should be united from now on, however Utopian it may sound. Leaders have been talking about the world being different after this crisis is over. I hope one aspect of this difference is that we can rise above the petty divisions among us and bring togetherness to all of humanity.

A new way of analysing ‘horizontal visibility graphs’

Ryan Flanagan, Vincenzo Nicosia and Lucas Lacasa from Queen Mary University of London have developed a new way of analysing “horizontal visibility graphs” – networks derived from a time series of data points – which can be used, for example, in medicine to distinguish patients who have certain diseases from those who do not.

The research is reported in full in Journal of Physics A, published by IOP Publishing – which also publishes Physics World.

What was your motivation for performing this research?

By means of the so-called horizontal visibility algorithm, a sequence of data points (a time series) can be transformed into a horizontal visibility graph (HVG). After doing that, tools of network science and graph theory can be used to describe the time series combinatorially, opening new avenues for characterization.
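
The algorithm itself is compact. In a minimal Python sketch (an illustration, not the authors’ code), two data points are linked whenever every point between them lies strictly below both of them:

```python
def horizontal_visibility_graph(series):
    """Adjacency matrix (list of lists) of the horizontal visibility graph.

    Points x[i] and x[j] (i < j) are linked when every intermediate point
    lies strictly below both of them: x[k] < min(x[i], x[j]) for i < k < j.
    """
    n = len(series)
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < min(series[i], series[j]) for k in range(i + 1, j)):
                adj[i][j] = adj[j][i] = 1
            if series[j] >= series[i]:
                break  # this point blocks i's horizontal line of sight
    return adj

# e.g. in the series [3, 1, 2, 4] the valley at 1 hides nothing from 3,
# so 3 "sees" 1, 2 and 4, while 1 sees only its immediate neighbours
print(horizontal_visibility_graph([3, 1, 2, 4]))
```

Consecutive points are always linked, so every HVG contains the path graph of the series as a backbone; the extra “visibility” edges carry the information about the series’ structure.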

A typical application of this method often tackled in the literature is that of classifying and distinguishing patients with a certain pathology from healthy controls, by using the properties of HVGs as features for automatic diagnosis. But applications range from physiology and neuroscience to physics (characterizing turbulence, for example), finance, biology and more.

While we have a theoretical understanding of why certain properties of these graphs provide important information on the time series structure, for some other metrics such a link is more difficult to grasp. Among other properties, practitioners have used the spectral (i.e. eigenvalue) properties of HVGs as features to distinguish and characterize the complexity of time series. However, to date, theoretical work on how these spectral properties behave is very scarce. This was the main motivation for this work.

What approach did you take?

We explored the spectral (eigenvalue) properties of HVGs from a theoretical angle. We constructed HVGs from different types of time series, generated by the so-called Feigenbaum scenario (periodic series with different periods, and chaotic series with different Lyapunov exponents). We then derived rigorous, analytical and numerical results on the spectral properties of these graphs, to understand how these relate to the time series structure.

We also wanted to understand to what extent the approach often employed in the literature (use of the maximal eigenvalue of the adjacency matrix as a quantifier of the time series complexity) was sensible and well-defined.
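
The feature in question can be illustrated numerically. The following sketch (again illustrative, not the authors’ code) builds HVGs from a period-2 series and a chaotic logistic-map series, then extracts the maximal eigenvalue of each adjacency matrix – the quantity practitioners have used as a complexity score:

```python
import numpy as np

def hvg_adjacency(x):
    """Adjacency matrix of the horizontal visibility graph of series x."""
    n = len(x)
    a = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j)):
                a[i, j] = a[j, i] = 1
            if x[j] >= x[i]:
                break  # line of sight from point i is now blocked
    return a

def max_adjacency_eigenvalue(x):
    # the adjacency matrix is symmetric, so eigvalsh applies
    return float(np.linalg.eigvalsh(hvg_adjacency(x)).max())

# a period-2 series and a chaotic one (logistic map at r = 4)
periodic = [0.1, 0.9] * 100
chaotic = [0.4]
for _ in range(199):
    chaotic.append(4 * chaotic[-1] * (1 - chaotic[-1]))

print("periodic:", round(max_adjacency_eigenvalue(periodic), 3))
print("chaotic :", round(max_adjacency_eigenvalue(chaotic), 3))
```

The paper’s point, discussed below, is precisely that this single number is a less reliable complexity quantifier than had been assumed.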

What was the most interesting finding?

The most striking result was that, at odds with what was assumed before, the maximal eigenvalue of the adjacency matrix is in general not a proper quantifier of the time series complexity. Additionally, we found that other spectral features of the HVG could do a better job.

Besides this, this paper is the first that attempts a rigorous analysis of the spectral properties of this kind of graph. It opens up a line of research on the spectral properties of this family of graphs.

Why is this research significant?

We hope that this research will catch the interest of different groups of scientists. From an applied point of view, our results show that spectral properties of HVGs should be used cautiously if the aim is to characterize the complexity of a time series. From a theoretical point of view, we hope that spectral graph theorists and algebraic combinatorialists will be interested in a range of open problems that we flag. While HVGs have been mostly used in applications, their mathematical foundations are still in their infancy, and there is room for a lot of interesting work to be done.

What do you plan to do next?

There are several open problems. From a mathematical point of view, we aim to fully analytically solve the spectrum of HVGs (at least in the context of current analysis). In the paper, we propose several possible avenues for doing that. Particularly intriguing is to understand the apparent fractal shape of this spectrum (as seen in the image above).

The full results of the study are reported in Journal of Physics A: Mathematical and Theoretical.

Hubble’s best shots: The Horsehead Nebula

Selecting a definitive list of the best images taken by the Hubble Space Telescope is an impossible task. The orbiting observatory has snapped so many superb shots of deep space that everyone has their own favourites based on aesthetic appeal and scientific importance. Nevertheless, in this series I’ll explore 10 images that ought to be contenders.

Number 10 on the list is this image of the Horsehead Nebula. Hubble saw this iconic dark nebula in a very different light in the autumn of 2012. The telescope was always famed for its prowess in visible and ultraviolet light, but with the addition of the Wide Field Camera 3 during its final servicing mission in 2009, Hubble also gained capabilities in the near-infrared. This allowed it to see through some of the nebula’s obscuring dust – which normally appears black against the red glow of the emission nebula behind it – to reveal the delicate, wispy structures that make up the nebula, which acts as a shroud, concealing the birthplace of stars within.

Exploring a materials research hub in the heart of Spain

This short film takes you inside one of Spain’s premier materials science research facilities – the Materials Science Institute of Madrid (ICMM). Managed by Spain’s scientific research council, the institute deals with both fundamental and applied research with a strong focus on areas that could lead to real-world applications. In the video – recorded just before the nationwide lockdown – you meet three ICMM researchers with diverse interests.

María Concepción Serrano López-Terradas is developing graphene-based implants that could be used to treat patients with spinal cord injuries. Felipe Gándara is interested in metal-organic frameworks (MOFs) and covalent organic frameworks (COFs) that could be used for gas storage. Finally, Andrés Castellanos-Gómez is designing methods for straintronics – an emerging field of electronic devices whose properties can be tuned by mechanical deformation.

Within the next couple of weeks on this website, we will also be sharing extended video interviews with all three researchers looking in more detail at their work. In the meantime, you can also take a look at the Physics World Nanotechnology Briefing, published in April 2020. This free-to-read collection celebrates how nanotechnology is playing an increasingly important role in applications as diverse as medicine, fire safety and quantum information.

Artificial lightning strikes encourage growth of shiitake mushrooms

Japanese researchers are closing in on understanding why electrical storms have a positive influence on the growth of some fungi. In a series of experiments, Koichi Takaki at Iwate University and colleagues showed that artificial lightning does not have to strike shiitake mushroom cultivation beds directly to promote growth. Now they are developing technology to use electrical stimulation in the production of the mushrooms, which are popular in many east Asian cuisines.

Bizarre as it may seem, atmospheric electricity has long been known to boost the growth of living things, including plants, insects and rats. In 1775, the priest and physicist Giovanni Battista Beccaria of the University of Turin reported, “it appears manifest that nature makes extensive use of atmospheric electricity for promoting vegetation”.

David Graves at the University of California at Berkeley, an expert on plasmas in food and agriculture who was not involved in this mushroom study, said “Historically, most people who looked at it [atmospheric electricity] systematically found some effect, but it can sometimes be hard to reproduce, so it’s still not by any means fully accepted in the community and there’s little understanding of the mechanism.” However, at recent plasma conferences, Graves was favourably impressed by the results presented by Takaki.

Fruiting bodies

Takaki explains, “We use high voltage electric shock as stimulation to change the mushroom growth state from vegetative to reproductive growth of fruiting bodies”. Takaki is an expert on discharge plasma and high-voltage engineering, and aims to improve the cultivation of shiitake mushrooms in countries that suffer from low yields.

Shiitake mushrooms are grown in hardwood logs in a process that takes one year. First, branching vegetative filaments called hyphae are grown in the logs, which are kept in beds. Farmers then submerge the logs in water for 1–2 days before beating them mechanically. When this is done skilfully, it disrupts the interlinking hyphae, moving the shiitake into its reproductive phase of growth that produces the desirable mushroom caps.

In Takaki’s previous studies, yield increases were achieved by running a direct current through a shiitake mushroom log. But Takaki still wondered – why do natural electric storms indirectly influence the growth of mushrooms located miles away from the lightning strikes?

Other physical events

He hypothesized that it was not purely high-voltage electricity stimulating mushroom growth, but that other physical events must be triggered and ripple out into the surrounding environment. To test this, Takaki’s team investigated the effects of more natural, indirect strikes on growth.

They took logs ready for stimulation, submerged them in water for 24 h, and then arranged them 3 m from the lower and upper electrodes of an impulse voltage generator. In an electrical storm a cloud generates 3–4 strikes on average, so the team programmed the same number of sparks to discharge sequentially between the electrodes.

That was day one. The team harvested mushroom caps more than 50 mm in diameter on days 9, 11 and 13, recording fruiting-body number and size. They collected approximately twice as many mushrooms from logs exposed to lightning strikes 3 m away as from the control set of logs sitting 12 m away within the same facility.

A third set of logs was exposed to daily sets of lightning strikes for a week, and these produced an even higher yield than those exposed to only one set of strikes.

A shock to the system

“The large current from a lightning strike causes temperature to quickly rise from room temperature to about 10,000 °C,” explains Takaki. “This quick rise in temperature rapidly increases the volume of the air producing a shock wave that propagates to, and then vibrates inside the log. This moves the hyphae inside the log, breaking the strands and stimulating fruiting body formation.”

Current mechanical stimulation methods create pressure waves that only partially infiltrate the logs. “Lightning produces a shock wave that propagates homogeneously, so it can cut many parts of the hyphae in a controllable manner,” said Takaki.

The team are working to adapt their equipment for deployment in the fungiculture industry. To find a more practical approach, Takaki’s students tried to simulate shock waves using sound from a speaker. However, from preliminary experiments, Takaki does not think the sound is loud enough to emulate the intense pressure waves generated by a lightning strike.

“We are trying to develop a cheap and compact machine so many people, not only in Japan, but also in Thailand, India and Nepal can use our technology,” said Takaki.

The research is described in Journal of Physics D: Applied Physics.

Spreading tattoo ink reveals radiation-induced necrosis

Cherenkov-excited luminescence images

Monitoring tumour progression during a course of radiation therapy can help determine whether a treatment is working or not. Such tracking is mostly anatomy-based, using weekly CT scans, for example, to measure tumour size. This approach, however, can fail to detect subtle changes at the cellular level – a task that calls for functional imaging.

Functional imaging modalities can characterize responses of the tumour microenvironment. But common techniques, such as PET or diffusion-weighted MRI, require a separate scheduled exam. Researchers from Dartmouth College’s Thayer School of Engineering have now proposed a way to image the tumour microenvironment during radiotherapy without interrupting the clinical workflow, using Cherenkov-excited luminescence imaging (CELI). They achieve this by employing CELI to track the spread of a phosphorescent tattoo ink (Phys. Med. Biol. 10.1088/1361-6560/ab7d16).

CELI works by using the Cherenkov light generated as the treatment beam travels through tissue to excite a luminescent agent – in this case, a UV-sensitive tattoo dye injected into the tumour. The researchers propose that as tumour cells break down in response to irradiation, the dye will spread. This diffusion can then be measured during radiotherapy by using a camera to image the emitted phosphorescence signals.

“In the last few years, our lab has done a great deal of radiotherapy research focused on imaging the radiation beam delivered to the patient in real time using time-gated Cherenkov imaging,” says first author Jennifer Soter, a PhD student in Brian Pogue’s research group. “We were prompted to expand on this technology further and explore additional applications in cancer treatment. In the case of tumour progression in radiotherapy, there’s really no feasible method that enables daily imaging of tumours clinically.”

In vivo investigations

Soter and colleagues used a mouse model to evaluate whether CELI can directly track subtle changes in the tumour microenvironment in response to radiation therapy. On day zero, they injected tattoo ink into the centre of tumours in 20 mice. They then delivered a 1.4 Gy treatment fraction to all mice, using 6 MV X-ray beams from a clinical linac, and performed a baseline CELI session to measure the initial spread of the ink.

The researchers used an intensified CMOS camera to detect visible phosphorescence from the ink. The camera was time-gated with the linac pulses, such that it only recorded the delayed phosphorescence signals emitted between each radiation pulse. Immediately after the first CELI session, they delivered an additional 12 Gy dose to 15 mice, while nine untreated controls received no additional radiation. One to six days later, they delivered a second 1.4 Gy fraction to each mouse and performed the final CELI session.

By comparing images acquired immediately after injection with the final diffusive ink spread, the researchers could determine the tumour response. “Cell death via apoptosis and necrosis can lead to significant sections of the tumour decreasing, resulting in tissue clearance, which is well known from diffusion MRI,” explains Soter. “The ink is a simple label that also diffuses from within the tumour as the pressure decreases.”

In the control group, ink distributions remained constant after four days, with less than 2% diffusive spread. In treated mice, on the other hand, the ink spread reached almost 200% by day six. From two days post-injection, the team could see a significant difference in diffusive spread values between treated mice and control mice.
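
These percentages can be pictured with a toy calculation. The sketch below is purely illustrative and assumes that “diffusive spread” is quantified as the percentage change in the imaged ink area relative to baseline – the paper’s exact definition may differ:

```python
def diffusive_spread_percent(area_initial, area_final):
    """Percentage growth of the imaged ink area relative to baseline.

    Illustrative definition only: the study's precise metric may differ.
    """
    return 100.0 * (area_final - area_initial) / area_initial

# hypothetical areas consistent with the trends above: controls stay
# under 2% spread, while treated tumours approach 200% by day six
print(diffusive_spread_percent(100.0, 101.5))  # control-like case
print(diffusive_spread_percent(100.0, 295.0))  # treated-like case
```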

Following the final CELI session, the researchers euthanized the mice and imaged tumours using hyperspectral cryo-fluorescence imaging to quantify radiation-induced necrosis. Different regions of tumour – the non-perfusing necrotic core, the viable tissue and the dye – exhibited clearly different reflectance spectra.

Ink spread

The ex vivo analysis confirmed the trends seen with in vivo CELI. As the volume of necrotic core increased, the fluorescence image slices showed increasing ink diffusion. Analysing the in vivo ink spread revealed a strong correlation with the percentage of necrotic volume, but a weak correlation with total tumour volume (measured manually with callipers).

The researchers conclude that the spread of injected tattoo ink can be related to radiation-induced necrosis, independent of total tumour volume change. They propose that this in vivo imaging system offers potential to track treatment response daily, without interrupting clinical workflow.

They note that translating this approach to the clinic will require further developments. These include: increasing in vivo image resolution by moving from widefield irradiation to sheet-scanning illumination; developing a phosphorescent tattoo ink that’s safe for humans; and investigating the effects of injection site to account for inherent tumour heterogeneity.

“Next, we plan to perform more studies of immune infiltration as related to the change in ink diffusion, as well as studies of different types of tumours with variations in stromal density and radiosensitivity,” Soter tells Physics World.

Trapped ytterbium ions could form backbone of a quantum internet, say researchers

Ions trapped in nanoscale optical cavities could be used to distribute entangled quantum particles over large distances. That is the conclusion of Jonathan Kindem and colleagues at Caltech in the US, who showed that a trapped ion of ytterbium can remain entangled with a photon for long periods of time. Furthermore, the team showed that the ion’s quantum state can be read out when manipulated by laser and microwave pulses. Their achievement could lay the foundations for a future quantum internet.

Quantum computers are becoming a reality as research labs and companies roll out nascent devices. An important next step in this quantum revolution is creating a “quantum internet” across which quantum information can be shared. The delicate nature of quantum information, however, means that it is very difficult to connect quantum computers over long distances.

Most quantum computers encode quantum bits (qubits) of information into the quantum states of matter – trapped atoms or superconducting circuits, for example. However, the best way to transmit quantum information over long distances is to encode it into a photon of light. An important challenge is how to transfer quantum information from stationary matter-based qubits to photon-based “flying” qubits and then back again.

Attractive properties

Qubits made from solid materials interact strongly with light and therefore readily transfer quantum information to photons. However, these qubits tend to be very short-lived, which makes it difficult to use them to build practical quantum computers. Trapped atoms or ions, on the other hand, can make long-lived qubits but interact weakly with light. Rare-earth ions have properties that could make them particularly long-lived qubits, but physicists have struggled to trap them in such a way that they can be controlled and interact with light.

In their study, Kindem’s team showed that this problem could be overcome by placing a rare-earth ion of ytterbium in an optical cavity to enhance its interaction with light. To do this, they fabricated a periodic, nano-patterned, 10-micron-long cavity with the ion at its centre. Light bounces back and forth many times in the cavity, greatly increasing the chance of the light interacting with the ion.

The researchers then manipulated their ion qubit using laser and microwave pulses. The result is the emission of a photon that is entangled with the qubit – a photon that itself is a flying qubit of quantum information.

More than 99% of the time, they found that this entangled photon remained inside the cavity, bouncing back and forth. This allowed the team to study the photon-ion system over a relatively long time period. Indeed, Kindem and colleagues observed that the photon and ion can remain entangled for up to 30 ms – long enough for the photon to travel across the continental US.
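A quick back-of-the-envelope check puts that 30 ms figure in perspective (this calculation is ours, not the paper's, and uses the vacuum speed of light; in optical fibre light travels at roughly two-thirds of c, so the practical reach would be shorter):

```python
# How far can light travel during the 30 ms entanglement lifetime?
c = 299_792_458   # speed of light in vacuum, m/s
t = 30e-3         # reported entanglement lifetime, s

distance_km = c * t / 1000
print(f"{distance_km:.0f} km")  # prints "8994 km"
```

Roughly 9000 km, comfortably more than the width of the continental US (about 4500 km).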

Kindem’s team now hopes to scale up their experiment to enable information exchange between two real, distant qubits, demonstrating the building blocks of a realistic quantum internet. Within such a network, quantum computers in widely separated geographical locations could share data and perform calculations together, potentially allowing extremely large computations to take place. It could also enhance the prospects for quantum cryptography by allowing networks of trusted parties to exchange information securely using entangled particles.

The research is described in Nature.

Metasurface-based contact lens corrects colour blindness

Researchers in Israel have made a new type of contact lens that can correct a form of red–green colour blindness known as deuteranomaly. By incorporating plasmonic metasurfaces into standard contact lenses, the researchers were able to restore lost colour contrast and improve colour perception by up to a factor of 10.

Humans can typically distinguish more than a million colours, but for some, colour perception is limited in certain ranges of the electromagnetic spectrum. In these individuals, the response of the light-sensitive cone photoreceptor cells at the back of the eye is attenuated when excited with a specific wavelength of light.

In deuteranomaly, for example, signals from the cells that are sensitive to green–yellow light (known as medium-type cone photoreceptors) are dulled. This means that the brain receives too many signals from longer wavelengths associated with yellow–red light. The result is that people with this form of colour blindness struggle to tell red and green wavelengths apart. Although special glasses that reduce perception of yellow–red light are available, they are bulky and uncomfortable to wear.

Artificially engineered thin metallic films

Researchers Sharon Karepov and Tal Ellenbogen from Tel Aviv University have now transferred metasurfaces – artificially engineered thin metallic films that can be fine-tuned to interact with light in very specific ways – onto the surface of commercially available contact lenses to achieve the same filtering capability.

The metasurfaces work by exploiting the physics of plasmons, which are quasiparticles that arise when light interacts with the electrons in a metal and makes them oscillate. The shape, size and arrangement of the nanoscale structures within plasmonic materials – in this case, a 40-nm-thick film of nanosized gold ellipses – make it possible to support plasmons at specific frequencies. By adjusting these structural parameters, the researchers can control which frequencies of light the material will absorb and scatter.

From flat to curved surfaces

Since metasurfaces are usually fabricated on flat surfaces, Karepov and Ellenbogen needed to develop a technique to transfer them onto the curved surface of a contact lens. Their new fabrication process opens the door for embedding these materials into other non-flat substrates as well, they say.

By testing the optical response of the metasurface at every stage of the new fabrication technique and imaging its structure, the researchers confirmed that its light manipulation properties did not change after transfer to the curved surface.

Factor of 10 improvement in colour perception

They then simulated how a wearer of their new nanostructured contact lens would perceive colour using standard tests based on Commission Internationale de l’Éclairage (CIE) colour spaces and conventional models of human colour-sensitive photoreceptors. They found that the device could shift incorrectly recognized colours closer to the original hues and that lost visual contrast in red–green colour blindness could essentially be restored (see image). Indeed, they measured an improvement of up to a factor of 10 in colour perception. An Ishihara-based colour-blindness test (the best-known colour perception test for red–green colour deficiencies) also confirmed contrast restoration.
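One common way to quantify this kind of improvement is the CIE76 colour difference, the Euclidean distance between two colours in CIELAB space. The sketch below uses entirely hypothetical CIELAB coordinates (not values from the paper) chosen only to illustrate how a tenfold reduction in perceptual error could be measured:

```python
import math

# Hypothetical (L*, a*, b*) coordinates, for illustration only:
reference   = (55.0, -40.0, 30.0)  # the true green
unaided     = (55.0, -10.0, 30.0)  # as perceived with deuteranomaly (red-green contrast lost)
with_filter = (55.0, -37.0, 30.0)  # as perceived through a corrective spectral filter

def delta_e76(c1, c2):
    """CIE76 colour difference: Euclidean distance in CIELAB space."""
    return math.dist(c1, c2)

print(delta_e76(reference, unaided))      # prints "30.0" - large perceptual error
print(delta_e76(reference, with_filter))  # prints "3.0"  - ten times smaller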

While the new lens still needs to pass clinical-stage tests, the researchers say that manufacturers could potentially embed the metasurfaces during the moulding stage of contact lens fabrication or thermally fuse them to a rigid lens. They plan to continue improving their metasurface transfer process and test it for other applications too.

The present work is detailed in Optics Letters.

Copyright © 2025 by IOP Publishing Ltd and individual contributors