
Ion-clock transition could benefit quantum computing and nuclear physics

Schematic showing how the shape of ytterbium-173 nucleus affects the clock transition

An atomic transition in ytterbium-173 could be used to create an optical multi-ion clock that is both precise and stable. That is the conclusion of researchers in Germany and Thailand who have characterized a clock transition that is enhanced by the non-spherical shape of the ytterbium-173 nucleus. As well as applications in timekeeping, the transition could be used in quantum computing. Furthermore, the interplay between atomic and nuclear effects in the transition could provide insights into the physics of deformed nuclei.

The ticking of an atomic clock is defined by the frequency of the electromagnetic radiation that is absorbed and emitted by a specific transition between atomic energy levels. These clocks play crucial roles in technologies that require precision timing – such as global navigation satellite systems and communications networks. Currently, the international definition of the second is given by the frequency of caesium-based clocks, which deliver microwave time signals.

Today’s best clocks, however, work at higher optical frequencies and are therefore much more precise than microwave clocks. Indeed, at some point in the future metrologists will redefine the second in terms of an optical transition – but the international metrology community has yet to decide which transition will be used.
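A rough way to see why higher frequency helps: for a fixed resolvable linewidth, a higher carrier frequency gives a larger quality factor Q = ν/Δν, i.e. a finer fractional frequency resolution. The sketch below uses the defined caesium frequency together with an assumed optical frequency and an assumed 1 Hz linewidth purely for illustration.

```python
# Illustrative comparison of microwave vs optical clock transitions.
# The caesium frequency is the SI-defining value; the optical frequency
# and the 1 Hz linewidth are assumptions chosen for illustration.
nu_cs = 9_192_631_770    # Hz, caesium hyperfine transition (defines the second)
nu_opt = 4.0e14          # Hz, a typical optical clock transition (assumed)
linewidth = 1.0          # Hz, illustrative resolved linewidth (assumed)

q_cs = nu_cs / linewidth     # quality factor of the microwave transition
q_opt = nu_opt / linewidth   # quality factor of the optical transition
print(f"Q (microwave): {q_cs:.1e}")
print(f"Q (optical):   {q_opt:.1e}")
print(f"gain in fractional resolution: ~{q_opt / q_cs:.0f}x")
```

With these illustrative numbers, the optical transition resolves frequency tens of thousands of times more finely than the microwave one, which is the underlying reason optical clocks are more precise.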

Broadly speaking, there are two types of optical clock. One uses an ensemble of atoms that are trapped and cooled to ultralow temperatures using lasers; the other involves a single atomic ion (or a few ions) held in an electromagnetic trap. Clocks that use one ion are extremely precise but lack stability, whereas clocks that use many atoms are very stable but sacrifice precision.

Optimizing performance

As a result, some physicists are developing clocks that use multiple ions with the aim of creating a clock that optimizes precision and stability.

Now, researchers at PTB and NIMT (the national metrology institutes of Germany and Thailand respectively) have characterized a clock transition in ions of ytterbium-173, and have shown that the transition could be used to create a multi-ion clock.

“This isotope has a particularly interesting transition,” explains PTB’s Tanja Mehlstäubler – who is a pioneer in the development of multi-ion clocks.

The ytterbium-173 nucleus is highly deformed with a shape that resembles a rugby ball. This deformation affects the electronic properties of the ion, which should make it much easier to use a laser to excite a specific transition that would be very useful for creating a multi-ion clock.

Stark effect

This clock transition can also be excited in ytterbium-171 and has already been used to create a single-ion clock. However, excitation in a ytterbium-171 clock requires an intense laser pulse, which creates a strong electric field that shifts the clock frequency (called the AC Stark effect). This is a particular problem for multi-ion clocks because the intensity of the laser (and hence the clock frequency) can vary across the region in which the ions are trapped.

To show that a much lower laser intensity can be used to excite the clock transition in ytterbium-173, the team studied a “Coulomb crystal” in which three ions were trapped in a line and separated by about 10 microns. They illuminated the ions with laser light whose intensity was not uniform across the crystal. They were able to excite the transition at a relatively low laser intensity, which resulted in very small AC Stark shifts between the frequencies of the three ions.
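To get a feel for why non-uniform illumination matters, the sketch below estimates how much the intensity of a Gaussian laser beam varies across three ions spaced 10 microns apart. The beam waist is an assumption for illustration, not the experiment's actual parameter; the AC Stark shift scales linearly with intensity, so this fractional spread maps directly onto a spread in clock frequencies.

```python
import math

# Sketch: fractional intensity variation of a Gaussian beam across a
# three-ion Coulomb crystal. The 100 micron waist is assumed for
# illustration; the ion spacing matches the ~10 micron value above.
waist = 100e-6                     # m, assumed 1/e^2 beam radius
positions = [-10e-6, 0.0, 10e-6]   # m, three ions in a line

# Relative intensity I(x)/I(0) of a Gaussian transverse profile
rel = [math.exp(-2 * x**2 / waist**2) for x in positions]
spread = max(rel) - min(rel)
print(f"fractional intensity spread across the crystal: {spread:.3f}")
```

Even with this generous waist the outer ions see about 2% less intensity than the central one, which is why a transition that can be driven at low absolute intensity, as in ytterbium-173, keeps the differential AC Stark shifts small.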

According to the team, this means that as many as 100 trapped ytterbium-173 ions could be combined in a clock that could serve as a time standard; help redefine the second; and make very precise measurements of the Earth’s gravitational field.

As well as being useful for creating an optical ion clock, this multi-ion capability could also be exploited to create quantum-computing architectures based on multiple trapped ions. And because the observed effect is a result of the shape of the ytterbium-173 nucleus, further studies could provide insights into nuclear physics.

The research is described in Physical Review Letters.

 

The power of a poster

Most researchers know the disappointment of submitting an abstract to give a conference lecture, only to find that it has been accepted as a poster presentation instead. If this has been your experience, I’m here to tell you that you need to rethink the value of a good poster.

For years, I pestered my university to erect a notice board outside my office so that I could showcase my group’s recent research posters. Each time, for reasons of cost, my request was unsuccessful. At the same time, I would see similar boards placed outside the offices of more senior and better-funded researchers in my university. I voiced my frustrations to a mentor whose advice was, “It’s better to seek forgiveness than permission.” So, since I couldn’t afford to buy a notice board, I simply used drawing pins to mount some unauthorized posters on the wall beside my office door.

Some weeks later, I rounded the corner to my office corridor to find the head porter standing with a group of visitors gathered around my posters. He was telling them all about my research using solar energy to disinfect contaminated drinking water in disadvantaged communities in Sub-Saharan Africa. Unintentionally, my illegal posters had been subsumed into the head porter’s official tour that he frequently gave to visitors.

The group moved on but one man stayed behind, examining the poster very closely. I asked him if he had any questions. “No, thanks,” he said, “I’m not actually with the tour, I’m just waiting to visit someone further up the corridor and they’re not ready for me yet. Your research in Africa is very interesting.” We chatted for a while about the challenges of working in resource-poor environments. He seemed quite knowledgeable on the topic but soon left for his meeting.

A few days later while clearing my e-mail junk folder I spotted an e-mail from an Asian “philanthropist” offering me €20,000 towards my research. To collect the money, all I had to do was send him my bank account details. I paused for a moment to admire the novelty and elegance of this new e-mail scam before deleting it. Two days later I received a second e-mail from the same source asking why I hadn’t responded to their first generous offer. While admiring their persistence, I resisted the urge to respond by asking them to stop wasting their time and mine, and instead just deleted it.

So, you can imagine my surprise when the following Monday morning I received a phone call from the university deputy vice-chancellor inviting me to pop up for a quick chat. On arrival, he wasted no time before asking why I had been so foolish as to ignore repeated offers of research funding from one of the college’s most generous benefactors. And that is how I learned that those e-mails from the Asian philanthropist weren’t bogus.

The gentleman that I’d chatted with outside my office was indeed a wealthy philanthropic funder who had been visiting our university. Having retrieved the e-mails from my deleted items folder, I re-engaged with him and subsequently received €20,000 to install 10,000-litre harvested-rainwater tanks in as many primary schools in rural Uganda as the money would stretch to.

Kevin McGuigan

About six months later, I presented the benefactor with a full report accounting for the funding expenditure, replete with photos of harvested-rainwater tanks installed in 10 primary schools, with their very happy new owners standing in the foreground. Since you miss 100% of the chances you don’t take, I decided I should push my luck and added a “wish list” of other research items that the philanthropist might consider funding.

The list started small and grew steadily more ambitious. I asked for funds for more tanks in other schools, a travel bursary, PhD registration fees, student stipends and so on. All told, the list came to a total of several hundred thousand euros, but I emphasized that they had been very generous so I would be delighted to receive funding for any one of the listed items and, even if nothing was funded, I was still very grateful for everything he had already done. The following week my generous patron deposited a six-figure-euro sum into my university research account with instructions that it be used as I saw fit for my research purposes, “under the supervision of your university finance office”.

In my career I have co-ordinated several large-budget, multi-partner, interdisciplinary, international research projects. In each case, that money was hard-earned, needing at least six months and many sleepless nights to prepare the grant submission. It still amuses me that I garnered such a large sum on the back of one research poster, one 10-minute chat and fewer than six e-mails.

So, if you have learned nothing else from this story, please don’t underestimate the power of a strategically placed and impactful poster describing your research. You never know with whom it may resonate and down which road it might lead you.

ATLAS narrows the hunt for dark matter

Researchers at the ATLAS collaboration have been searching for signs of new particles in the dark sector of the universe, a hidden realm that could help explain dark matter. In some theories, this sector contains dark quarks (fundamental particles) that undergo a shower and hadronization process, forming long-lived dark mesons (dark quarks and antiquarks bound by a new dark strong force), which eventually decay into ordinary particles. These decays would appear in the detector as unusual “emerging jets”: bursts of particles originating from displaced vertices relative to the primary collision point.

Using 51.8 fb⁻¹ of proton–proton collision data at 13.6 TeV collected in 2022–2023, the ATLAS team looked for events containing two such emerging jets. They explored two possible production mechanisms: a vector mediator (Z′) produced in the s‑channel and a scalar mediator (Φ) exchanged in the t‑channel. The analysis combined two complementary strategies. A cut-based strategy relying on high-level jet observables, including track-, vertex-, and jet-substructure-based selections, enables straightforward reinterpretation for alternative theoretical models. A machine-learning approach employs a per-jet tagger with a transformer architecture trained on low-level tracking variables to discriminate emerging jets from Standard Model jets, maximizing sensitivity for the specific models studied.

No emerging‑jet signal excess was found, but the search set the first direct limits on emerging‑jet production via a Z′ mediator and the first constraints on t‑channel Φ production. Depending on the model assumptions, Z′ masses up to around 2.5 TeV and Φ masses up to about 1.35 TeV are excluded. These results significantly narrow the space in which dark sector particles could exist and form part of a broader ATLAS programme to probe dark quantum chromodynamics. The work sharpens future searches for dark matter and advances our understanding of how a dark sector might behave.

Read the full article

Search for emerging jets in pp collisions at √s = 13.6 TeV with the ATLAS experiment

The ATLAS Collaboration 2025 Rep. Prog. Phys. 88 097801

Do you want to learn more about this topic?

Dark matter and dark energy interactions: theoretical challenges, cosmological implications and observational signatures by B Wang, E Abdalla, F Atrio-Barandela and D Pavón (2016)

How do bacteria produce entropy?

Active matter is matter composed of large numbers of active constituents, each of which consumes chemical energy in order to move or to exert mechanical forces.

This type of matter is commonly found in biology: swimming bacteria and migrating cells are both classic examples. In addition, a wide range of synthetic systems, such as active colloids and robotic swarms, also fall under this umbrella.

Active matter has therefore been the focus of much research over the past decade, unveiling many surprising theoretical features and suggesting a plethora of applications.

Perhaps most importantly, these systems’ ability to perform work leads to sustained non-equilibrium behaviour. This is distinctly different from that of relaxing equilibrium thermodynamic systems, commonly found in other areas of physics.

The concept of entropy production is often used to quantify this difference and to calculate how much useful work can be performed. If we want to harvest and utilise this work, however, we need to understand the small-scale dynamics of the system. And it turns out this is rather complicated.

One way to calculate entropy production is through field theory, the workhorse of statistical mechanics. Traditional field theories simplify the system by smoothing out details, which works well for predicting densities and correlations. However, these approximations often ignore the discrete, particle-level nature of the system, leading to incorrect results for entropy production.

The new paper details a substantial improvement on this method. By making use of Doi-Peliti field theory, the authors are able to keep track of microscopic particle dynamics, including reactions and interactions.

The approach starts from the Fokker-Planck equation and provides a systematic way to calculate entropy production from first principles. It can be extended to include interactions between particles and produces general, compact formulas that work for a wide range of systems. These formulas are practical because they can be applied to both simulations and experiments.
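For orientation, the standard single-particle benchmark that such approaches generalize (this is the textbook result for overdamped dynamics, not the paper's many-body Doi-Peliti formula) expresses the entropy production rate in terms of the probability current of the Fokker-Planck equation:

```latex
% Overdamped particle with mobility \mu, force F(x) and diffusion constant D:
\partial_t \rho(x,t) = -\partial_x J(x,t),
\qquad
J(x,t) = \mu F(x)\,\rho(x,t) - D\,\partial_x \rho(x,t),
% Average entropy production rate (in units of k_B):
\dot{S}(t) = \int \mathrm{d}x \, \frac{J(x,t)^2}{D\,\rho(x,t)} \;\geq\; 0 .
```

The rate vanishes exactly when the current J vanishes everywhere, i.e. in equilibrium, which is why a non-zero entropy production is the signature of sustained non-equilibrium behaviour described above.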

The authors demonstrated their method with numerous examples, including systems of active Brownian particles, showing its broad usefulness. The big challenge going forward, though, is to extend their framework to non-Markovian systems: ones where future states depend not only on the present state but also on past states.

Read the full article

Field theories of active particle systems and their entropy production

G. Pruessner and R. Garcia-Millan, 2025 Rep. Prog. Phys. 88 097601

Einstein’s recoiling slit experiment realized at the quantum limit

Quantum mechanics famously limits how much information about a system can be accessed at once in a single experiment. The more precisely a particle’s path can be determined, the less visible its interference pattern becomes. This trade-off, known as Bohr’s complementarity principle, has shaped our understanding of quantum physics for nearly a century. Now, researchers in China have brought one of the most famous thought experiments surrounding this principle to the quantum limit, using a single atom as a movable slit.

The thought experiment dates back to the 1927 Solvay Conference, where Albert Einstein proposed a modification of the double-slit experiment in which one of the slits could recoil. He argued that if a photon caused the slit to recoil as it passed through, then measuring that recoil might reveal which path the photon had taken without destroying the interference pattern. Conversely, Niels Bohr argued that any such recoil would entangle the photon with the slit, washing out the interference fringes.

For decades, this debate remained largely philosophical. The challenge was not about adding a detector or a label to track a photon’s path. Instead, the question was whether the “which-path” information could be stored in the motion of the slit itself. Until now, however, no physical slit was sensitive enough to register the momentum kick from a single photon.

A slit that kicks back

To detect the recoil from a single photon, the slit’s momentum uncertainty must be comparable to the photon’s momentum. For any ordinary macroscopic slit, its quantum fluctuations are significantly larger than the recoil, washing out the which-path information. To give a sense of scale, the authors note that even a 1 g object modelled as a 100 kHz oscillator (for example, a mirror on a spring) would have a ground-state momentum uncertainty of about 10⁻¹⁶ kg m s⁻¹, roughly 11 orders of magnitude larger than the momentum of an optical photon (approximately 10⁻²⁷ kg m s⁻¹).
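These scales can be checked with a back-of-envelope calculation: the zero-point momentum spread of a harmonic oscillator is √(ħmω/2), and a photon carries momentum h/λ. The 800 nm wavelength below is an assumption (any optical wavelength gives the same order of magnitude).

```python
import math

# Back-of-envelope check of the quoted scales: ground-state momentum
# uncertainty of a 1 g mass in a 100 kHz trap versus the momentum of
# an optical photon (800 nm assumed).
hbar = 1.054571817e-34      # J s, reduced Planck constant
h = 6.62607015e-34          # J s, Planck constant
m = 1e-3                    # kg, 1 g test mass
omega = 2 * math.pi * 1e5   # rad/s, 100 kHz oscillator
wavelength = 800e-9         # m, assumed optical wavelength

dp_oscillator = math.sqrt(hbar * m * omega / 2)  # zero-point momentum spread
p_photon = h / wavelength                        # single-photon momentum

print(f"oscillator: {dp_oscillator:.1e} kg m/s")
print(f"photon:     {p_photon:.1e} kg m/s")
print(f"mismatch: ~10^{math.log10(dp_oscillator / p_photon):.0f}")
```

The result reproduces the figures in the text: roughly 10⁻¹⁶ kg m s⁻¹ against roughly 10⁻²⁷ kg m s⁻¹, a mismatch of about 11 orders of magnitude.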

Illustration showing the experimental realization

In their study, published in Physical Review Letters, Yu-Chen Zhang and colleagues from the University of Science and Technology of China overcame this obstacle by replacing the movable slit with a single rubidium atom held in an optical tweezer and cooled to its three-dimensional motional ground state. In this regime, the atom’s momentum uncertainty reaches the quantum limit, making the recoil from a single photon directly measurable.

Rather than using a conventional double-slit geometry, the researchers built an optical interferometer in which photons scattered off the trapped atom. By tuning the depth of this optical trap, the researchers were able to precisely control the atom’s intrinsic momentum uncertainty, effectively adjusting how “movable” the slit was.

Watching interference fade 

As the researchers decreased the atom’s momentum uncertainty, they observed a loss of interference in the scattered photons. Increasing the atom’s momentum uncertainty caused the interference to reappear.

This behaviour directly revealed the trade-off between interference and which-path information at the heart of the Einstein–Bohr debate. The researchers note that the loss of interference arose not from classical noise, but from entanglement between the photon and the atom’s motion.

“The main challenge was matching the slit’s momentum uncertainty to that of a single photon,” says corresponding author Jian-Wei Pan. “For macroscopic objects, momentum fluctuations are far too large – they completely hide the recoil. Using a single atom cooled to its motional ground state allows us to reach the fundamental quantum limit.”

Maintaining interferometric phase stability was equally demanding. The team used active phase stabilization with a reference laser to keep the optical path length stable to within a few nanometres (roughly 3 nm) for over 10 h.

Beyond settling a historical argument, the experiment offers a clean demonstration of how entanglement plays a key role in Bohr’s complementarity principle. As Pan explains, the results suggest that “entanglement in the momentum degree-of-freedom is the deeper reason behind the loss of interference when which-path information becomes available”.

This experiment opens the door to exploring quantum measurement in a new regime. By treating the slit itself as a quantum object, future studies could probe how entanglement emerges between light and matter. Additionally, the same set-up could be used to gradually increase the mass of the slit, providing a new way to study the transition from quantum to classical behaviour.

European Space Agency unveils first images from Earth-observation ‘sounder’ satellite

The European Space Agency has released the first images from the Meteosat Third Generation-Sounder (MTG-S) satellite. They show variations in temperature and humidity over Europe and northern Africa in unprecedented detail, with further data from the mission set to improve weather-forecasting models and measurements of air quality over Europe.

Launched on 1 July 2025 from the Kennedy Space Center in Florida aboard a SpaceX Falcon 9 rocket, MTG-S operates from a geostationary orbit, about 36 000 km above Earth’s surface and is able to provide coverage of Europe and part of northern Africa on a 15-minute repeat cycle.
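The quoted altitude follows directly from Kepler's third law: a geostationary satellite must complete one orbit per sidereal day, which fixes its orbital radius. The sketch below verifies the number (constants are standard values, not taken from the article).

```python
import math

# Geostationary altitude from Kepler's third law: r^3 = GM T^2 / (4 pi^2).
# A geostationary orbit's period is one sidereal day.
GM = 3.986004418e14   # m^3/s^2, Earth's standard gravitational parameter
T = 86164.1           # s, sidereal day
R_earth = 6.371e6     # m, mean Earth radius

r = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)   # orbital radius from Earth's centre
altitude_km = (r - R_earth) / 1e3
print(f"geostationary altitude: {altitude_km:.0f} km")
```

This gives about 35,786 km above the surface, consistent with the "about 36,000 km" quoted for MTG-S.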

The satellite carries a hyperspectral sounding instrument that uses interferometry to capture data on temperature and humidity as well as being able to measure wind and trace gases in the atmosphere. It can scan nearly 2,000 thermal infrared wavelengths every 30 minutes.

The data will eventually be used to generate 3D maps of the atmosphere and help improve the accuracy of weather forecasting, especially for rapidly evolving storms.

The “temperature” image, above, was taken in November 2025 and shows heat (red) from the African continent, while a dark blue weather front covers Spain and Portugal.

The “humidity” image, below, was captured using the sounder’s medium-wave infrared channel. Blue colours represent regions in the atmosphere with higher humidity, while red colours correspond to lower humidity.

Whole-Earth image showing cloud formation

“Seeing the first infrared sounder images from MTG-S really brings this mission and its potential to life,” notes Simonetta Cheli, ESA’s director of Earth observation programmes. “We expect data from this mission to change the way we forecast severe storms over Europe – and this is very exciting for communities and citizens, as well as for meteorologists and climatologists.”

ESA is expected to launch a second Meteosat Third Generation-Imaging satellite later this year following the launch of the first one – MTG-I1 – in December 2022.

Uranus and Neptune may be more rocky than icy, say astrophysicists

Our usual picture of Uranus and Neptune as “ice giant” planets may not be entirely correct. According to new work by scientists at the University of Zürich (UZH), Switzerland, the outermost planets in our solar system may in fact be rock-rich worlds with complex internal structures – something that could have major implications for our understanding of how these planets formed and evolved.

Within our solar system, planets fall into three categories based on their internal composition. Mercury, Venus, Earth and Mars are deemed terrestrial rocky planets; Jupiter and Saturn are gas giants; and Uranus and Neptune are ice giants.

An agnostic approach

The new work, which was led by PhD student Luca Morf in UZH’s astrophysics department, challenges this last categorization by numerically simulating the two planets’ interiors as a mixture of rock, water, hydrogen and helium. Morf explains that this modelling framework is initially “agnostic” – meaning unbiased – about what the density profiles of the planets’ interiors should be. “We then calculate the gravitational fields of the planets so that they match with observational measurements to infer a possible composition,” he says.

This process, Morf continues, is then repeated and refined to ensure that each model satisfies several criteria. The first criterion is that the planet should be in hydrostatic equilibrium, meaning that its internal pressure is enough to counteract its gravity and keep it stable. The second is that the planet should have the gravitational moments observed in spacecraft data. These moments describe the gravitational field of a planet, which is complex because planets are not perfect spheres.

The final criterion is that the modelled planets need to be thermodynamically and compositionally consistent with known physics. “For example, a simulation of the planets’ interiors must obey equations of state, which dictate how materials behave under given pressure and temperature conditions,” Morf explains.

After each iteration, the researchers adjust the density profile of each planet and test it to ensure that the model continues to adhere to the three criteria. “We wanted to bridge the gap between existing physics-based models that are overly constrained and empirical approaches that are too simplified,” Morf explains. Avoiding strict initial assumptions about composition, he says, “lets the physics and data guide the solution [and] allows us to probe a larger parameter space.”

A wide range of possible structures

Based on their models, the UZH astrophysicists concluded that the interiors of Uranus and Neptune could have a wide range of possible structures, encompassing both water-rich and rock-rich configurations. More specifically, their calculations yield rock-to-water ratios of between 0.04 and 3.92 for Uranus and between 0.20 and 1.78 for Neptune.

Diagrams showing possible "slices" of Uranus and Neptune. Four slices are shown, two for each planet. Each slice is filled with brown areas representing silicon dioxide rock and blue areas representing water ice, plus smaller areas of tan colouring for hydrogen-helium mixtures and (for Neptune only) grey areas representing iron. Two slices are mostly blue, while the other two contain large fractions of brown.

The models, which are detailed in Astronomy and Astrophysics, also contain convective regions with ionic water pockets. The presence of such pockets could explain the fact that Uranus and Neptune, unlike Earth, have more than two magnetic poles, as the pockets would generate their own local magnetic dynamos.

Traditional “ice giant” label may be too simple

Overall, the new findings suggest that the traditional “ice giant” label may oversimplify the true nature of Uranus and Neptune, Morf tells Physics World. Instead, these planets could have complex internal structures with compositional gradients and different heat transport mechanisms. Though much uncertainty remains, Morf stresses that Uranus and Neptune – and, by extension, similar intermediate-class planets that may exist in other solar systems – are so poorly understood that any new information about their internal structure is valuable.

A dedicated space mission to these outer planets would yield more accurate measurements of the planets’ gravitational and magnetic fields, enabling scientists to refine the limited existing observational data. In the meantime, the UZH researchers are looking for more solutions for the possible interiors of Uranus and Neptune and improving their models to account for additional constraints, such as atmospheric conditions. “Our work will also guide laboratory and theoretical studies on the way materials behave in general at high temperatures and pressures,” Morf says.

String-theory concept boosts understanding of biological networks

Many biological networks – including blood vessels and plant roots – are not organized to minimize total length, as long assumed. Instead, their geometry follows a principle of surface minimization, following a rule that is also prevalent in string theory. That is the conclusion of physicists in the US, who have created a unifying framework that explains structural features long seen in real networks but poorly captured by traditional mathematical models.

Biological transport and communication networks have fascinated scientists for decades. Neurons branch to form synapses, blood vessels split to supply tissues, and plant roots spread through soil. Since the mid-20th century, many researchers believed that evolution favours networks that minimize total length or volume.

“There is a longstanding hypothesis, going back to Cecil Murray from the 1940s, that many biological networks are optimized for their length and volume,” Albert-László Barabási of Northeastern University explains. “That is, biological networks, like the brain and the vascular systems, are built to achieve their goals with the minimal material needs.” Until recently, however, it had been difficult to characterize the complicated nature of biological networks.

Now, advances in imaging have given Barabási and colleagues a detailed 3D picture of real physical networks, from individual neurons to entire vascular systems. With these new data in hand, the researchers found that previous theories are unable to describe real networks in quantitative terms.

From graphs to surfaces

To remedy this, the team defined the problem in terms of physical networks, systems whose nodes and links have finite thickness and occupy space. Rather than treating them as abstract graphs made of idealized edges, the team models them as geometrical objects embedded in 3D space.

To do this, the researchers turned to an unexpected mathematical tool. “Our work relies on the framework of covariant closed string field theory, developed by Barton Zwiebach and others in the 1980s,” says team member Xiangyi Meng at Rensselaer Polytechnic Institute. This framework provides a correspondence between network-like graphs and smooth surfaces.

Unlike string theory, their approach is entirely classical. “These surfaces, obtained in the absence of quantum fluctuations, are precisely the minimal surfaces we seek,” Meng says. No quantum mechanics, supersymmetry, or exotic string-theory ingredients are required. “Those aspects were introduced mainly to make string theory quantum and thus do not apply to our current context.”

Using this framework, the team analysed a wide range of biological systems. “We studied human and fruit fly neurons, blood vessels, trees, corals, and plants like Arabidopsis,” says Meng. Across all these cases, a consistent pattern emerged: the geometry of the networks is better predicted by minimizing surface area rather than total length.

Complex junctions

One of the most striking outcomes of the surface-minimization framework is its ability to explain structural features that previous models cannot. Traditional length-based theories typically predict simple Y-shaped bifurcations, where one branch splits into two. Real networks, however, often display far richer geometries.

“While traditional models are limited to simple bifurcations, our framework predicts the existence of higher-order junctions and ‘orthogonal sprouts’,” explains Meng.

These include three- or four-way splits and perpendicular, dead-end offshoots. Under a surface-based principle, such features arise naturally and allow neurons to form synapses using less membrane material overall and enable plant roots to probe their environment more effectively.

Ginestra Bianconi of the UK’s Queen Mary University of London says that the key result of the new study is the demonstration that “physical networks such as the brain or vascular networks are not wired according to a principle of minimization of edge length, but rather that their geometry follows a principle of surface minimization.”

Bianconi, who was not involved in the study, also highlights the interdisciplinary leap of invoking ideas from string theory: “This is a beautiful demonstration of how basic research works.”

Interdisciplinary leap

The team emphasizes that their work is not immediately technological. “This is fundamental research, but we know that such research may one day lead to practical applications,” Barabási says. In the near term, he expects the strongest impact in neuroscience and vascular biology, where understanding wiring and morphology is essential.

Bianconi agrees that important questions remain. “The next step would be to understand whether this new principle can help us understand brain function or have an impact on our understanding of brain diseases,” she says. Surface optimization could, for example, offer new ways to interpret structural changes observed in neurological disorders.

Looking further ahead, the framework may influence the design of engineered systems. “Physical networks are also relevant for new materials systems, like metamaterials, which are also aiming to achieve functions at minimal cost,” Barabási notes. Meng points to network materials as a particularly promising area, where surface-based optimization could inspire new architectures with tailored mechanical or transport properties.

The research is described in Nature.

The secret life of TiO₂ in foams

Porous carbon foams are an exciting area of research because they are lightweight, electrically conductive, and have extremely high surface areas. Coating these foams with TiO₂ makes them chemically active, enabling their use in energy storage devices, fuel cells, hydrogen production, CO₂‑reduction catalysts, photocatalysis, and thermal management systems. While many studies have examined the outer surfaces of coated foams, much less is known about how TiO₂ coatings behave deep inside the foam structure.

In this study, researchers deposited TiO₂ thin films onto carbon foams using magnetron sputtering and applied different bias voltages to control ion energy, which in turn affects coating density, crystal structure, thickness, and adhesion. They analysed both the outer surface and the interior of the foam using microscopy, particle‑transport simulations, and X‑ray techniques.

They found that the TiO₂ coating on the outer surface is dense, correctly composed, and crystalline (mainly anatase with a small amount of rutile), making it ideal for catalytic and energy applications. They also discovered that although fewer particles reach deep inside the foam, those that do retain their full energy: particle quantity decreases with depth, but particle energy does not. Because devices like batteries and supercapacitors rely on uniform coatings, variations in thickness or structure inside the foam can lead to poorer performance and faster degradation.
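
As a rough illustration only (not the authors' transport model), the finding that particle flux falls with depth while per-particle energy stays constant can be sketched with a simple exponential attenuation. The attenuation length and ion energy below are assumed values for illustration, not measurements from the study.

```python
import math

def flux_at_depth(surface_flux, depth_mm, attenuation_mm):
    """Toy model: particle flux decays exponentially with depth into the foam.
    attenuation_mm is an assumed characteristic length, not a measured value."""
    return surface_flux * math.exp(-depth_mm / attenuation_mm)

# Illustration: flux drops with depth, but the energy carried per particle
# (assumed 50 eV here) stays the same at every depth.
ion_energy_eV = 50.0
for depth in [0.0, 1.0, 2.0, 4.0]:
    rel_flux = flux_at_depth(1.0, depth, attenuation_mm=2.0)
    print(f"depth {depth:.0f} mm: relative flux {rel_flux:.2f}, "
          f"ion energy {ion_energy_eV:.0f} eV")
```

The point of the sketch is the asymmetry the study reports: coating growth rate (set by flux) varies through the foam, while coating quality (set by ion energy) does not.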

Overall, this research provides a much clearer understanding of how TiO₂ coatings grow inside complex 3D foams, showing how thickness, density, and crystal structure evolve with depth and how bias voltage can be used to tune these properties. By revealing how plasma particles move through the foam and validating models that predict coating behaviour, it enables the design of more reliable, higher‑performing foam‑based devices for energy and catalytic applications.

Read the full article

A comprehensive multi-scale study on the growth mechanisms of magnetron sputtered coatings on open-cell 3D foams

Loris Chavée et al 2026 Prog. Energy 8 015002

Do you want to learn more about this topic?

Advances in thermal conductivity for energy applications: a review Qiye Zheng et al. (2021)

Laser processed thin NiO powder coating for durable anode-free batteries

Traditional lithium‑ion batteries use a thick graphite anode, where lithium ions move in and out of the graphite during charging and discharging. In an anode‑free lithium metal battery, there is no anode material at the start, only a copper foil. During the first charge, lithium leaves the cathode and deposits onto the copper as pure lithium metal, effectively forming the anode. Removing the anode increases energy density dramatically by reducing weight, and it also simplifies and lowers the cost of manufacturing. Because of this, anode‑free batteries are considered to have major potential for next‑generation energy storage. However, a key challenge is that lithium deposits unevenly on bare copper, forming long needle‑like dendrites that can pierce the separator and cause short circuits. This uneven growth also leads to rapid capacity loss, so anode‑free batteries typically fail after only a few hundred cycles.

In this research, the scientists coated the copper foil with NiO powder and used a CO₂ laser (λ = 10.6 µm) in a rapid scanning mode to heat and transform the coating. The laser‑treated NiO becomes porous and strongly adherent to the copper, helping lithium spread out more evenly. The process is fast, energy‑efficient, and can be done in air. As a result, lithium ions move more easily across the surface, reducing dendrite formation. The exchange current density also doubled compared with bare copper, indicating better charge‑transfer behaviour. Overall, battery performance improved dramatically: the modified cells lasted 400 cycles at room temperature and 700 cycles at 40 °C, compared with only 150 cycles for uncoated copper.
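
A quick back-of-envelope comparison of the cycle-life figures reported above (the cycle counts come from the study; the calculation itself is just arithmetic):

```python
# Reported cycle lives
baseline_cycles = 150    # bare copper current collector
coated_rt_cycles = 400   # NiO-coated, room temperature
coated_40c_cycles = 700  # NiO-coated, 40 °C

rt_gain = coated_rt_cycles / baseline_cycles
hot_gain = coated_40c_cycles / baseline_cycles

print(f"Room-temperature cycle life: {rt_gain:.1f}x the bare-copper baseline")
print(f"40 °C cycle life: {hot_gain:.1f}x the bare-copper baseline")
```

In other words, the laser-processed NiO coating extends cycle life by roughly 2.7x at room temperature and 4.7x at 40 °C relative to uncoated copper.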

This simple, rapid, and scalable technique offers a powerful way to improve anode‑free lithium metal batteries, one of the most promising next‑generation battery technologies.

Read the full article

Microgradient patterned NiO coating on copper current collector for anode-free lithium metal battery

Supriya Kadam et al 2025 Prog. Energy 7 045003

Do you want to learn more about this topic?

Lithium aluminum alloy anodes in Li-ion rechargeable batteries: past developments, recent progress, and future prospects by Tianye Zheng and Steven T Boles (2023)

Copyright © 2026 by IOP Publishing Ltd and individual contributors