
LHCb upgrade: CERN collaboration responds to UK funding cut

Later this year, CERN’s Large Hadron Collider (LHC) and its huge experiments will shut down for the High Luminosity upgrade. When complete in 2030, the particle-collision rate in the LHC will be increased by a factor of 10 and the experiments will be upgraded so that they can better capture and analyse the results of these collisions. This will allow physicists to study particle interactions with unprecedented precision and could even reveal new physics beyond the Standard Model.

Earlier this year, however, the UK government announced that it will no longer fund the upgrade of the LHCb experiment on the LHC, which is run by a collaboration of more than 1700 physicists worldwide. The UK had promised to contribute about £50 million to the upgrade – which is a significant chunk of the overall cost.

In this episode of the Physics World Weekly podcast I am in conversation with the particle physicist Tim Gershon, who is based at the UK’s University of Warwick. Gershon is spokesperson-elect for the LHCb collaboration and is playing a leading role in the upgrade.

Gershon explains that UK participation and leadership have been crucial for the success of LHCb and cautions that the future of the experiment and the future of UK particle physics have been imperilled by the funding cut.

We also chat about recent discoveries made by LHCb and look forward to what new physics the experiment could find after the upgrade.

Read-out of Majorana qubits reveals their hidden nature

Quantum computers could solve problems that are out of reach for today’s classical machines. However, the quantum states they rely on are prone to decohering – that is, losing their quantum information due to local noise. One possible way around this is to use quantum bits (qubits) constructed from quasiparticle states known as Majorana zero modes (MZMs) that are protected from this noise. But there’s a catch. To perform computations, you need to be able to measure, or read out, the states of your qubits. How do you do that in a system that is inherently protected from its environment?

Scientists at QuTech in the Netherlands, together with researchers from the Madrid Institute of Materials Science (ICMM) in Spain, say they may have found an answer. By measuring a property known as quantum capacitance, they report that they have read out the parity of their MZM system, backing up an earlier readout demonstration from a team at Microsoft Quantum Hardware on a different Majorana platform.

Measuring parity

The QuTech/ICMM researchers generated their MZMs across two quantum dots – semiconductor structures that can confine electrons – connected by a superconducting nanowire. Electrons can transfer, or tunnel, between the quantum dots through this wire. Majorana-based qubits store their quantum information across these separated MZMs, with both elements in the pair required to encode a single “parity” bit. A pair of parity bits (combining four MZMs in total) forms a qubit.

A parity bit has two possible states. When the two quantum dots are in a superposition of both holding one electron and both holding none, the system is said to have even parity (a “0”). When the system is instead in a superposition in which only one of the quantum dots holds an electron, the parity is said to be odd (a “1”). Importantly, these even and odd parity states have the same average value of electric charge, meaning that a charge sensor cannot tell them apart.
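In simplified occupation-number notation – a sketch of the standard picture rather than the paper’s exact formalism, with the two labels in each ket giving the charge on the two dots – the two parity states look like this:

```latex
% Occupation basis |n_1 n_2> for the two quantum dots, at the idealized "sweet spot"
|\mathrm{even}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr), \qquad
|\mathrm{odd}\rangle  = \tfrac{1}{\sqrt{2}}\bigl(|10\rangle + |01\rangle\bigr)
% Both states have the same average charge <n_1 + n_2> = 1 electron,
% which is why a plain charge sensor cannot tell them apart.
```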

The key to measuring parity lies in the electrons’ behaviour. In the even-parity state, an even number of electrons can pair up and enter the superconductor together as a Cooper pair. In the odd-parity state, however, the lone electron lacks a partner and cannot flow through the wire in the same way. By measuring the charge flowing into the superconductor, the team was therefore able to determine the parity state. The researchers also determined that the lifetimes of these states were in the millisecond range, which they say is promising for quantum computations.

Competing platforms

According to Nick van Loo, a quantum engineer at QuTech and the first author of a Nature paper on the work, similar chains of quantum dots (known as Kitaev chains) are a promising platform for realizing Majorana modes because each element in the chain can be controlled and tuned. This control, he adds, makes results easier to reproduce, helping to overcome some of the interpretation challenges that have affected Majorana results over the past decade.
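For readers who want the model behind the name, a two-site Kitaev chain is usually written in the following minimal textbook form – a sketch of the physics such chains realize, not the exact Hamiltonian of the QuTech/ICMM device:

```latex
% Two-site Kitaev chain: dots 1 and 2 with on-site energies eps_i,
% normal tunnelling t and superconductor-mediated pairing Delta
H = \sum_{i=1,2} \varepsilon_i\, c_i^{\dagger} c_i
  + t\,\bigl(c_1^{\dagger} c_2 + c_2^{\dagger} c_1\bigr)
  + \Delta\,\bigl(c_1 c_2 + c_2^{\dagger} c_1^{\dagger}\bigr)
% "Poor man's Majoranas" appear at the sweet spot eps_1 = eps_2 = 0 and t = Delta,
% with one Majorana mode localized on each dot.
```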

Van Loo also stresses that his team uses a different architecture from the Microsoft Quantum Hardware team to create its Majorana modes – one that he says allows for better tuneability as well as easier and more scalable readout. He adds that this architecture also allows an independent charge sensor to be used to confirm the MZM’s charge neutrality.

In response, Chetan Nayak, a technical fellow at Microsoft Quantum Hardware, says it is important that the QuTech/ICMM team independently measured a millisecond time scale for parity fluctuations. However, he notes that the team did not extend this parity lifetime and adds that the so-called “poor man’s Majoranas” used in this research do not constitute a scalable platform for topological qubits, as they lack topological protection.

Seeking full protection

Van Loo acknowledges that the team’s two-site Kitaev chain is not topologically protected. However, he says the degree of protection is expected to improve exponentially as more sites are added. In the near term, he and his colleagues hope to operate their qubit by inducing rotations through coupling pairs of Majorana modes. Once these hurdles are overcome, he tells Physics World that “one major milestone will still remain: demonstrating braiding of Majorana modes to establish their non-Abelian exchange statistics”.

Jay Deep Sau, a physicist at the University of Maryland, US, who was not involved in either the QuTech/ICMM or the Microsoft Quantum Hardware research, describes this as the first measurement of fermion parity in the smallest quantum dot chain platform for creating MZMs. Compared to the Microsoft result, Sau agrees that the quantum dot chain is more controlled. However, he is sceptical that this control will apply to larger chains, casting doubt on whether this is truly a scalable way of realizing MZMs. The significance of these results, he adds, will only be apparent if the quantum dot chain approach can demonstrate a coherent qubit before its semiconductor nanowire counterpart.

Quantum-secure Internet expands to citywide scale

Researchers in China have distributed device-independent quantum cryptographic keys over city-scale distances for the first time – a significant improvement compared to the previous record of a few hundred metres. Led by Jian-Wei Pan of the University of Science and Technology of China (USTC) of the Chinese Academy of Sciences (CAS), the researchers say the achievement brings the world a step closer to a completely quantum-secure Internet.

Many of us use Internet encryption almost daily, for example when transferring sensitive information such as bank details. Today’s encryption techniques use keys based on mathematical algorithms, and classical supercomputers cannot crack them in any practical amount of time. Powerful quantum computers could change this, however, which has driven researchers to explore potential alternatives.

One such alternative, known as quantum key distribution (QKD), encrypts information by exploiting the quantum properties of photons. The appeal of this approach is that when quantum-entangled photons transmit a key between two parties, any attempted hack by a third party will be easy to detect because their intervention will disturb the entanglement.

While the basic form of QKD enables information to be transmitted securely, it does have some weak points. One of them is that a malicious third party could steal the key by hacking the devices the sender and/or receiver is using.

A more advanced version of QKD is device-independent QKD (DI-QKD). As its name suggests, this version does not depend on the state of a device. Instead, it derives its security key directly from fundamental quantum phenomena – namely, the violation of conditions known as Bell’s inequalities. Establishing this violation ensures that a third party has not interfered with the process employed to generate the secure key.
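The Bell test most often invoked in this context is the CHSH inequality, shown here as general background (the experiment’s specific protocol may differ): any local classical description is bounded by 2, while entangled devices can reach 2√2.

```latex
% CHSH combination of correlators for measurement settings (a, a') and (b, b')
S = \bigl| E(a,b) - E(a,b') + E(a',b) + E(a',b') \bigr|
% Any local (classical) model obeys S <= 2; entangled quantum devices can reach S = 2*sqrt(2).
% Observing S > 2 certifies the key-generating devices without trusting their inner workings.
```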

The main drawback of DI-QKD is that it is extremely technically demanding, requiring high-quality entanglement and an efficient means of detecting it. “Until now, this has only been possible over short distances – 700 m at best – and in laboratory-based proof-of-principle experiments,” says Pan.

High-fidelity entanglement over 11 km of fibre

In the latest work, Pan and colleagues constructed two quantum nodes consisting of single trapped atoms. Each node was equipped with four high-numerical-aperture lenses to efficiently collect single photons emitted by the atoms. These photons have a wavelength of 780 nm, which is not optimal for transmission through optical fibres. The team therefore used a process known as quantum frequency conversion to shift the emitted photons to a longer wavelength of 1315 nm, which is less prone to optical loss in fibres.
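As a rough illustration of the energy bookkeeping involved – assuming a difference-frequency process, which is a common way to implement quantum frequency conversion but is not spelled out here – the pump wavelength needed to bridge 780 nm and 1315 nm follows from energy conservation:

```latex
% Energy conservation for difference-frequency conversion of the photon:
% 1/lambda_in = 1/lambda_out + 1/lambda_pump
\frac{1}{\lambda_{\mathrm{pump}}}
  = \frac{1}{780\,\mathrm{nm}} - \frac{1}{1315\,\mathrm{nm}}
  \;\;\Longrightarrow\;\; \lambda_{\mathrm{pump}} \approx 1.9\,\mu\mathrm{m}
% The photon's quantum state is preserved while its wavelength is shifted
% into a band where optical fibre is far less lossy.
```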

By interfering and detecting a single photon, the team was able to generate what’s known as heralded entanglement between the two quantum nodes – something Pan describes as “an essential resource” for DI-QKD. While significant progress has been made in extending the entangling distance for qubits of this type, Pan notes that these advances have been hampered by low fidelities and low entangling rates.

To address this, Pan and his colleagues employed a single-photon-based entangling scheme that boosts remote entangling probability by more than two orders of magnitude. They also placed their atoms in highly excited Rydberg states to generate single photons with high purity and low noise. “It is these innovations that allow us to achieve high-fidelity and high-rate entanglement over a long distance,” Pan explains.

Using this setup, the researchers explored the feasibility of performing DI-QKD between two entangled atoms linked by optical fibres up to 100 km in length. In this study, which is detailed in Science, they demonstrated practical DI-QKD under finite-key security over 11 km of fibre.

Metropolitan-scale quantum key distribution

Based on the technologies they developed, Pan thinks it could now be possible to implement DI-QKD over metropolitan scales with existing optical fibres. Such a system could provide encrypted communication with the highest level of physical security, but Pan notes that it could also have other applications. For example, high-fidelity entanglement could also serve as a fundamental building block for constructing quantum repeaters and scaling up quantum networks.

Carlos Sabín, a physicist at the Autonomous University of Madrid (UAM), Spain, who was not involved in the study, says that while the work is an important step, there is still a long way to go before we are able to perform completely secure and error-free quantum key distribution on an inter-city scale. “This is because quantum entanglement is an inherently fragile property,” Sabín explains. “As light travels through the fibre, small losses accumulate and the entanglement generated is of poorer quality, which translates into higher error rates in the cryptographic keys generated. Indeed, the results of the experiment show that errors in the key range from 3% when the distance is 11 km to more than 7% for 100 km.”

Pan and colleagues now plan to add more atoms to each node and to use techniques like tweezer arrays to further enhance both the entangling rate and the secure key rate over longer distances. “We are aiming for 1000 km, over which we hope to incorporate quantum repeaters,” Pan tells Physics World. “By using processes like ‘entanglement swapping’ to connect a series of such two-node entangled links, we anticipate that we will be able to maintain a similar entangling rate for much longer distances.”

Todd McNutt: how an AI software solution enables creation of the best possible radiation treatment plans

Todd McNutt is a radiation oncology physicist at Johns Hopkins University in the US and the co-founder of Oncospace, where he led the development of an artificial intelligence (AI)-powered tool that simultaneously accelerates radiation planning and elevates plan quality and consistency. The software, now rebranded as Plan AI and available from US manufacturer Sun Nuclear, draws upon data from thousands of previous radiotherapy treatments to predict the lowest possible dose to healthy tissues for each new patient. Treatment planners then use this information to define goals that streamline and automate the creation of a best achievable plan.

Physics World’s Tami Freeman spoke with McNutt about the evolution of Oncospace and the benefits that Plan AI brings to radiotherapy patients and cancer treatment centres.

Can you describe how the Oncospace project began?

Back in 2007, several groups were discussing how we could better use clinical data for discovery and knowledge generation. I had several meetings with folks at Johns Hopkins, including Alex Szalay who helped develop the Sloan Digital Sky Survey. He built a large database of galaxies and stars and it became a huge research platform for both amateur and professional astronomers.

From that discussion, and other initiatives, we looked at moving towards structured data collection for patients in the clinical environment. By marrying these data with radiation treatment plans we could study how dose distributions across the anatomy affect patient outcomes. And we took that opportunity to build a database for radiotherapy.

What inspired the transition from academic research to founding the company Oncospace Inc in 2019?

After populating the database with data from many patients, we could examine which anatomic features impact our ability to generate a plan that minimizes radiation dose to normal tissues while treating target volumes as best as possible. We came up with a feature set that characterized the relationships between normal anatomy and targets, as well as target complexity.

This early work allowed us to predict expected doses from these shape-relationship features, and it worked well. At that point, we knew we could tap into this database and generate a prediction that could help create treatment plans for new patients. We thought of this as personalized medicine: for the first time, we could see the level of treatment plan quality that we could achieve for a specific patient.

I thought that this was useful commercially and that we should get it out to other clinics. Praveen Sinha, who I’d known from my previous work at Philips and now leads Sun Nuclear’s software business line, asked if I wanted to create a startup. The timing was right for both of us and I had a team here ready to go, so we went ahead and did it. With his knowledge of startups and my knowledge of what we wanted to achieve, we had perfect timing and a perfect group to work with.

Plan AI enables both predictive planning and peer review – how do these functions work?

The idea behind predictive planning is that, for a given patient, I can predict the expected dose that I should be able to achieve for them.


Treatment planning involves specifying dosimetric objectives to the planning system and asking it to optimize radiation delivery to meet them. But nobody really knows what the right objectives even are – it is just a trial-and-error process. Plan AI’s prediction provides a rational set of objectives for plan optimization, allowing the planning system’s algorithm to move towards a good solution and making treatment planning an easier problem to solve.
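As a rough sketch of that idea – with hypothetical function and field names, not Plan AI’s actual interface – a set of predicted best-achievable doses can be turned directly into optimizer objectives:

```python
from dataclasses import dataclass

@dataclass
class DoseObjective:
    """A single optimization goal passed to the treatment planning system."""
    structure: str      # organ-at-risk or target name
    metric: str         # e.g. "mean_dose"
    limit_gy: float     # dose limit in gray
    priority: int       # relative weight in the optimizer

def objectives_from_prediction(predicted_best: dict[str, float],
                               margin_gy: float = 1.0) -> list[DoseObjective]:
    """Turn predicted best-achievable mean doses (hypothetical input format)
    into per-structure objectives, with a small margin for achievability."""
    return [
        DoseObjective(structure=organ,
                      metric="mean_dose",
                      limit_gy=round(dose + margin_gy, 1),
                      priority=50)
        for organ, dose in predicted_best.items()
    ]

# Illustrative use: predictions for a head-and-neck case (made-up numbers)
prediction = {"parotid_left": 18.2, "parotid_right": 23.5, "spinal_cord": 30.1}
for obj in objectives_from_prediction(prediction):
    print(obj)
```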

Peer review involves a peer physician looking at every treatment plan to evaluate it for quality and safety. But again, people don’t really know the level of quality you can generate – it depends on the patient’s anatomy. Providing a predicted dose with clinical dose goals enables a rapid review to see whether it is a high-quality plan or not.

In the past we looked at simple things like whether a contour is missing slices or contains discontinuities, and Plan AI checks for this, but you can do far more with AI. For example, you could look at all the contoured rectums in the system and flag when a contour extends too far into the sigmoid colon, suggesting it may be mis-contoured. We have research software that can flag such potential anomalies so they don’t get overlooked.

The Plan AI models are developed using Oncospace’s database of previous treatments; can you describe this data lake?

When we first started, we developed a large SQL database containing all the shape-relationship features and dosimetry features. The SQL language is ideal for being able to query and sift through the data, but when the company was formed, we recognized that there was some age to that technology.

So for the Plan AI data lake, we extracted all the different shape-relationship and shape-complexity features and put them into a Parquet database in the cloud. This made the data lake much more amenable to machine learning algorithms. The SQL data lake at Johns Hopkins is maintained separately and is primarily used to investigate toxicity predictions and spatial dose patterns. But for Plan AI, the models are fixed and streamlined for the specific task of dose prediction.
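A minimal sketch of what querying such a feature store might look like – the file paths and column names below are invented for illustration – is:

```python
import pandas as pd

# Load shape-relationship and dosimetry features for one anatomical region
# (hypothetical file and column names; pandas reads Parquet via pyarrow)
features = pd.read_parquet("datalake/head_neck/shape_features.parquet")
doses = pd.read_parquet("datalake/head_neck/achieved_doses.parquet")

# Join on a plan identifier and keep the columns a model would train on
training = features.merge(doses, on="plan_id")
X = training.filter(like="overlap_")   # e.g. target-organ overlap-volume features
y = training["parotid_left_mean_dose"]

print(X.shape, y.describe())
```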

What does the model training process entail?

One of the first tasks was to curate the data, using the AAPM’s standardized structure-naming model. Our data scientist Julie Shade wrote some tools for automatic name mapping and target identification; that helped us process much larger amounts of data for the model.

Once we had all the shape-relationship and shape-complexity features and all the doses, we trained the models by anatomical region. We have FDA-approved models for the male and female pelvis, thorax, abdomen and head-and-neck. For each of these, we predict the doses for every organ-at-risk. Then we used five-fold cross-validation to make sure that the predictions were good on an internal data set.
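As a generic illustration of five-fold validation – a toy scikit-learn example, not the team’s actual pipeline – each model is trained on four folds of the data and scored on the held-out fifth:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error

# Toy stand-in data: 12 shape-relationship features -> achieved organ-at-risk dose
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))
y = 20 + 5 * X[:, 0] + rng.normal(scale=0.5, size=500)

errors = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = Ridge().fit(X[train_idx], y[train_idx])
    errors.append(mean_absolute_error(y[test_idx], model.predict(X[test_idx])))

print(f"mean absolute error across the five folds: {np.mean(errors):.2f} Gy")
```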

We also performed external validation at institutions including Johns Hopkins and Montefiore hospitals. We created predicted plans from recent treatment plans that had been evaluated by physicians. For almost all cases, both plan quality and plan efficiency were improved with Plan AI.

One aspect of this training is that whenever we drive optimization via predictive planning we want to push towards the best achievable dose. Regular machine learning predicts an expected, or average, dose across all patients. But you never want to drive a treatment plan towards the average dose, because then every plan you generate will be happy being average. Our model predicts both the average and the best achievable dose, and drives plan optimization towards the best achievable.
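One generic way to capture that distinction – an illustrative sketch, not necessarily how the Plan AI models are built – is to fit a low-quantile model alongside the mean model, so the prediction tracks what the best historical plans achieved rather than the average:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Toy data: one anatomical feature -> achieved organ dose, with plan-to-plan spread
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(800, 1))
y = 20 + 15 * X[:, 0] + rng.exponential(scale=3.0, size=800)  # only the best plans sit near the floor

mean_model = GradientBoostingRegressor(loss="squared_error").fit(X, y)
best_model = GradientBoostingRegressor(loss="quantile", alpha=0.10).fit(X, y)  # ~best 10% of history

x_new = np.array([[0.5]])
print(f"average achieved dose:    {mean_model.predict(x_new)[0]:.1f} Gy")
print(f"best-achievable estimate: {best_model.predict(x_new)[0]:.1f} Gy")
```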

When implementing new technology in the clinic, it’s important to fit into the existing treatment workflow. How clinic-ready are these AI tools?

Radiation therapy is protocol-driven: we know what technique we’re going to use to treat and what our clinical dose goals are for different structures. What we don’t know is the patient-specific part of that. So for each anatomical region, we built models out of a wide range of treatment protocols, with many different types of patients, to ensure that the same prediction model works for any protocol. This means a user can use any protocol for treatment and the predictions will still work – they don’t have to retrain anything. It’s ready to go out of the box: there’s a library of protocols to start with, and you can change protocols as needed for your own clinic.

The other part of being clinic-ready is aligning with the way that planning is currently performed, which is using dose–volume histograms. Treatment plans are optimized by manipulating these dose objectives, and that’s exactly what we predict. So users aren’t changing the whole paradigm of how planners operate. They still use their treatment planning system (TPS) – we just put the objectives in there. Basically, a TPS script sends the patient’s CT and contours to the cloud, where Plan AI makes the predictions. The TPS then pulls back in the objectives built from the models, based on this specific patient’s anatomy. The TPS runs the optimization and, as a last step, can send the plan back to Plan AI to check that it fits within the best achievable predictions.
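In pseudo-code terms, that round trip might look something like the sketch below; the endpoints, payloads and helper names are invented for illustration, and the real TPS scripting interface and Plan AI API will differ:

```python
import requests

PLAN_AI_URL = "https://example-plan-ai.cloud/api"  # hypothetical endpoint

def request_objectives(ct_series: bytes, contours: bytes) -> list[dict]:
    """Send the patient's CT and contours, receive per-structure dose objectives."""
    resp = requests.post(f"{PLAN_AI_URL}/predict",
                         files={"ct": ct_series, "contours": contours},
                         timeout=120)
    resp.raise_for_status()
    return resp.json()["objectives"]

def check_plan(plan_dose: bytes) -> dict:
    """Send the optimized plan back to check it against the best-achievable prediction."""
    resp = requests.post(f"{PLAN_AI_URL}/review", files={"dose": plan_dose}, timeout=120)
    resp.raise_for_status()
    return resp.json()

# Inside the TPS script (hypothetical TPS-side helpers):
#   objectives = request_objectives(export_ct(), export_contours())
#   load_into_optimizer(objectives)
#   run_optimization()
#   report = check_plan(export_dose())
```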

Did you encounter any challenges bringing AI into a clinical setting?

Interestingly, the challenges aren’t technical, they are more human related. One of the more systemic challenges is data security when using medical data for training. A nice thing about our system is that the features we generate from treatment plans are just mathematical shape-relationship features and don’t involve a lot of identifiable information.

AI has been used in radiation therapy for image contouring and auto-segmentation, and early efforts were not so good. So, there’s always a good, healthy scepticism. But once you show people that it works and works well, this can be overcome. I have seen some people worried about job security and AI taking over. We are medical professionals designing a treatment plan to care for a patient and there’s a lot of pride and art in that – if you automate that, it takes away some of this pride and art.

I tell people that if we automate the easier things, then they can spend their quality time on the more difficult and challenging cases, because that’s where their talent might be needed more.

Do you have any advice for clinics looking to adopt AI-driven planning?

Introduce it as an assistant, not as a solution. You want people that already know what they’re doing to be able to use their knowledge more efficiently. We want to make their jobs easier and show them that it also improves quality.

With dosimetrists, for example, they create a plan and work hard getting the dose down – and then the physician looks at it and suggests that they can do better. Predictive planning gives them confidence that they are right and takes the uncertainty out of the physician review process. And once you’ve gained that level of confidence, you can start using it for adaptive planning or other technologies.

Where do you see predictive modelling and AI in oncology in five years from now?

Right now, there’s been a lot of data collected, but we want that data to advance and learn. Having multiple centres adding to this pool of knowledge and being able to continually update those models from new, broader data sets could be of huge value.

In terms of patient outcomes, we’ve done a lot of the work looking at how the spatial pattern of dose impacts toxicity and outcomes. This is part of the research being performed at Johns Hopkins and still in discovery mode. But down the road, some of these predictions of normal tissue outcomes could be fed into the planning process to help reduce toxicity at the patient level.

Finally, what’s been the most rewarding part of this journey for you?

During my prior experience building treatment planning systems, the biggest problem was always that nobody knew what the objective was. Nobody knew how to tell the system: “this is the dose I expect to receive, now optimize to get it for me”, because you didn’t know what you could do. For any given patient, you could ask for too much or too little. Now, for the first time, I argue that we actually know what our objective is in our treatment planning.

This levels the playing field between different environments, different countries, or even different dosimetrists with different levels of experience. The Plan AI tool brings all this to a consistent state and enables high quality, efficient planning everywhere. We can provide this predictive planning tool to clinics around the world. Now we just have to get everybody using it.


The future of particle physics: what can the past teach us?

In his opening remarks to the 4th International Symposium on the History of Particle Physics, Chris Llewellyn Smith – who was a director-general of CERN in the 1990s – suggested participants should speak about “what’s not written in the journals”, including “mistakes, dead-ends and problems with getting funding”. Doing so, he said, would “provide insight into the way science really progresses”.

The symposium was not your usual science conference. Held last November at CERN, it took place inside the lab’s 400-seat main auditorium, which has been the venue for many historic announcements, including the discovery of the Higgs boson. Its brown-beige walls are covered with lively designs by the Finnish artist Ilona Rista, suggesting to me the aftermath of a collision of high-energy bar codes.


The focus of the meeting was the development of particle physics in the 1980s and 1990s – a period that saw the construction and operation of various important accelerators and detectors. At CERN, these included the UA1 and UA2 experiments at the Super Proton Synchrotron, where the W and Z bosons were discovered. Later, there was the Large Electron-Positron Collider (LEP), which came online in 1989, and the Large Hadron Collider (LHC), approved five years later.

Delegates also heard about the opening of various accelerators in the US during those two decades, including two at the Stanford Linear Accelerator Center – the Positron-Electron Project in 1980 and the Stanford Linear Collider in 1989. Most famous of all was the start-up of the Tevatron at Fermilab in 1983. Over at Dubna in the former Soviet Union, meanwhile, scientists built the Nuclotron, a superconducting synchrotron, which opened in 1992.

Conference speakers covered unfinished machines of the era as well. The US cancelled two proton–proton facilities – ISABELLE in 1983 and the Superconducting Super Collider (SSC) a decade later. The Soviet Union, meanwhile, abandoned the multi-TeV proton–proton collider UNK a few years later, though news has recently emerged that Russia might revive the project.

Several speakers recounted the discovery of the W and Z particles at CERN in 1983 and the discovery of the top quark at Fermilab in 1995. Others addressed the strange fact that fewer neutrinos from the Sun had been detected than theory suggested. The “solar-neutrino problem”, as it was known, was finally resolved by the discovery of neutrino oscillations – heralded by Takaaki Kajita’s 1998 observation of atmospheric neutrino oscillations at Super-Kamiokande and confirmed for solar neutrinos by Art McDonald’s Sudbury Neutrino Observatory – for which the pair shared the 2015 Nobel Prize for Physics.

The conference also addressed unsuccessful searches for proton decay, axions, magnetic monopoles, the Higgs boson, supersymmetric particles and other targets. Other speakers described projects with highly positive outcomes, such as the advent of particle cosmology, or what some have jokingly dubbed “the heavenly lab”. The development of string theory, grand unified theories and perturbative quantum chromodynamics was tackled too.

In an exchange in the question-and-answer session after one talk, the Greek physicist Kostas Gavroglu referred to many such quests as “failures”. That remark prompted the Australian-born US theoretical physicist Helen Quinn to say she preferred the term “falling forward”; such failures, she said, were instances of “I tried this, and it didn’t work so I tried that”.

In relating his work on detecting gravitational waves, the US Nobel-prize-winning physicist Barry Barish said he felt his charge was not to celebrate the importance of his discoveries nor the ingenuity of the route he took. Instead, Barish explained, his job was to answer the much more informal question: “What made me do what?”.

His point was illustrated by the US theorist Alan Guth, who described the very human and serendipitous path he took to working on cosmic inflation – the super-fast expansion of the universe just after the Big Bang. When he started, Guth said, “all the ingredients were already invented”. But the startling idea of inflation hinged on accidental meetings, chance conversations, unexpected visits, a restricted word count for Physical Review Letters, competitions, insecurities and “spectacular realizations” coalescing.

Wider world

Another theme that arose in the conference was that science does not unfold inside its own bubble but can have extensive and immediate impacts on the world around it. Two speakers, for instance, recounted the invention of the World Wide Web at CERN in the late 1980s. It’s fair to say that no other discovery by a single individual – Tim Berners-Lee – has so radically and quickly transformed the world.

The growing role of international politics in promoting and protecting projects was mentioned too, with various speakers explaining how high-level political negotiations enabled physicists to work at institutions and experiments in other nations. The Polish physicist Agnieszka Zalewska, for example, described her country’s path to membership in CERN, while Russian-born US physicist Vladimir Shiltsev spoke about the “diaspora” of Russian particle physicists after the fall of the Soviet Union in 1991.


Sometimes politics created destructive interference. The US physicist, historian and author Michael Riordan described how the US’s determination to “go it alone” to outcompete Europe in high-energy physics was a major factor in bringing about the opposite: the termination of the SSC in 1993. As a result of that project’s controversial closure, the centre of gravity of high-energy physics shifted to Europe.

Indeed, contemporary politics occasionally hit the conference itself in incongruous and ironic ways. Two US physicists, for example, were denied permission to attend because budgets had been cut and travel restrictions increased. In the end, one took personal time off and paid his own way, leaving his affiliation off the programme.

Before the conference, some people complained that the organizers hadn’t paid enough attention to physicists who’d worked in the Soviet Union but were from occupied republics. Several speakers addressed this shortcoming by mentioning people like Gersh Budker (1918–1977). A Ukrainian-born physicist who worked and died in the Soviet Union, Budker was nominated for a Nobel Prize (1957) and has even had a street at CERN named after him. Unmentioned, though, was that Budker was Jewish and that his father was killed by Ukrainian nationalists in a pogrom.

On the final day of the conference, which just happened to be World Science Day for Peace and Development, CERN mounted a public screening of the 2025 documentary film The Peace Particle. Directed by Alex Kiehl, much of it was about CERN’s internationalism, with a poster for the film describing the lab as “Mankind’s biggest experiment…science for peace in a divided world”.

But in the Q&A afterwards some audience members criticized CERN for allegedly whitewashing Russia for its invasion of Ukraine and Israel for genocide. Those onstage defended CERN on the grounds of its desire to promote internationalism.

The critical point

The keynote speaker of the conference was John Krige, a science historian from Georgia Tech who has worked on a three-volume history of CERN. Those who launched the lab, Krige reminded the audience, had radical “scientific, political and cultural aspirations” for the institution. Their dream was that CERN wouldn’t just revive European science and promote regional collaborative efforts after the Second World War, but also potentially improve the global world order.

Krige went on to quote one CERN founder, who’d said that international science facilities such as CERN would be “one of the best ways of saving Western civilization”. Recent events, however, have shown just how fragile those ambitions are. Alluding to CERN’s Future Circular Collider and other possible projects, Llewellyn Smith ended his closing remarks with a warning.

“The perennial hope that the next big high-energy project will be genuinely global,” he said, “seems to be receding over the horizon due to the polarization of world politics”.

A breakthrough in modelling open quantum matter

Attempts to understand quantum phase transitions in open systems usually rely on real‑time Lindbladian evolution, which tracks how a quantum state changes as it relaxes toward a steady state. This approach is powerful for studying decoherence, dissipation and long‑time behaviour, but it often fails to reveal the deeper structure of the system, including the phase transitions, critical points and hidden quantum order that define its underlying physics.

In this work, the researchers introduce a new framework called imaginary‑time Lindbladian evolution, which allows them to define and classify quantum phases in open systems using the spectrum of an imaginary‑Liouville superoperator. This approach works not only for pure ground states but also for finite‑temperature Gibbs states of stabilizer Hamiltonians, showing its relevance for realistic, mixed‑state conditions.

A key diagnostic in their method is the imaginary‑Liouville gap, the spectral gap between the lowest and next‑lowest decay modes. When this gap closes, the system undergoes a phase transition, a change that is accompanied by diverging correlation lengths and nonanalytic shifts in physical observables. The closing of this gap also coincides with the divergence of the Markov length, a recently proposed indicator of criticality in open quantum systems.
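In symbols – a schematic restatement of the criterion rather than the paper’s full definitions – with the imaginary‑Liouville spectrum ordered from below, the diagnostic reads:

```latex
% Imaginary-Liouville spectrum ordered as lambda_0 <= lambda_1 <= ...
\Delta = \lambda_1 - \lambda_0
% A phase transition is signalled by the gap closing, Delta -> 0, together with
% a diverging correlation length (and a diverging Markov length).
```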

To demonstrate the power of their framework, the researchers map out phase diagrams for systems with $\mathbb{Z}_2^{\sigma} \times \mathbb{Z}_2^{\tau}$ symmetry, capturing both spontaneous symmetry breaking and average symmetry‑protected topological phases. Their method reveals universal critical behaviour that real‑time Lindbladian steady states fail to detect, highlighting why imaginary‑time evolution fills a missing piece in the theory of open‑system phases.

Importantly, the authors emphasise that real‑time Lindbladians remain essential for modelling dissipation in practical settings. Their new framework complements this conventional approach, offering a systematic way to study phase transitions in open systems. They also outline how phase diagrams can be constructed using both bottom‑up (state‑based) and top‑down (Hamiltonian‑based) strategies, illustrating the method with a dissipative transverse‑field Ising model.

Overall, this work provides a unified and versatile way to understand quantum phases in open systems, revealing critical behaviour and topological structure that were previously inaccessible. It opens new directions for studying mixed‑state quantum matter and advances the theoretical foundations needed for future quantum technologies.

Read the full article

A new framework for quantum phases in open systems: steady state of imaginary-time Lindbladian evolution

Yuchen Guo et al 2025 Rep. Prog. Phys. 88 118001

Do you want to learn more about this topic?

Focus on Quantum Entanglement: State of the Art and Open Questions guest edited by Anna Sanpera and Carlo Marconi (2025-2026)

How reversibility becomes irreversible

In the macroscopic world, we see irreversible processes everywhere: heat flowing from hot to cold, gases mixing, systems decaying. Yet at the microscopic level, quantum mechanics is perfectly reversible, with its equations running equally well forwards and backwards in time. How, then, does irreversibility emerge from fundamentally reversible dynamics?

A common explanation is coarse-graining, which simplifies a complex system by ignoring microscopic details and focusing only on large-scale behaviour. To make the micro–macro divide precise, however, one must first define what “macroscopic” means. Here it is given a quantitative inferential meaning: a state is macroscopic if it is perfectly inferable from the perspective of a specified measurement and prior. Central to this framework is a coarse-graining map built from the measurement and its optimal Bayesian recovery via the Petz map; macroscopic states are precisely its fixed points, turning macroscopicity into a sharp condition of perfect inferability. This construction is grounded in Bayesian retrodiction, which infers what a system likely was before it was measured, together with an observational deficit that quantifies how much information is lost in forming a macroscopic description.
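For reference, the Petz recovery map associated with a channel N and prior state σ is commonly written as follows (the standard textbook form; the paper’s precise construction may differ in its details):

```latex
% Petz recovery map for a channel N and prior state sigma
\mathcal{P}_{\sigma,\mathcal{N}}(\rho)
  = \sigma^{1/2}\,\mathcal{N}^{\dagger}\!\Bigl(\mathcal{N}(\sigma)^{-1/2}\,\rho\;\mathcal{N}(\sigma)^{-1/2}\Bigr)\,\sigma^{1/2}
% Macroscopic states are the fixed points of the resulting coarse-graining map:
% they can be inferred (recovered) perfectly from the measurement and prior.
```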

States that are macroscopically inferable can be characterised in several equivalent ways, all tied to a new measure of disorder called macroscopic entropy, which captures how irreversible, or “uninferable”, a macroscopic process appears from the observer’s perspective. This perspective is formalised through inferential reference frames, built from the combination of a prior and a measurement, which determine what an observer can and cannot recover about the underlying quantum state.

The researchers also develop a resource theory of microscopicity, treating macroscopic states as free and identifying the operations that cannot generate microscopic detail. This unifies and extends existing resource theories of coherence, athermality, and asymmetry. They further introduce observational discord, a new way to understand quantum correlations when observational power is limited, and provide conditions for when this discord vanishes.

Altogether, this work reframes macroscopic irreversibility as an information-theoretic phenomenon, grounded not in a fundamental dynamical asymmetry but in an inferential asymmetry arising from the observer’s limited perspective. It offers a unified way to understand coarse-graining, entropy, and the emergence of classical behaviour from quantum mechanics. It deepens our understanding of time’s direction and has implications for quantum computing, thermodynamics, and the study of quantum correlations in realistic, constrained settings.

Read the full article

Macroscopicity and observational deficit in states, operations, and correlations

Teruaki Nagasawa et al 2025 Rep. Prog. Phys. 88 117601

Do you want to learn more about this topic?

Focus on Quantum Entanglement: State of the Art and Open Questions guest edited by Anna Sanpera and Carlo Marconi (2025-2026)

Visible light paints patterns onto chiral antiferromagnets

Researchers at Los Alamos National Laboratory in New Mexico, US have used visible light to both image and manipulate the domains of a chiral antiferromagnet (AFM). By “painting” complex patterns onto samples of cobalt niobium sulfide (Co1/3NbS2), they demonstrated that it is possible to control AFM domain formation and dynamics, boosting prospects for data storage devices based on antiferromagnetic materials rather than the ferromagnetic ones commonly used today.

In antiferromagnetic materials, the spins of neighbouring atoms in the material’s lattice are opposed to each other (they are antiparallel). For this reason, they do not exhibit a net magnetization in the absence of a magnetic field. This characteristic makes them largely immune to disturbances from external magnetic fields, but it also makes them all but invisible to simple electrical and optical probes, and extremely difficult to manipulate.

A special structure

In the new work, a Los Alamos team led by Scott Crooker focused on Co1/3NbS2 because of its topological nature. In this material, layers of cobalt atoms are positioned, or intercalated, between monolayers of niobium disulfide, creating 2D triangular lattices with ABAB stacking. The spins of these cobalt atoms point either toward or away from the centres of the tetrahedra formed by the atoms. The result is a noncoplanar spin ordering that produces a chiral, or “handed,” spin texture.

This chirality affects the motion of electrons in the material because when an electron passes through a chiral pattern of spins, it picks up a geometrical phase known as a Berry phase. This makes it move as if it were “seeing” a region with a real magnetic field, giving the material a nonzero Hall conductivity which, in turn, affects how it absorbs circularly polarized light.

Characterizing a topological antiferromagnet

To characterize this behaviour, the researchers used an optical technique called magnetic circular dichroism (MCD) that measures the difference in absorption between left and right circularly polarized light and depends explicitly on the Hall conductivity.

Similar to the MCD that is measured in well-known ferromagnets such as iron or nickel, the amplitude and sign of the MCD measured in Co1/3NbS2 varied as a function of the wavelength of the light. This dependence occurs because light prompts optical transitions between filled and empty energy bands. “In more complex materials like this, there is a whole spaghetti of bands, and one needs to consider all of them,” Crooker explains. “Precisely which mix of transitions are being excited depends of course on the photon energy, and this mix changes with energy. Sometimes the net response is positive, sometimes negative; it just depends on the details of the band structure.”

To understand the mix of transitions taking place, as well as the topological character of those transitions, scientists use the concept of Berry curvature, which is the momentum-space version of the magnetic field-like effect described earlier. If the accumulated Berry phase is positive (negative), then the electron is moving through a spin texture with right-handed (left-handed) chirality, which is captured by the Berry curvature of the band structure in momentum space.

Imaging and painting chiral AFM domains

To image directly the domains with positive and negative chirality, the researchers cooled the sample below its ordering temperature, shined light of a particular wavelength onto it, and measured its MCD using a scanning MCD microscope. The sign of the measured MCD value revealed the chirality of the AFM domains.

To “write” a different chirality into these AFM domains, the researchers again cooled the sample below its ordering temperature, this time in the presence of a small positive magnetic field B, which fixed the sample in a positive chiral AFM state. They then reversed the polarity of B and illuminated a spot of the sample to heat it above the ordering temperature. Once the spot cooled down, the negative-polarity B-field changed the AFM state in the illuminated region into a negative chirality. When the “painting” was finished, the researchers imaged the patterns with the MCD microscope.

In the past, a similar thermo-magnetic scheme gave rise to ferromagnet-based data storage disks. This work, which is published in Physical Review Letters, marks the first time that light has been used to manipulate AFM chiral domains – a fundamental requirement for developing AFM-based information storage technology and spintronics. In the future, Crooker says the group plans to extend this technique to characterize other complex antiferromagnets with nontrivial magnetic configurations, use light to “write” interesting spatial patterns of chiral domains (patterns of Berry phase), and see how this influences electrical transport.

Green concrete: paving the way for sustainable structures

Grey, ugly, dull. Concrete is not the most exciting material in the world. That is, until you start to think about its impact on our lives. Concrete is the second most consumed material on the planet after water. Humanity uses about 30 billion tonnes of the stuff every year, the equivalent of building an entire new New York City every month. Put another way, there is so much concrete in the world and so much being made that by the 2040s it will outweigh all living matter.

As the son of a builder, I have made a few concrete mixes over the years myself, usually following my father’s tried and trusted recipe. Take one part cement (fine mineral powder), two parts sand, and four parts aggregate (crushed stone), then mix and add enough water until it all goes gloopy.

The ubiquity and low cost of these simple ingredients are just two of the reasons for concrete’s global reach. In liquid form, it can be moulded into almost any shape, and once set, it is as hard and durable as stone. What’s more, it doesn’t burn, rot or get eaten by animals.

These factors make concrete the ideal material for everything from vast imposing dams to sleek kitchen floors. However, its gargantuan presence across society comes at an equally epic environmental cost. If concrete were a country, it would rank third behind only the US and China as a greenhouse gas emitter.

Though raw material processing and transport of concrete are part of the problem, concrete’s biggest environmental impact comes from the heat and chemical processes involved in producing cement. Ordinary cement clinker (the raw form of cement before it is ground to a powder) is the product of heating limestone up to 1450 °C until it breaks apart into lime and carbon dioxide (CO2). This heating requires lots of energy and the chemical process releases huge amounts of the greenhouse gas CO2 – meaning that cement makes up around 90% of the carbon footprint of an average concrete mix.
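The chemistry behind that statement is the calcination of calcium carbonate; a back-of-envelope look at the molar masses (shown below as a rough illustration) makes clear why the process emits CO2 regardless of how the kiln is heated:

```latex
% Calcination of limestone in the cement kiln
\mathrm{CaCO_3} \;\xrightarrow{\;\text{heat}\;}\; \mathrm{CaO} + \mathrm{CO_2}
% Molar masses: CaCO3 ~ 100 g/mol, CaO ~ 56 g/mol, CO2 ~ 44 g/mol,
% so roughly 0.44 tonnes of CO2 is released for every tonne of limestone decomposed,
% before any fuel-related emissions are counted.
```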


In the UK and some other parts of the world, this climate impact is well recognized, with the industry having made significant efforts to decarbonize over the last few decades. “Since 1990, the UK concrete industry has decreased its direct and indirect environmental impacts by over 53% through various technology levers,” says Elaine Toogood – an architect and senior director at the Mineral Products Association’s Concrete Centre, the UK’s technical hub for all things concrete.

This reduction has been achieved through actions such as fuel switching, decarbonizing electricity and transport networks, and carbon capture technology. “For example, over 50% of all the heat that’s needed to make cement is now supplied by waste-derived fuels,” Toogood adds.

Yet the sheer scale of the global concrete industry means that much more needs to be done to fully mitigate concrete’s carbon impact. Can physics, and more specifically AI, lend a hand?

Low-carbon replacements

Replacing cement – concrete’s least green ingredient – with low-carbon alternatives seems like a good place to start. Two well-proven options have been available for decades.

Fly ash – the by-product of burning coal at power plants – can replace about 30% of cement in concrete mixes. It has been used in the construction of many prominent structures including the Channel Tunnel, which opened in 1994. Blast furnace slag – the by-product of iron and steel production – is another capable replacement, and can make up as much as 70% of the cement content. Slag was used in 2009 to substitute half of the regular cement in the precast concrete units that now make up the sea defences on Blackpool beach.

Yet although these waste materials are currently extensively used as cement or concrete additions in the UK and elsewhere, they rely on very polluting sources (coal-fired power plants and blast furnaces) that are gradually being phased out globally to meet climate targets. As a result, fly ash and blast furnace slag are not long-term solutions. New low-carbon materials are needed, which is where physics can play a decisive role.

Based at Debre Tabor University in Ethiopia, Gashaw Abebaw Adanu is an expert in innovative construction materials. In 2021 he and colleagues investigated the potential of partially replacing (0%, 5%, 10%, 15% and 20%) standard cement with ash from burning lowland Ethiopian bamboo leaf, a common local construction waste material (Adv. Civ. Eng 10.1155/2021/6468444). The findings were encouraging. Though the concrete took longer to set with increased bamboo leaf ash content, the material’s strength, water absorption and resistance to sulphate attack (concrete breakdown caused by sulphate ions reacting with the hardened cement paste) all improved for 5–10% bamboo leaf ash mixes. The results suggest that up to 10% of cement could be swapped for this local low-carbon alternative.

Steel, copper – or hair?

More recently, Adanu has turned his focus to concrete fibre reinforcement. Adding small amounts of steel, copper or polyethylene fibres is known to increase concrete’s ductility and crack resistance by up to 200% and 90%, respectively. The tiny fibres act like micro-stitches throughout the entire mix, transforming concrete from a brittle material into a tough, energy-absorbing composite.

Fibre reinforcement also leads to major cost savings and a reduced carbon footprint, primarily by removing the need for traditional steel rebar and mesh: 50 kg of steel fibres can often do the work of more than 100 kg of traditional rebar. Eliminating this expensive material also reduces labour and maintenance costs.

In his latest research, Adanu has explored an unexpected alternative fibre reinforcement material that would decrease costs further as it would otherwise go to landfill: human hair (Eng. Res. Express 7 015115). Adanu took waste hair from barbershops in Debre Tabor (with permission, of course) and added it in small quantities to standard concrete mixes. “It’s not biodegradable, it’s not compostable, but as a fibre reinforcement material, we found that using 1–2% human hair improves the concrete’s tensile strength, compressive strength, cracking resistance and reduces shrinkage,” says Adanu. “It makes concrete more clean and sustainable, and because it improves the quality of the concrete, it reduces cost at the same time.”

Research like Adanu’s, involving experimentation with local materials, has been the driving force for innovation in construction for millennia – from the ancient Neolithic practice of boosting mudbricks’ strength by adding local straw, to the Romans using volcanic dust as high-quality cement for concrete constructions like the Pantheon in Rome, a structure that still stands to this day, with its 43.3-m diameter non-reinforced concrete dome remaining the largest in the world. But testing one material at a time is no longer the only way.


Taking a more modern, wide-ranging approach, a team of researchers led by Soroush Mahjoubi and Elsa Olivetti of the Massachusetts Institute of Technology (MIT) recently mined the cement and concrete literature, and a database of over one million rock samples, looking for cement ingredient substitutes (Communications Materials 6 99). The study not only confirmed the potential of the well-known alternatives fly ash and metallurgic slags, but also various biomass ashes like the bamboo leaf ash Adanu investigated, as well as rice husk, sugarcane bagasse, wood, tree bark and palm oil fuel ashes.

The meta-review also identified various other waste materials with high potential. These include construction and demolition wastes (ceramics, bricks, concrete), waste glass, municipal solid waste incineration ashes, and mine tailings (iron ore, copper, zinc), as well as 25 igneous rock types that could significantly reduce cement’s carbon impact.

AI to the rescue

Although a number of these alternative concrete materials have been known for some time, they have struggled to make an impact, with very few being used to partially replace regular cement in ready-mix concretes. Getting construction companies or concrete contractors to give them a try is no simple task.

“Concrete contractors are used to using certain mixes for certain jobs at certain times of the year, so they can plan a site and project based on how those materials are going to behave,” says Toogood. “Newer mixes act slightly differently when fresh,” she adds, which makes life tricky for those running a construction site, where concrete that behaves in a predictable manner is critical so that things run smoothly and efficiently.

Two physicists – Raphael Scheps and Gideon Farrell – aim to build this trust in low-carbon alternatives through their UK construction technology company Converge. Starting out using sensors to measure the real-time performance of different mixes of concrete in situ, they have built one of the world’s largest datasets on the performance of concrete.


They can now apply an AI model underpinned by physics principles. The program simulates the physical and chemical interactions of different components to predict the performance of a vast number of concrete mixes in a wide range of situations to a high level of accuracy. And this is key, as it builds trust to experiment with lower-carbon mixes. “With projects in the UK and Australia, we’ve helped people tweak the mix that they’re using and achieve quite major carbon savings,” says Scheps. “Anywhere from 10% all the way up to 44%.”

Currently used to recommend existing cost-saving concrete recipes, Scheps sees Converge’s AI model becoming more sophisticated over time. “As it starts to uncover the real fundamental physics-based rules for what drives concrete chemistry, our model will make projections for entirely new materials,” he enthuses.

Also exploring the power of AI to optimize concrete production is US company Concrete.ai. Like Converge, Concrete.ai was born from the idea of applying physics principles to optimize traditional materials and industries; specifically, how AI can be used to reduce the carbon footprint of concrete. And also like Converge, the company’s technology rests on one of the world’s largest concrete databases, consisting of vast amounts of different recipes and materials, alongside their associated performances.

Trained on this dataset, Concrete.ai’s generative AI model creates millions of possible mix designs to identify the optimal concrete recipe for any particular application. “The main difference between a solution like Concrete.ai’s and general models like ChatGPT or Gemini is that our goal is really to create recipes that don’t exist yet,” explains chief technology officer and co-founder Mathieu Bauchy. “Popular large language models regurgitate what they have been trained on and tend to hallucinate, whereas our model discovers new recipes that have never been produced before without breaking the laws of physics or chemistry, and in a reliable way.”

Bauchy sees Concrete.ai’s role as a bridge between concrete producers keen to cut their costs and carbon footprint, and innovators like Adanu or the MIT group exploring new low-carbon concrete materials who are unable to demonstrate the performance of these materials in real-world scenarios and at scale.

Circular benefits

It is perhaps apt that the industry most in need of AI insights from the likes of Converge, Concrete.ai and their growing number of competitors is the AI industry itself. New data centres being used to train, deploy and deliver AI applications and services are the cause of a huge spike in the greenhouse gas emissions of tech giants such as Google, Meta, Microsoft and Amazon. And one of the biggest contributors to those emissions is the concrete from which these hyperscale facilities are built.


This is the reason Meta recently partnered with concrete maker Amrize to develop AI-optimized concrete. For Meta’s new 66,500 m2 data centre in Rosemount, Minnesota, the partners applied Meta’s AI models and Amrize’s materials-engineering expertise to deliver concrete that met key criteria including high strength and low carbon content, as well as practical performance characteristics like decent cure speed and surface quality. The partners estimate that the custom mix will reduce the total carbon footprint of this concrete by 35%.

“There is an interesting synergy between concrete and AI,” says Bauchy. “AI can help design greener concrete, and on the other hand, concrete can be used to build more sustainable data centres to power AI.” With other tech giants exploring AI’s potential in reducing the carbon footprint of the concrete they use too, it may well be that the very places in which AI is developed become the testbeds for AI-derived sustainable green concrete solutions.

New journal aims to advance the interdisciplinary field of personalized health

Personalized health – the use of individualized measurements to address each patient’s specific needs – is a research field that’s evolving at pace. Bringing this level of personalization into the clinic is an interdisciplinary challenge, requiring the development of sensors that generate clinically meaningful data outside the hospital, new imaging modalities and analysis techniques, and computational tools that address the uncertainties of dealing with just one individual.

Much of the most impactful work in this field sits in the spaces between established disciplines, and for researchers looking to publish their findings or read about the latest breakthroughs, it is often scattered across discipline-specific journals. A new open access journal from IOP Publishing – Medical Sensors & Imaging (MSI) – aims to address this fragmentation, providing a dedicated home for authors working across sensing, imaging, modelling and data-driven healthcare.

“We want a journal where physicists, engineers, computer scientists, biomedical researchers and clinicians can publish and read work that advances personalized health, without confinement into traditional silos,” explains founding editor-in-chief Marco Palombo from Cardiff University. “MSI also aims to play an important role in strengthening interdisciplinary exchange.”

“The community needs a specialized forum that doesn’t just report on new materials or a clinical trial, but validates innovations that can specifically solve complex biomedical challenges,” adds deputy editor Xiliang Luo from Qingdao University of Science and Technology. “I think this journal is a perfect fit for that gap.”

Connecting communities

Published by IOP Publishing on behalf of the Institute of Physics and Engineering in Medicine (IPEM), MSI aims to dismantle the barriers between engineering innovation and clinical application by creating a community of experts that work together to translate innovative technology into clinical settings.

MSI sits within IPEM’s journal portfolio, which includes Physics in Medicine & Biology, Physiological Measurement and Medical Engineering & Physics. Its aims and scope were designed to complement, rather than overlap with, these existing journals, and to provide a dedicated venue for translational work and practical applied research that may otherwise struggle to fit a traditional scope.

Being part of this family of journals brings with it strong editorial standards, an established readership base and a commitment to scientific integrity. The journal also offers rapid, high-quality peer review, with feedback that’s constructive, rigorous and fair. MSI is fully open access, which maximizes the visibility, reach and impact of its published papers.

“For a new journal in a dynamic field, ensuring content is discoverable and barrier-free is essential for building an audience quickly and establishing credibility,” says Palombo. “We also wanted MSI to support global participation. Many excellent groups operate with limited budgets but make major scientific contributions. Open access reduces inequities in who can read and build on published work.”

“For the authors, we can provide a specialized platform for scientists whose work transcends traditional boundaries, offering visibility to a broad audience that’s eager for translational solutions,” says Luo. “And for the readers, I think we will be the go-to resource for academic researchers, industry R&D leaders, and healthcare innovators seeking the latest breakthroughs in personalized health monitoring and advanced diagnostics.”

Hot topics

Palombo contributed to the strategic development of the journal at an early stage, drawing upon his experience in healthcare and medical imaging research and engaging with the research community to identify the scientific niche that MSI could fill. Working with IOP Publishing, he helped shape the journal’s aims and scope and assembled a diverse, internationally recognized editorial board whose expertise aligns with the journal’s mission – including Luo, who brings specialist knowledge of wearable technologies and biosensors.

The journal will publish high-quality research on novel biomedical sensing and imaging techniques, along with the algorithms, validation frameworks and translational studies that demonstrate their application in real-world medicine. MSI also provides a platform to showcase research on hot topics such as wearable and implantable sensors for continuous physiological monitoring, microneedle-based sensing technologies and breath analysis.

The development of flexible and biocompatible materials will be key for the growth of bio-integrated devices and biodegradable or transient electronics, as will anti-fouling strategies that enable use of sensors in complex biological environments. On the imaging side, the journal scope encompasses mainstay medical imaging techniques such as MRI, CT, ultrasound, PET and SPECT, as well as emerging multimodal and hybrid approaches, with a focus on technical innovation and translational relevance.

“Given my own background, I’m particularly keen to see strong submissions in the area of MRI, including advanced quantitative biomarkers and approaches that probe tissue microstructure,” notes Palombo. “I also see huge potential in connecting imaging to computational modelling – particularly digital twins – and in building imaging pipelines that enable personalized diagnosis and prognosis.”

“Other exciting areas include combining sensing and imaging technologies into one system, and closed-loop ‘sense then act’ systems, which sense something and can then release medicine to treat the disease,” says Luo.

The rise of AI

Artificial intelligence (AI) is becoming increasingly central to both sensing and imaging, and will likely play a major role in the evolution of personalized health, enabling a shift towards multimodal fusion of sensor streams, imaging and clinical data. AI could also facilitate the introduction of integrated sensor systems that collect data and interpret signals in real time, and digital twins that link patient-specific data with computational models to simulate disease progression or treatment response.

Palombo emphasizes the importance of trustworthy AI: methods that don’t just provide an output, but are explainable, robust and explicitly handle uncertainty. This is a direction being pursued across the wider AI field, but it is especially important in healthcare. He also cites the increasing momentum around green healthcare and green AI, with personalized health technologies designed to reduce waste and minimize energy consumption, and clinical models developed with far greater computational efficiency.
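
As a toy illustration of what “explicitly handling uncertainty” can mean in practice – and emphatically not a method prescribed by Palombo or MSI – the sketch below fits a small bootstrap ensemble to simulated sensor-calibration data and reports the spread of the ensemble’s predictions alongside the point estimate.

# Toy illustration of uncertainty-aware prediction using a bootstrap ensemble.
# The data and model are invented for illustration only.
import numpy as np

rng = np.random.default_rng(seed=0)

# Simulated calibration data: noisy sensor readings against a reference value
reference = np.linspace(0, 10, 40)
reading = 2.0 * reference + 1.0 + rng.normal(scale=1.5, size=reference.size)

def ensemble_predict(x_new, n_models=50):
    """Fit straight lines to bootstrap resamples of the data and return the
    mean prediction and its standard deviation (an explicit uncertainty)."""
    predictions = []
    for _ in range(n_models):
        idx = rng.integers(0, reference.size, reference.size)  # resample with replacement
        coeffs = np.polyfit(reference[idx], reading[idx], deg=1)
        predictions.append(np.polyval(coeffs, x_new))
    predictions = np.array(predictions)
    return predictions.mean(axis=0), predictions.std(axis=0)

estimate, sigma = ensemble_predict(np.array([2.5, 9.0]))
print("estimate:", estimate, "uncertainty:", sigma)

A clinical-grade system would go much further, but even this minimal pattern makes the model’s confidence part of its output rather than an afterthought.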

“It would be fantastic to have an AI model running directly on the sensor, for example, and this ties in with the environmental impact of AI,” he explains. “If we keep AI small and manageable, then it pollutes less, is more affordable for everybody and can be deployed on small, lightweight devices.”

A community focal point

Looking ahead, Palombo hopes that MSI will become a leading platform for interdisciplinary innovation in personalized health, and the routine home for publishing major advances in sensing, imaging, modelling and trustworthy AI. “Over time, I’d like the journal to build depth in core areas, while also actively shaping emerging directions such as digital twins, uncertainty-aware and explainable AI, multimodal integration and technologies that are genuinely deployable in clinical workflows.”

“Currently, the fields of sensor engineering and clinical medicine often run on parallel tracks. My hope is that this journal will force these tracks to converge over time,” adds Luo. “I see the journal fostering a new language where chemists, physicists, engineers and doctors can understand each other by publishing papers in MSI.”

  • Medical Sensors & Imaging is fully open for submissions, with the first issue expected to publish in Q2/Q3 of this year. During the launch phase, IOP Publishing is covering the article processing charge (APC) for all accepted papers, enabling early contributors to publish at no cost while helping the journal establish a strong foundation of high-quality inaugural content. Beyond this period, many authors will benefit from support through the publisher’s transformative agreements, while others may be eligible for APC waivers and discounts.

