
Bubbles, foams and self-assembly: a conversation with Early Career Award winner Aurélie Hourlier-Fargette

Congratulations on winning the 2025 JPhys Materials Early Career Award. What does this mean for you at this stage of your career?

I am really grateful to the Editorial Board of JPhys Materials for this award and for highlighting our work. This is a key recognition for the whole team behind the results presented in this research paper. We were taking a new turn in our research with this topic – trying to convince bubbles to assemble into crystalline structures towards architected materials – and this award is an important encouragement to continue pushing in this direction. At the crossroads of physics, physical chemistry, materials science and mechanics, we hope that this is only the beginning of our interdisciplinary journey around bubble assemblies and foam-based materials.

Your research explores elasto-capillarity and foam architectures. What inspired you to work in this fascinating area?

I always say that research is a series of encounters – with people, and with scientific themes and objects. I was lucky to discover this interdisciplinary world as an undergraduate, during an internship on elasto-capillarity at the intersection of physics and mechanics. The scientific communities working on these topics – and also on foams – are fantastic. In both fields, I was fortunate to meet talented people who inspired my future work, combining scientific skills and creativity.

In France, the GDR MePhy (mechanics and physics of complex systems) played a key role in broadening my perspective, by organizing workshops on many different topics, always with interdisciplinarity in mind.

You have demonstrated mechanically guided self-assembly of bubbles leading to crystalline foam structures. What’s the significance of this finding and how could it impact materials design?

In the paper, part of the journal’s Emerging Leaders collection, we provide a proof-of-concept with alginate and polyurethane materials to demonstrate that it is possible to use a fibre array to order bubbles into a crystalline structure, which can be tuned by choosing the fibre pattern, and to keep this ordering upon solidification to provide an alternative approach to additive manufacturing. This work is mainly fundamental, and we hope it paves the way toward a wider use of mechanical self-assembly principles in the context of porous architected materials.

The use of solidifying materials in these studies serves two purposes: first, it allows us to observe the systems with X-ray microtomography once solidified, and second, it demonstrates that we could use such techniques to build actual solid materials.

Guiding bubbles with fibre arrays

What excites you most about this field right now, and where do you see the biggest opportunities for breakthroughs?

Combining physical understanding and materials science is certainly a great area of opportunity to better exploit mechanical self-assembly. It is very compelling to search for strategies based on physical principles to generate materials with non-trivial mechanical or acoustic properties. Capillarity, elasticity, stimuli-induced modification of systems, as well as geometrical considerations, all offer a great playground to explore. Curiosity-driven research has many advantages, and often, unexpected observations completely reshape the trajectory that we had in mind.

Could you tell us about your team’s current research priorities and the directions you are most focused on?

We believe that focusing first on the underlying physical principles, especially in terms of mechanical self-assembly, will provide the building blocks to generate novel materials. One key research axis we are exploring now is widening the range of materials that can be used for “liquid foam templating” (a general approach that involves controlling the properties of a foam in its liquid state to control the resulting properties of the foam after solidification). We focus on the solidification mechanisms, either by playing with external stimuli or by controlling the solidification reactions via the introduction of catalysts or solidifying agents.

What are the key challenges in achieving ordered structures during solidification?

Liquid foams provide beautiful hierarchical structures that are also short-lived. To take advantage of the mechanical self-assembly of bubbles to build solid materials, understanding the relevant timescales is key: depending on whether the foam has time to drain and destabilize before solidification or not, its final morphology can be completely different. Controlling both the ageing mechanisms and the solidification of the matrix is particularly challenging.

How do you see foam-based materials impacting real-world applications?

Both biomedical devices and soft robots often rely on soft materials – either to match the mechanical properties of biological tissues or to provide the mechanical properties that make motion possible. Being able to customize self-assembled hierarchical structures could allow us to explore a wider range of even softer materials, with specific properties resulting from their structural features. Applications could also extend to stiffer materials, mainly in the context of acoustic properties and wave propagation in such architected structures.

What are the most surprising behaviours you have observed during the processes of self-assembly and solidification of foams?

For the experiments detailed in the paper, the structures revealed their beauty once the X-ray tomography scans were performed. When we varied the parameters, we could only guess what was going to happen before getting the visual confirmation a few hours later. We were really happy to see that changing the pattern of the fibre array could indeed provide different ordered foam structures. In some other projects we are working on, foam stability has been a real challenge. We were sometimes surprised to obtain long-lasting liquid systems.

X-ray tomography scans of foams

Looking ahead, what are the next big questions you hope to tackle in your field?

In the fundamental context of the physics and mechanics of elasto-capillarity, the study of model systems involving self-assembly mechanisms will be a key aspect of our research. I then hope to successfully identify key applications for such architected systems – mainly in the fields of mechanical or acoustic metamaterials, but also for biomedical engineering. Regarding foam solidification, understanding the mechanisms of pore opening during the solidification process – leading to either closed-cell or open-cell foams – is also an important question for the community.

You worked on bio-integrated electronics during your postdoc and contributed to a seminal paper on skin-interfaced biosensors for wireless monitoring in neonatal ICUs. How has that shaped your current research interests?

That fantastic experience allowed me to work in a group with numerous people from many different backgrounds, pushing the frontiers of interdisciplinarity in ways I could not have imagined before joining the Rogers group as a postdoc. At the moment, I am focusing on more fundamental questions, but it is definitely important to keep in mind what physics and materials science can bring to a broad variety of applications that offer solutions for society, in biomedical engineering and beyond.

Your research often combines theory and experiment and involves interdisciplinary collaboration. How do you see these collaborations shaping the future of your field?

It is always the scientific questions we want to answer – or the goals we aim to achieve – that should define the collaborations, bringing together multiple skills and backgrounds to tackle a shared challenge. Clearly, at the intersection of physics, physical chemistry, materials science and mechanics, there are many interesting questions that require contributions from different disciplines and skillsets. A key aspect is how people trained in different areas learn to “speak the same language” in order to advance interdisciplinary topics.

X-ray microtomography on the MINAMEC platform

How do you envision your research evolving over the next 5–10 years?

I hope to be able to combine fundamental research and meaningful applications successfully – perhaps in the form of medical devices or tools for soft robots. There are many exciting possibilities, but it is certainly still too early for me to predict.

What advice would you give early-career researchers pursuing interdisciplinary projects?

Believe in what you are doing! We push boundaries more easily in areas we are passionate about, and we are also more productive when we work on topics for which we have found a supportive environment – with a unique combination of collaborators and access to state-of-the-art equipment.

In research, and especially in interdisciplinary fields, a key challenge is finding the right balance: you need to stay focused on the research projects that matter for you, while also keeping an open mind and staying aware of what others are doing. This broader vision helps you understand how your work integrates into a larger, more complex landscape.

Finally, what inspires you most as a scientist, and what keeps you motivated during challenging phases of research?

I have always liked working with desktop-scale experiments, where we can touch the objects and have an intuition for the physical mechanisms behind the observed phenomena.

Another source of inspiration is the beauty of the scientific objects that we study. With droplets, bubbles and foams – which are not only scientifically interesting but also beautiful – there is a strong connection with art and photography.

And finally, a key aspect of our professional life is the people we work with. It is clearly an additional motivation to feel part of a community where we can discuss both scientific questions and ways to improve how research is organized, as well as help younger students, PhDs and postdocs find their professional path. Working with amazing colleagues definitely helps when the path is longer or more difficult than expected.

From bunkers to bright spaces: the future of smart shielded radiosurgery treatment rooms


This webinar explores how smart shielding is transforming the design of Leksell Gamma Knife radiosurgery environments, shifting from bunker‑like spaces to open, patient‑centric treatment rooms. Drawing from dose‑rate maps, room‑dimension considerations and modern shielding innovations, we’ll demonstrate how treatment rooms can safely incorporate features such as windows and natural light, improving both functionality and patient experience.

Dr Riccardo Bevilacqua will walk through the key questions that clinicians, planners and hospital administrators should ask when evaluating new builds or upgrading existing treatment rooms. We will highlight how modern shielding approaches expand design possibilities, debunk outdated assumptions and offer practical guidance on evaluating sites and educating stakeholders on what lies “beyond bunkers”.


Dr Riccardo Bevilacqua

Dr Riccardo Bevilacqua, a nuclear physicist with a PhD in neutron data for Generation IV nuclear reactors from Uppsala University, has worked as a scientist for the European Commission and at various international research facilities. His career has transitioned from research to radiation safety and back to medical physics, the field that first interested him as a student in Italy. Based in Stockholm, Sweden, he leads global radiation‑safety initiatives at Elekta. Outside of work, Riccardo is a father, a stepfather and writes popular‑science articles on physics and radiation.

The physics of why basketball shoes are so squeaky

If you have ever watched a basketball match, you will know that along with the sound of the ball being bounced, there is also the constant squeaking of shoes as the players move across the court.

Such noise is a common occurrence in everyday life, from the scraping of chalk on a blackboard to the squeal of bicycle brakes.

Physicists in France, Israel, the UK and the US have now recreated the phenomenon in a lab and discovered that the squeaking is due to a previously unseen mechanism.

Katia Bertoldi from the Harvard John A. Paulson School of Engineering and Applied Sciences and colleagues slid a basketball shoe, or a rubber sample, across a smooth glass plate and used high-speed imaging and audio measurements to analyse the squeak.

Previous studies looking at the effect suggested that “pulses” are created when two materials “stick and slip”, but such studies focused on slow movements, which do not create squeaks.

Bertoldi and team instead found that the noise was not caused by random stick-slip events, but rather by deformations of the rubber sole pulsing in bursts, or rippling, across the surface.

In this case, small parts of the sole change shape and lose and regain contact with the surface, with the “ripple” travelling at near-supersonic speeds.

The pitch of the squeak even matches the rate of the “bursts”, which is determined by the stiffness and thickness of the shoe sole.

The authors also found that if a soft surface is smooth, the pulses are irregular and produce no sharp sounds, whereas ridged surfaces – like the grip patterns on sports shoes – produce consistent pulse frequencies, resulting in a high-pitched squeak.

In another twist, lab experiments showed that in some instances, the slip pulses are triggered by triboelectric discharges – miniature lightning bolts caused by the friction of the rubber.

Indeed, the physics of these pulses shares similar features with fracture fronts in plate tectonics, and so a better understanding of the dynamics that occur between two surfaces may offer insights into friction across a range of systems.

“These results bridge two fields that are traditionally disconnected: the tribology of soft materials and the dynamics of earthquakes,” notes Shmuel Rubinstein from Hebrew University. “Soft friction is usually considered slow, yet we show that the squeak of a sneaker can propagate as fast as, or even faster than, the rupture of a geological fault, and that their physics is strikingly similar.”

Dark optical cavity alters superconductivity

An international team of researchers has shown that superconductivity can be modified by coupling a superconductor to a dark electromagnetic cavity. The research opens the door to the control of a material’s properties by modifying its electromagnetic environment.

Electronic structure defines many material properties – and this means that some properties can be changed by applying electromagnetic fields. The destruction of superconductivity by a magnetic field and the use of electric fields to control currents in semiconductors are two familiar examples.

There is growing interest in how electronic properties could be controlled by placing a material in a dark electromagnetic cavity that resonates with an electronic transition in that material. In this scenario, an external field is not applied to the material. Rather, interactions occur via quantum vacuum fluctuations within the cavity.

Holy Grail

“The Holy Grail of cavity materials research is to alter the properties of complex materials by engineering the electromagnetic environment,” explains the team – which includes Itai Keren, Tatiana Webb and Dmitri Basov at Columbia University in the US.

They created an optical cavity from a small slab of hexagonal boron nitride. This was interfaced with a slab of κ-ET, which is an organic low-temperature superconductor. The cavity was designed to resonate with an infrared transition in κ-ET involving the vibrational stretching of carbon–carbon bonds.

Hexagonal boron nitride was chosen because it is a hyperbolic van der Waals material. Van der Waals materials are stacks of atomically thin layers. Atoms are strongly bound within each layer, but the layers are only weakly bound to each other by the van der Waals force. The gaps between layers can act as waveguides, confining light that bounces back and forth within the slab. As a result, the slab behaves like an optical cavity with an isofrequency surface that is a hyperboloid in momentum space. Such a cavity supports a large number of modes and vacuum fluctuations, which enhances interactions with the superconductor.
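For context, the hyperbolic behaviour follows from the textbook dispersion relation for extraordinary waves in a uniaxial medium (a standard expression, not a formula quoted from the paper). Writing \varepsilon_t for the in-plane permittivity and \varepsilon_z for the out-of-plane one,

\frac{k_x^2 + k_y^2}{\varepsilon_z} + \frac{k_z^2}{\varepsilon_t} = \frac{\omega^2}{c^2},

and when \varepsilon_t\,\varepsilon_z < 0 – as happens in the reststrahlen bands of hexagonal boron nitride – the constant-frequency surface opens from a closed ellipsoid into a hyperboloid. Arbitrarily large wavevectors then exist at a single frequency, which is why such a cavity can host a very large number of confined modes.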

Superfluid suppression

The researchers found that the presence of the cavity caused a strong suppression of superfluid density in κ-ET (a superconductor can be thought of as a superfluid of charged particles). The team mapped the superfluid density using magnetic force microscopy. This involved placing a tiny magnetic tip near to the surface of the superconductor. The magnetic field of the tip cannot penetrate into the superconductor (the Meissner effect) and this results in a force on the tip that is related to the superfluid density. They found that the density dropped by as much as 50% near the cavity interface.

The team also investigated the optical properties of the cavity using scattering-type scanning near-field optical microscopy (s-SNOM). This involves firing tightly focused laser light at an atomic force microscope (AFM) tip that is tapping on the surface of the cavity. The scattered light is processed to reveal the near-field component of light from just the region of the cavity below the tip.

The tapping tip creates phonon polaritons in the cavity, which are particle-like excitations that couple lattice vibrations to light. Analysing the near-field light across the cavity confirmed that the carbon stretching mode of κ-ET is coupled to the cavity. Calculations done by the team suggest that cavity coupling reduces the amplitude of the stretching mode vibrations.

Physicists know that superconductivity can arise from interactions between electrons and phonons (lattice vibrations), so it is possible that the reduction in superfluid density is related to the suppression of stretching-mode vibrations. This, however, is not certain because κ-ET is an unconventional superconductor, which means that physicists do not understand the mechanism that causes its superconductivity. Further experiments could therefore shed light on the mysteries of unconventional superconductors.

“We are confident that our experiments will prompt further theoretical pursuits,” the team tells Physics World. The researchers also believe that practical applications could be possible. “Our work shows a new path towards the manipulation of superconducting properties.”

The research is described in Nature.

Chernobyl at 40: physics, politics and the nuclear debate today

On 26 April 2026, it will be 40 years since the explosion at Unit 4 of the Chernobyl Nuclear Power Plant – the worst nuclear accident the world has known. In the early hours of 26 April 1986, a badly designed reactor, operated under intense pressure during a safety test, ran out of control. A powerful explosion and prolonged fire followed, releasing radioactive material across Ukraine, Belarus and Russia, with smaller quantities spreading across Europe.

In this episode of Physics World Stories, host Andrew Glester speaks with Jim Smith, an environmental physicist at the University of Portsmouth. Smith began his academic life studying astrophysics, but always had an interest in environmental issues. His PhD in applied mathematics at Liverpool focused on modelling how radioactive material from Chernobyl was transported through the atmosphere and deposited as far away as the Lake District in north-western England.

Smith recounts his visits to the abandoned Chernobyl plant and the 1000-square-mile exclusion zone, now home to roaming wolves and other thriving wildlife. He wants a rational debate about the relative risks, arguing that the accident’s social and economic consequences have significantly outweighed the long-term impacts of radiation itself.

The discussion ranges from the politics of nuclear energy and the hierarchical culture of the Soviet system, to lessons later applied during the Fukushima accident. Smith makes the case for nuclear power as a vital complement to renewables.

He also shares the story behind the Chernobyl Spirit Company – a social enterprise he has launched with Ukrainian colleagues, producing safe, high-quality spirits to support Ukrainian communities. Listen to find out whether Andrew Glester dared to try one.

LHCb upgrade: CERN collaboration responds to UK funding cut

Later this year, CERN’s Large Hadron Collider (LHC) and its huge experiments will shut down for the High-Luminosity upgrade. When complete in 2030, the particle-collision rate in the LHC will be increased by a factor of 10 and the experiments will be upgraded so that they can better capture and analyse the results of these collisions. This will allow physicists to study particle interactions at unprecedented precision and could even reveal new physics beyond the Standard Model.

Earlier this year, however, the UK government announced that it will no longer fund the upgrade of the LHCb experiment on the LHC, which is run by a collaboration of more than 1700 physicists worldwide. The UK had promised to contribute about £50 million to the upgrade – which is a significant chunk of the overall cost.

In this episode of the Physics World Weekly podcast I am in conversation with the particle physicist Tim Gershon, who is based at the UK’s University of Warwick. Gershon is spokesperson-elect for the LHCb collaboration and is playing a leading role in the upgrade.

Gershon explains that UK participation and leadership has been crucial for the success of LHCb and cautions that the future of the experiment and the future of UK particle physics have been imperilled by the funding cut.

We also chat about recent discoveries made by LHCb and look forward to what new physics the experiment could find after the upgrade.

Read-out of Majorana qubits reveals their hidden nature

Quantum computers could solve problems that are out of reach for today’s classical machines. However, the quantum states they rely on are prone to decohering – that is, losing their quantum information due to local noise. One possible way around this is to use quantum bits (qubits) constructed from quasiparticle states known as Majorana zero modes (MZMs) that are protected from this noise. But there’s a catch. To perform computations, you need to be able to measure, or read out, the states of your qubits. How do you do that in a system that is inherently protected from its environment?

Scientists at QuTech in the Netherlands, together with researchers from the Madrid Institute of Materials Science (ICMM) in Spain, say they may have found an answer. By measuring a property known as quantum capacitance, they report that they have read out the parity of their MZM system, backing up an earlier readout demonstration from a team at Microsoft Quantum Hardware on a different Majorana platform.

Measuring parity

The QuTech/ICMM researchers generated their MZMs across two quantum dots – semiconductor structures that can confine electrons – connected by a superconducting nanowire. Electrons can transfer, or tunnel, between the quantum dots through this wire. Majorana-based qubits store their quantum information across these separated MZMs, with both elements in the pair required to encode a single “parity” bit. A pair of parity bits (combining four MZMs in total) forms a qubit.

A parity bit has two possible states. When the two quantum dots are in a superposition of both having one electron and both having none, the system is said to have even parity (a “0”). When the system is instead in superposition of only one of the quantum dots having an electron, the parity is said to be odd (a “1”). Importantly, these even and odd parity states have the same average value of electric charge, meaning that a charge sensor cannot tell them apart.
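In a simplified notation (ours, not the paper’s), writing |n_L n_R⟩ for the occupations of the left and right quantum dots, the two parity states are superpositions of the form

|\mathrm{even}\rangle = \alpha\,|00\rangle + \beta\,|11\rangle, \qquad |\mathrm{odd}\rangle = \gamma\,|01\rangle + \delta\,|10\rangle .

Both can carry the same average charge, which is why an ordinary charge sensor cannot tell them apart and a parity-sensitive probe such as the quantum capacitance is needed.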

The key to measuring parity lies in the electrons’ behaviour. In the even-parity state, an even number of electrons can pair up and enter the superconductor together as a Cooper pair. In the odd-parity state, however, the lone electron lacks a partner and cannot flow through the wire in the same way. By measuring the charge flowing into the superconductor, the team was therefore able to determine the parity state. The researchers also determined that the lifetimes of these states were in the millisecond range, which they say is promising for quantum computations.

Competing platforms

According to Nick van Loo, a quantum engineer at QuTech and the first author of a Nature paper on the work, similar chains of quantum dots (known as Kitaev chains) are a promising platform for realizing Majorana modes because each element in the chain can be controlled and tuned. This control, he adds, makes results easier to reproduce, helping to overcome some of the interpretation challenges that have affected Majorana results over the past decade.

Van Loo also stresses that his team uses a different architecture from the Microsoft Quantum Hardware team to create its Majorana modes – one that he says allows for better tuneability as well as easier and more scalable readout. He adds that this architecture also allows an independent charge sensor to be used to confirm the MZM’s charge neutrality.

In response, Chetan Nayak, a technical fellow at Microsoft Quantum Hardware, says it is important that the QuTech/ICMM team independently measured a millisecond time scale for parity fluctuations. However, he notes that the team did not extend this parity lifetime and adds that the so-called “poor man’s Majoranas” used in this research do not constitute a scalable platform for topological qubits, as they lack topological protection.

Seeking full protection

Van Loo acknowledges that the team’s two-site Kitaev chain is not topologically protected. However, he says the degree of protection is expected to improve exponentially as more sites are added. In the near term, he and his colleagues hope to operate their qubit by inducing rotations through coupling pairs of Majorana modes. Once these hurdles are overcome, he tells Physics World that “one major milestone will still remain: demonstrating braiding of Majorana modes to establish their non-Abelian exchange statistics”.

Jay Deep Sau, a physicist at the University of Maryland, US, who was not involved in either the QuTech/ICMM or the Microsoft Quantum Hardware research, describes this as the first measurement of fermion parity in the smallest quantum dot chain platform for creating MZMs. Compared to the Microsoft result, Sau agrees that the quantum dot chain is more controlled. However, he is sceptical that this control will apply to larger chains, casting doubt on whether this is truly a scalable way of realizing MZMs. The significance of these results, he adds, will only be apparent if the quantum dot chain approach can demonstrate a coherent qubit before its semiconductor nanowire counterpart.

Quantum-secure Internet expands to citywide scale

Researchers in China have distributed device-independent quantum cryptographic keys over city-scale distances for the first time – a significant improvement on the previous record of a few hundred metres. Led by Jian-Wei Pan of the University of Science and Technology of China (USTC), part of the Chinese Academy of Sciences (CAS), the researchers say the achievement brings the world a step closer to a completely quantum-secure Internet.

Many of us use Internet encryption almost daily, for example when transferring sensitive information such as bank details. Today’s encryption techniques use keys based on mathematical algorithms, and classical supercomputers cannot crack them in any practical amount of time. Powerful quantum computers could change this, however, which has driven researchers to explore potential alternatives.

One such alternative, known as quantum key distribution (QKD), encrypts information by exploiting the quantum properties of photons. The appeal of this approach is that when quantum-entangled photons transmit a key between two parties, any attempted hack by a third party will be easy to detect because their intervention will disturb the entanglement.

While the basic form of QKD enables information to be transmitted securely, it does have some weak points. One of them is that a malicious third party could steal the key by hacking the devices the sender and/or receiver is using.

A more advanced version of QKD is device-independent QKD (DI-QKD). As its name suggests, its security does not depend on trusting the inner workings of the devices used. Instead, it derives the security of its key directly from fundamental quantum phenomena – namely, the violation of conditions known as Bell’s inequalities. Establishing this violation ensures that a third party has not interfered with the process employed to generate the secure key.
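For reference, the Bell test most often used in DI-QKD security arguments is the CHSH inequality (a textbook result rather than a detail of this particular experiment). For measurement settings a, a' and b, b' on the two sides, any local, pre-programmed devices must satisfy

S = \left| E(a,b) + E(a,b') + E(a',b) - E(a',b') \right| \le 2 ,

whereas quantum mechanics allows values up to 2\sqrt{2} \approx 2.83. Measuring S > 2 therefore certifies genuine entanglement – and hence the secrecy of the key – without any assumptions about how the devices work internally.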

The main drawback of DI-QKD is that it is extremely technically demanding, requiring high-quality entanglement and an efficient means of detecting it. “Until now, this has only been possible over short distances – 700 m at best – and in laboratory-based proof-of-principle experiments,” says Pan.

High-fidelity entanglement over 11 km of fibre

In the latest work, Pan and colleagues constructed two quantum nodes consisting of single trapped atoms. Each node was equipped with four high-numerical-aperture lenses to efficiently collect single photons emitted by the atoms. These photons have a wavelength of 780 nm, which is not optimal for transmission through optical fibres. The team therefore used a process known as quantum frequency conversion to shift the emitted photons to a longer wavelength of 1315 nm, which is less prone to optical loss in fibres.
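As a rough consistency check (assuming, purely for illustration, that the conversion is a single difference-frequency stage – the article does not spell out the scheme), photon-energy conservation fixes the pump wavelength needed to bridge the two colours:

\frac{1}{\lambda_{\mathrm{pump}}} = \frac{1}{780~\mathrm{nm}} - \frac{1}{1315~\mathrm{nm}} \;\Rightarrow\; \lambda_{\mathrm{pump}} \approx 1.9~\mu\mathrm{m},

placing the pump near 1.9 μm and the converted photons in the low-loss O-band of standard optical fibre.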

By interfering and detecting a single photon, the team was able to generate what’s known as heralded entanglement between the two quantum nodes – something Pan describes as “an essential resource” for DI-QKD. While significant progress has been made in extending the entangling distance for qubits of this type, Pan notes that these advances have been hampered by low fidelities and low entangling rates.

To address this, Pan and his colleagues employed a single-photon-based entangling scheme that boosts remote entangling probability by more than two orders of magnitude. They also placed their atoms in highly excited Rydberg states to generate single photons with high purity and low noise. “It is these innovations that allow us to achieve high-fidelity and high-rate entanglement over a long distance,” Pan explains.

Using this setup, the researchers explored the feasibility of performing DI-QKD between two entangled atoms linked by optical fibres up to 100 km in length. In this study, which is detailed in Science, they demonstrated practical DI-QKD under finite-key security over 11 km of fibre.

Metropolitan-scale quantum key distribution

Based on the technologies they developed, Pan thinks it could now be possible to implement DI-QKD over metropolitan scales with existing optical fibres. Such a system could provide encrypted communication with the highest level of physical security, but Pan notes that it could also have other applications. For example, high-fidelity entanglement could also serve as a fundamental building block for constructing quantum repeaters and scaling up quantum networks.

Carlos Sabín, a physicist at the Autonomous University of Madrid (UAM), Spain, who was not involved in the study, says that while the work is an important step, there is still a long way to go before we are able to perform completely secure and error-free quantum key distribution on an inter-city scale. “This is because quantum entanglement is an inherently fragile property,” Sabín explains. “As light travels through the fibre, small losses accumulate and the entanglement generated is of poorer quality, which translates into higher error rates in the cryptographic keys generated. Indeed, the results of the experiment show that errors in the key range from 3% when the distance is 11 km to more than 7% for 100 km.”

Pan and colleagues now plan to add more atoms to each node and to use techniques like tweezer arrays to further enhance both the entangling rate and the secure key rate over longer distances. “We are aiming for 1000 km, over which we hope to incorporate quantum repeaters,” Pan tells Physics World. “By using processes like ‘entanglement swapping’ to connect a series of such two-node entanglement, we anticipate that we will be able to maintain a similar entangling rate for much longer distances.”

Todd McNutt: how an AI software solution enables creation of the best possible radiation treatment plans

Todd McNutt is a radiation oncology physicist at Johns Hopkins University in the US and the co-founder of Oncospace, where he led the development of an artificial intelligence (AI)-powered tool that simultaneously accelerates radiation planning and elevates plan quality and consistency. The software, now rebranded as Plan AI and available from US manufacturer Sun Nuclear, draws upon data from thousands of previous radiotherapy treatments to predict the lowest possible dose to healthy tissues for each new patient. Treatment planners then use this information to define goals that streamline and automate the creation of a best achievable plan.

Physics World’s Tami Freeman spoke with McNutt about the evolution of Oncospace and the benefits that Plan AI brings to radiotherapy patients and cancer treatment centres.

Can you describe how the Oncospace project began?

Back in 2007, several groups were discussing how we could better use clinical data for discovery and knowledge generation. I had several meetings with folks at Johns Hopkins, including Alex Szalay, who helped develop the Sloan Digital Sky Survey. He built a large database of galaxies and stars and it became a huge research platform for both amateur and professional astronomers.

From that discussion, and other initiatives, we looked at moving towards structured data collection for patients in the clinical environment. By marrying these data with radiation treatment plans we could study how dose distributions across the anatomy affect patient outcomes. And we took that opportunity to build a database for radiotherapy.

What inspired the transition from academic research to founding the company Oncospace Inc in 2019?

After populating the database with data from many patients, we could examine which anatomic features impact our ability to generate a plan that minimizes radiation dose to normal tissues while treating target volumes as best as possible. We came up with a feature set that characterized the relationships between normal anatomy and targets, as well as target complexity.

This early work allowed us to predict expected doses from these shape-relationship features, and it worked well. At that point, we knew we could tap into this database and generate a prediction that could help create treatment plans for new patients. We thought of this as personalized medicine: for the first time, we could see the level of treatment plan quality that we could achieve for a specific patient.

I thought that this was useful commercially and that we should get it out to other clinics. Praveen Sinha, who I’d known from my previous work at Philips and now leads Sun Nuclear’s software business line, asked if I wanted to create a startup. The timing was right for both of us and I had a team here ready to go, so we went ahead and did it. With his knowledge of startups and my knowledge of what we wanted to achieve, we had perfect timing and a perfect group to work with.

Plan AI enables both predictive planning and peer review. How do these functions work?

The idea behind predictive planning is that, for a given patient, I can predict the expected dose that I should be able to achieve for them.

Plan AI software
Dose–volume histograms

Treatment planning involves specifying dosimetric objectives to the planning system and asking it to optimize radiation delivery to meet these. But nobody really knows what the right objectives even are – it is just a trial-and-error process. Plan AI’s prediction provides a rational set of objectives for plan optimization, allowing the planning system’s algorithm to move towards a good solution and making treatment planning an easier problem to solve.

Peer review involves a peer physician looking at every treatment plan to evaluate it for quality and safety. But again, people don’t really know the level of quality you can generate; it depends on the patient’s anatomy. Providing a predicted dose with clinical dose goals enables a rapid review to see whether it is a high-quality plan or not.

In the past we looked at simple things like whether a contour is missing slices or contains discontinuities, and Plan AI checks for this, but you can do far more with AI. For example, you could look at all the contoured rectums in the system and predict whether a new contour extends too far into the sigmoid colon – if it does, it may be mis-contoured. We have research software that can flag such potential anomalies so they don’t get overlooked.

The Plan AI models are developed using Oncospace’s database of previous treatments; can you describe this data lake?

When we first started, we developed a large SQL database containing all the shape-relationship features and dosimetry features. The SQL language is ideal for being able to query and sift through the data, but when the company was formed, we recognized that there was some age to that technology.

So for the Plan AI data lake, we extracted all the different shape-relationship and shape-complexity features and put them into a Parquet database in the cloud. This made the data lake much more amenable to applying machine learning algorithms to it. The SQL data lake at Johns Hopkins is maintained separately and primarily used to investigate toxicity predictions and spatial dose patterns. But for Plan AI, the models are fixed and streamlined for the specific task of dose prediction.
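As a rough illustration of what such a columnar feature store can look like – the column names, values and file paths below are hypothetical placeholders, not Oncospace’s actual schema – per-structure features can be written to and read back from Parquet with standard Python tooling:

import pandas as pd

# Hypothetical rows: one per (patient, organ-at-risk), holding geometric
# shape-relationship features and the dose actually achieved in the plan
features = pd.DataFrame({
    "patient_id":            ["p001", "p001", "p002"],
    "region":                ["head_neck", "head_neck", "pelvis_male"],
    "structure":             ["parotid_l", "spinal_cord", "rectum"],
    "min_dist_to_target_mm": [4.2, 11.8, 2.7],
    "overlap_volume_cc":     [0.0, 0.0, 3.1],
    "target_volume_cc":      [112.0, 112.0, 58.4],
    "mean_dose_gy":          [21.5, 30.2, 45.0],
})

# Parquet is columnar, compressed and schema-aware, so training jobs can
# later pull back only the columns they need
features.to_parquet("features_head_neck.parquet", index=False)
cols = ["min_dist_to_target_mm", "overlap_volume_cc", "target_volume_cc", "mean_dose_gy"]
training = pd.read_parquet("features_head_neck.parquet", columns=cols)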

What does the model training process entail?

One of the first tasks was to curate the data, using the AAPM’s standardized structure-naming model. Our data scientist Julie Shade wrote some tools for automatic name mapping and target identification; that helped us process much larger amounts of data for the model.

Once we had all the shape-relationship and shape-complexity features and all the doses, we trained the models by anatomical region. We have FDA-approved models for the male and female pelvis, thorax, abdomen and head-and-neck. For each of these, we predict the doses for every organ-at-risk. Then we used five-fold cross-validation to make sure that the predictions were good on an internal data set.
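A minimal sketch of per-region training with five-fold validation might look like the following in Python – the model choice and feature arrays are illustrative assumptions, not the FDA-approved models themselves:

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import KFold

def validate_region_model(X, y, n_splits=5):
    """Five-fold validation of a dose predictor for one anatomical region
    and one organ-at-risk; returns the mean absolute error across folds."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    fold_errors = []
    for train_idx, test_idx in kf.split(X):
        model = GradientBoostingRegressor()
        model.fit(X[train_idx], y[train_idx])
        predictions = model.predict(X[test_idx])
        fold_errors.append(np.mean(np.abs(predictions - y[test_idx])))
    return float(np.mean(fold_errors))

# X: shape-relationship / shape-complexity features for one region's patients
# y: achieved dose metric for one organ-at-risk (e.g. mean parotid dose)
# mae = validate_region_model(X_head_neck, y_parotid_mean_dose)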

We also performed external validation at institutions including Johns Hopkins and Montefiore hospitals. We created predicted plans from recent treatment plans that had been evaluated by physicians. For almost all cases, both plan quality and plan efficiency were improved with Plan AI.

One aspect of this training is that whenever we drive optimization via predictive planning we want to push towards the best achievable dose. Regular machine learning predicts an expected, or average, dose across all patients. But you never want to drive a treatment plan towards the average dose, because then every plan you generate will be happy being average. Our model predicts both the average and the best achievable dose, and drives plan optimization towards the best achievable.
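A generic way to predict a “best achievable” dose rather than just the average is quantile regression – fitting a model to a low quantile of the doses achieved in past plans as well as to the mean. The sketch below illustrates that idea only; it is an assumption about one possible approach, not Oncospace’s actual model:

from sklearn.ensemble import GradientBoostingRegressor

# Expected (average) dose across anatomically similar past patients
mean_model = GradientBoostingRegressor()

# "Best achievable" dose: here taken as the 10th percentile of past achieved
# doses, since a lower dose to the organ-at-risk is better
best_model = GradientBoostingRegressor(loss="quantile", alpha=0.10)

# mean_model.fit(X_train, y_dose)
# best_model.fit(X_train, y_dose)
# Plan optimization is then driven towards best_model.predict(x_new_patient)
# rather than mean_model.predict(x_new_patient)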

When implementing new technology in the clinic, it’s important to fit into the existing treatment workflow. How clinic-ready are these AI tools?

Radiation therapy is protocol-driven: we know what technique we’re going to use to treat and what our clinical dose goals are for different structures. What we don’t know is the patient-specific part of that. So for each anatomical region, we built models out of a wide range of treatment protocols, with many different types of patients, to ensure that the same prediction model works for any protocol. This means a user can use any protocol for treatment and the predictions will work; they don’t have to retrain anything. It’s ready to go out of the box: there’s a library of protocols to start with, and you can change protocols as needed for your own clinic.

The other part of being clinic-ready is aligning with the way that planning is currently performed, which is using dose–volume histograms. Treatment plans are optimized by manipulating these dose objectives, and that’s exactly what we predict. So users aren’t changing the whole paradigm of how planners operate. They still use their treatment planning system (TPS) – we just put the objectives in there. Basically, a TPS script sends the patient’s CT and contours to the cloud, where Plan AI makes the predictions. The TPS then pulls back in the objectives built from the models, based on this specific patient’s anatomy. The TPS runs the optimization and, as a last step, can send the plan back to Plan AI to check that it fits within the best achievable predictions.
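In pseudocode, that round trip could be sketched as follows – the object and method names are hypothetical placeholders, not the scripting interface of any particular TPS or of Plan AI itself:

def plan_with_prediction(patient, tps, cloud):
    # 1. A TPS script sends the patient's CT and structure contours to the cloud
    case_id = cloud.upload(ct=patient.ct, contours=patient.contours)

    # 2. The cloud service computes shape-relationship features and returns
    #    predicted, patient-specific dose-volume objectives for each organ-at-risk
    objectives = cloud.get_predicted_objectives(case_id)

    # 3. The TPS runs its own optimizer against those objectives
    plan = tps.optimize(patient, objectives)

    # 4. The finished plan goes back to be checked against the best-achievable prediction
    report = cloud.review_plan(case_id, plan.dose_volume_histograms)
    return plan, report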

Did you encounter any challenges bringing AI into a clinical setting?

Interestingly, the challenges aren’t technical; they are more human-related. One of the more systemic challenges is data security when using medical data for training. A nice thing about our system is that the features we generate from treatment plans are just mathematical shape-relationship features and don’t involve a lot of identifiable information.

AI has been used in radiation therapy for image contouring and auto-segmentation, and early efforts were not so good. So, there’s always a good, healthy scepticism. But once you show people that it works and works well, this can be overcome. I have seen some people worried about job security and AI taking over. We are medical professionals designing a treatment plan to care for a patient and there’s a lot of pride and art in that – if you automate that, it takes away some of this pride and art.

I tell people that if we automate the easier things, then they can spend their quality time on the more difficult and challenging cases, because that’s where their talent might be needed more.

Do you have any advice for clinics looking to adopt AI-driven planning?

Introduce it as an assistant, not as a solution. You want people that already know what they’re doing to be able to use their knowledge more efficiently. We want to make their jobs easier and show them that it also improves quality.

Dosimetrists, for example, create a plan and work hard getting the dose down – and then the physician looks at it and suggests that they can do better. Predictive planning gives them confidence that they are right and takes the uncertainty out of the physician review process. And once you’ve gained that level of confidence, you can start using it for adaptive planning or other technologies.

Where do you see predictive modelling and AI in oncology in five years from now?

Right now, there’s been a lot of data collected, but we want that data to advance and learn. Having multiple centres adding to this pool of knowledge and being able to continually update those models from new, broader data sets could be of huge value.

In terms of patient outcomes, we’ve done a lot of the work looking at how the spatial pattern of dose impacts toxicity and outcomes. This is part of the research being performed at Johns Hopkins and still in discovery mode. But down the road, some of these predictions of normal tissue outcomes could be fed into the planning process to help reduce toxicity at the patient level.

Finally, what’s been the most rewarding part of this journey for you?

During my prior experience building treatment planning systems, the biggest problem was always that nobody knew what the objective was. Nobody knew how to tell the system: “this is the dose I expect to receive, now optimize to get it for me”, because you didn’t know what you could do. For any given patient, you could ask for too much or too little. Now, for the first time, I argue that we actually know what our objective is in our treatment planning.

This levels the playing field between different environments, different countries, or even different dosimetrists with different levels of experience. The Plan AI tool brings all this to a consistent state and enables high-quality, efficient planning everywhere. We can provide this predictive planning tool to clinics around the world. Now we just have to get everybody using it.

 

The future of particle physics: what can the past teach us?

In his opening remarks to the 4th International Symposium on the History of Particle Physics, Chris Llewellyn Smith – who was a director-general of CERN in the 1990s – suggested participants should speak about “what’s not written in the journals”, including “mistakes, dead-ends and problems with getting funding”. Doing so, he said, would “provide insight into the way science really progresses”.

The symposium was not your usual science conference. Held last November at CERN, it took place inside the lab’s 400-seat main auditorium, which has been the venue for many historic announcements, including the discovery of the Higgs boson. Its brown-beige walls are covered with lively designs by the Finnish artist Ilona Rista, suggesting to me the aftermath of a collision of high-energy bar codes.


The focus of the meeting was the development of particle physics in the 1980s and 1990s – a period that saw the construction and operation of various important accelerators and detectors. At CERN, these included the UA1 and UA2 experiments at the Super Proton Synchrotron, where the W and Z bosons were discovered. Later, there was the Large Electron-Positron Collider (LEP), which came online in 1989, and the Large Hadron Collider (LHC), approved five years later.

Delegates also heard about the opening of various accelerators in the US during those two decades, including two at the Stanford Linear Accelerator Center – the Positron-Electron Project in 1980 and the Stanford Linear Collider in 1989. Most famous of all was the start-up of the Tevatron at Fermilab in 1983. Over at Dubna in the former Soviet Union, meanwhile, scientists built the Nuclotron, a superconducting synchrotron, which opened in 1992.

Conference speakers covered unfinished machines of the era as well. The US cancelled two proton–proton facilities – ISABELLE in 1983 and the Superconducting Super Collider (SSC) a decade later. The Soviet Union, meanwhile, abandoned the multi-TeV proton–proton collider UNK a few years later, though news has recently emerged that Russia might revive the project.

Several speakers recounted the discovery of the W and Z particles at CERN in 1983 and the discovery of the top quark at Fermilab in 1995. Others addressed the strange fact that fewer neutrinos from the Sun had been detected than theory suggested. The “solar-neutrino problem”, as it was known, was finally resolved by the discovery of neutrino oscillations – seen in atmospheric neutrinos by Takaaki Kajita’s Super-Kamiokande team in 1998 and in solar neutrinos by Art McDonald’s SNO experiment – for which the two shared the 2015 Nobel Prize for Physics.

The conference also addressed unsuccessful searches for proton decay, axions, magnetic monopoles, the Higgs boson, supersymmetric particles and other targets. Other speakers described projects with highly positive outcomes, such as the advent of particle cosmology, or what some have jokingly dubbed “the heavenly lab”. The development of string theory, grand unified theories and perturbative quantum chromodynamics was tackled too.

In an exchange in the question-and-answer session after one talk, the Greek physicist Kostas Gavroglu referred to many such quests as “failures”. That remark prompted the Australian-born US theoretical physicist Helen Quinn to say she preferred the term “falling forward”; such failures, she said, were instances of “I tried this, and it didn’t work so I tried that”.

In relating his work on detecting gravitational waves, the US Nobel-prize-winning physicist Barry Barish said he felt his charge was not to celebrate the importance of his discoveries nor the ingenuity of the route he took. Instead, Barish explained, his job was to answer the much more informal question: “What made me do what?”.

His point was illustrated by the US theorist Alan Guth, who described the very human and serendipitous path he took to working on cosmic inflation – the super-fast expansion of the universe just after the Big Bang. When he started, Guth said, “all the ingredients were already invented”. But the startling idea of inflation hinged on accidental meetings, chance conversations, unexpected visits, a restricted word count for Physical Review Letters, competitions, insecurities and “spectacular realizations” coalescing.

Wider world

Another theme that arose in the conference was that science does not unfold inside its own bubble but can have extensive and immediate impacts on the world around it. Two speakers, for instance, recounted the invention of the World Wide Web at CERN in the late 1980s. It’s fair to say that no other discovery by a single individual – Tim Berners-Lee – has so radically and quickly transformed the world.

The growing role of international politics in promoting and protecting projects was mentioned too, with various speakers explaining how high-level political negotiations enabled physicists to work at institutions and experiments in other nations. The Polish physicist Agnieszka Zalewska, for example, described her country’s path to membership in CERN, while Russian-born US physicist Vladimir Shiltsev spoke about the “diaspora” of Russian particle physicists after the fall of the Soviet Union in 1991.


Sometimes politics created destructive interference. The US physicist, historian and author Michael Riordan described how the US’s determination to “go it alone” to outcompete Europe in high-energy physics was a major factor in bringing about the opposite: the termination of the SSC in 1993. As a result of that project’s controversial closure, the centre of gravity of high-energy physics shifted to Europe.

Indeed, contemporary politics occasionally hit the conference itself in incongruous and ironic ways. Two US physicists, for example, were denied permission to attend because budgets had been cut and travel restrictions increased. In the end, one took personal time off and paid his own way, leaving his affiliation off the programme.

Before the conference, some people complained that the organizers hadn’t paid enough attention to physicists who’d worked in the Soviet Union but were from occupied republics. Several speakers addressed this shortcoming by mentioning people like Gersh Budker (1918–1977). A Ukrainian-born physicist who worked and died in the Soviet Union, Budker was nominated for a Nobel Prize (1957) and even has a street named after him at CERN. Unmentioned, though, was that Budker was Jewish and that his father was killed by Ukrainian nationalists in a pogrom.

On the final day of the conference, which just happened to be World Science Day for Peace and Development, CERN mounted a public screening of the 2025 documentary film The Peace Particle. Directed by Alex Kiehl, much of it was about CERN’s internationalism, with a poster for the film describing the lab as “Mankind’s biggest experiment…science for peace in a divided world”.

But in the Q&A afterwards, some audience members criticized CERN for allegedly whitewashing Russia for its invasion of Ukraine and Israel for genocide. Those onstage defended CERN on the grounds of its desire to promote internationalism.

The critical point

The keynote speaker of the conference was John Krige, a science historian from Georgia Tech who has worked on a three-volume history of CERN. Those who launched the lab, Krige reminded the audience, had radical “scientific, political and cultural aspirations” for the institution. Their dream was that CERN wouldn’t just revive European science and promote regional collaborative efforts after the Second World War, but potentially improve the global world order too.

Krige went on to quote one CERN founder, who’d said that international science facilities such as CERN would be “one of the best ways of saving Western civilization”. Recent events, however, have shown just how fragile those ambitions are. Alluding to CERN’s Future Circular Collider and other possible projects, Llewellyn Smith ended his closing remarks with a warning.

“The perennial hope that the next big high-energy project will be genuinely global,” he said, “seems to be receding over the horizon due to the polarization of world politics”.
