The heads of university physics departments in the UK have published an open letter expressing their “deep concern” about funding changes announced late last year by UK Research and Innovation (UKRI), the umbrella organization for the UK’s research councils.
Addressed to science minister Patrick Vallance, the letter says the cuts are causing “reputational risk” and calls for “strategic clarity and stability” to ensure that UK physics can thrive.
It has so far been signed by 58 people who represent 45 different universities, including Birmingham, Bristol, Cambridge, Durham, Imperial College, Liverpool, Manchester and Oxford.
The letter says that the changes at UKRI “risk undermining science’s fundamental role in improving our prosperity, health and quality of life, as well as delivering sustainable growth through innovation, productivity and scientific leadership”.
The signatories warn that the UK’s international standing in physics is “a strategic asset” and that areas such as particle physics, astronomy and nuclear physics are “especially important”.
Raising concerns
The decision by the heads of physics to write to Vallance comes after UKRI announced in December that it will adjust how it allocates government funding for scientific research and infrastructure.
The Science and Technology Facilities Council (STFC), which is part of UKRI, stated that projects would need to be cut given inflation, rising energy costs and “unfavourable movements in foreign exchange rates” that have increased the STFC’s costs by more than £50m a year.
The STFC noted that it would need to reduce spending from its core budget by at least 30% relative to 2024/2025 levels, while also cutting the number of projects financed by its infrastructure fund.
The council has already said two UK national facilities – the Relativistic Ultrafast Electron Diffraction and Imaging facility and a mass spectrometry centre dubbed C‑MASS – will now not be prioritized.
In addition, two international particle-physics projects will not be supported: a UK-led upgrade to the LHCb experiment at CERN and a contribution to the Electron-Ion Collider, which is currently being built at the Brookhaven National Laboratory.
Philip Burrows, director of the John Adams Institute for Accelerator Science at the University of Oxford, who is one of the signatories of the letter, told Physics World that the cuts are “like buying a Formula-1 car but not being able to afford the driver”.
Burrows admits that the STFC has been hit “particularly hard” by its flat-cash settlement, given that a large fraction of its expenditure goes on the UK’s subscriptions to international facilities and on operating the UK’s flagship national facilities.
But because most of the rest of the STFC’s budget supports scientists to do research at those facilities, he is concerned that the funding cuts will fall disproportionately on the science programme.
“Constraining these areas risks weakening the very talent pipeline on which the UK’s innovation economy depends,” the letter states. “Fundamental physics also delivers substantial public engagement and cultural impact, strengthening public support for science and reinforcing the UK’s reputation as a global scientific leader.”
The signatories also say they are “particularly concerned” about the UK’s capacity to lead the scientific exploitation of major international projects. “An abrupt pause in funding for key international science programmes risks damaging UK researchers’ competitive advantage into the 2040s,” they note.
The letter now calls on the government to work with UKRI and STFC to “stabilize” curiosity-driven grants for physics within STFC “at a minimum of flat funding in real terms” as well as protect postdocs, students and technicians from the cuts.
It also urges the UK to develop a long-term strategy for infrastructure and calls on the government to address facilities cost pressures through “dedicated and equitable mechanisms so that external shocks do not singularly erode the UK’s research base in STFC-funded research areas”.
The news comes as Michele Dougherty today formally stepped down from her role as IOP president. Dougherty, who also holds the position of executive chair of the STFC, had previously stepped back from presidential duties on 26 January due to a conflict of interest.
Paul Howarth, who has been IOP president-elect since September, will now become IOP president.
The Earth’s magnetic poles have reversed 540 times over the past 170 million years. Usually, these reversals are relatively speedy in geological terms, taking around 10,000 years to complete. Now, however, scientists in the US, France and Japan have found evidence of much slower reversals deep in Earth’s geophysical past. Their findings could have important implications for our understanding of Earth’s climate and evolutionary history.
Scientists think the Earth’s magnetic field arises from a dynamo effect created by molten metal circulating inside the planet’s outer core. Its consequences include the bubble-like magnetosphere, which shields us from the solar wind and cosmic radiation that would otherwise erode our atmosphere.
From time to time, this field weakens, and the Earth’s magnetic north and south poles switch places. This is known as a geomagnetic reversal, and we know about it because certain types of terrestrial rocks and marine sediment cores contain evidence of past reversals. Judging from this evidence, reversals usually take a few thousand years, during which time the poles drift before settling again on opposite sides of the globe.
Looking into the past
Researchers led by Yuhji Yamamoto of Kochi University, Japan, and Peter Lippert of the University of Utah, US, have now identified two major exceptions to this rule. Drawing on evidence obtained during the Integrated Ocean Drilling Program expedition in 2012, they say that around 40 million years ago, during the Eocene epoch, the Earth experienced two reversals that took 18,000 and 70,000 years.
The team based these findings on cores of sediment extracted off the coast of Newfoundland, Canada, up to 250 metres below the seabed. These cores contain crystals of magnetite, an iron oxide produced by a combination of ancient microorganisms and other natural processes. These magnetite particles align with the polarity of the Earth’s magnetic field at the time the sediments were deposited. Because marine sediments are far less affected by erosion and weathering than sediments onshore, Yamamoto says the information they preserve about past Earth environments – including geomagnetic conditions – is exceptionally clean.
Significance for evolutionary history
The team says the difference between a geomagnetic reversal that takes 10,000 years and one that takes 70,000 years is significant because prolonged intervals of weaker geomagnetic fields would have exposed the Earth to higher amounts of cosmic radiation for longer. The effects on living creatures could have been devastating, says Lippert. As well as higher rates of genetic mutations due to increased radiation, he points out that organisms from bacteria to birds use the Earth’s magnetic field while navigating. “A lower strength field would create sustained pressures on these organisms to adapt,” he says.
If humans had existed at the time of these reversals, the effects on our species could have been similarly profound. “Modern humans (Homo sapiens) are thought to have begun dispersing out of Africa only about 50,000 years ago,” Yamamoto observes. “If a geomagnetic reversal can persist for a period comparable to – or even longer than – this timescale, it implies that the Earth’s environment could undergo substantial and continuous change throughout the entire period of human evolution.”
Although our genetic ancestors dodged that particular bullet, Yamamoto thinks the team’s findings, which are published in Communications Earth & Environment, offer a valuable perspective on how evolution and environmental change could interact in the future. “This period corresponds to an epoch when Earth was far warmer than it is today, and when Greenland is thought to have been a truly ‘green land’,” he explains. “We also know that atmospheric CO₂ concentrations during this era were comparable to levels projected for the end of this century, making it an important ‘climate analogue’ for understanding near‑future climate conditions.”
The discovery could also have more direct implications for future life on Earth. The magnitude of the Earth’s magnetic field has decreased by around 5% in each century since records began. This decrease, combined with the slow drift of our current magnetic North Pole towards Siberia, could indicate that we are in the early stages of a new geomagnetic reversal. Re-evaluating the duration of such reversals is thus not only an issue for geophysicists, Yamamoto says. It’s also an important opportunity to reconsider fundamental questions about how we should coexist with our planet and how we ought to confront a continually changing environment.
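As a purely illustrative aside, the quoted figure of roughly 5% weakening per century compounds as shown in this short Python sketch. Assuming the decline continues at a constant rate is our simplification for the sake of arithmetic, not a forecast from the researchers:

```python
def field_fraction(centuries, decline_per_century=0.05):
    """Fraction of today's field strength remaining after a steady,
    compounding decline (illustrative assumption, not a prediction)."""
    return (1 - decline_per_century) ** centuries

# After ten more centuries of 5%-per-century decline, roughly 60%
# of today's field strength would remain
after_ten_centuries = field_fraction(10)
```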
Motivation for future studies
John Tarduno, a geophysicist at the University of Rochester, US, who was not involved in the study, describes it as “outstanding” work that “documents an exciting discovery bearing on the nature of magnetic shielding through time and the geomagnetic reversal process”. He agrees that reduced shielding could have had biotic effects, and adds that the discovery of long reversal transitions could influence scientific thinking on the statistics of field reversals – including questions of whether the field retains some “memory” of previous events. “This new study will provide motivation to examine reversal transitions at very high resolution,” Tarduno says.
For their next project, Yamamoto and colleagues aim to use sequences of lava flows in Iceland to analyse how the Earth’s magnetic field evolved. Lippert’s team, for its part, will be studying features called geomagnetic excursions that appear in both deep sea and terrestrial sediments. Such excursions are evidence of short-lived, incomplete attempts at field reversals, and Lippert explains that they can be excellent stratigraphic markers, helping scientists correlate records on geological timescales and compare them with samples taken from different parts of the world. “Excursions, like long reversals, can inform our understanding of what ultimately causes a geomagnetic field reversal to start and persist to completion,” he says.
Fusion adopter Debbie Callahan is chief strategy officer at Focused Energy. (Courtesy: Focused Energy)
With the world’s energy demands increasing, and our impact on the climate becoming ever clearer, the search is on for greener, cleaner energy production. That’s why research into fusion energy is undergoing something of a renaissance.
Construction of the International Thermonuclear Experimental Reactor (ITER) in France – the world’s largest fusion experiment – is currently under way, while there are numerous other large-scale facilities and academic research projects too. There has also been a rise in the number of smaller commercial companies joining the race.
One person at the forefront of fusion research is Debbie Callahan – a plasma physicist who spent 35 years working at the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory in the US. She is now chief strategy officer at Focused Energy, a laser-fusion firm based in Germany and California, which is trying to generate energy from the laser-driven fusion of hydrogen isotopes.
Callahan recently talked to Physics World online editor Hamish Johnston about working in the fusion sector, Focused Energy’s research and technology, and the career opportunities available. The following is an edited extract of their conversation, which you can hear in full on the Physics World Weekly podcast.
How does NIF’s approach to fusion differ from that taken by magnetic confinement facilities such as ITER?
To get fusion to happen, you need three elements that we sometimes call the triple product. You need a certain amount of density in your plasma, you need temperature, and you need time. The product of those has to be over a certain value.
Magnetic fusion and inertial fusion are kind of the opposite of each other. In a magnetic fusion system like ITER, you have a low-density plasma, but you hold it for a long time. You do that by using magnetic fields that trap the plasma and keep it from escaping.
In inertial fusion – like at NIF – it’s the opposite. You don’t hold the plasma together at all, it’s only held by its own inertia, and you have a very high density for a short time. In both cases, you can make fusion happen.
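Callahan’s triple product can be sketched numerically. In this illustrative Python snippet, the threshold of about 3 × 10²¹ keV·s/m³ is the commonly quoted Lawson-style figure for deuterium–tritium fuel, and the example plasma parameters are round-number assumptions of ours, not values from the interview:

```python
# Approximate Lawson-style ignition threshold for D-T fuel (assumption)
LAWSON_DT_THRESHOLD = 3e21  # keV * s / m^3

def triple_product(density_m3, temperature_kev, confinement_s):
    """Density x temperature x confinement time."""
    return density_m3 * temperature_kev * confinement_s

def meets_threshold(density_m3, temperature_kev, confinement_s):
    return triple_product(density_m3, temperature_kev, confinement_s) >= LAWSON_DT_THRESHOLD

# Magnetic confinement: low density held for seconds (illustrative numbers)
magnetic_ok = meets_threshold(1e20, 10, 3.0)

# Inertial confinement: enormous density for a fleeting instant
inertial_ok = meets_threshold(1e31, 5, 1e-10)
```

Both routes can clear the same bar by trading density against confinement time, which is the point Callahan makes about the two approaches being “opposites”.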
What is the current state of the art at NIF, in terms of how much energy you have to put in to achieve fusion versus how much you get out?
To date, the best shot at NIF – by which I mean an individual, high-energy laser bombardment of the target capsule – was an experiment in April 2025 that had a target gain of about 4.1. That means they got out 4.1 times the amount of energy that they put in. The incident laser energy for that shot was around two megajoules, so they got out just over eight megajoules.
This is a tremendous accomplishment that’s taken decades to get to. But to make inertial fusion energy successful and use it in a power plant, we need significantly higher gains of more like 50 to 100.
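The gain arithmetic quoted above is simple to check; this is an illustrative sketch (the function name is ours):

```python
def target_gain(fusion_energy_mj, laser_energy_mj):
    """Fusion energy out divided by laser energy delivered to the target."""
    return fusion_energy_mj / laser_energy_mj

# NIF's best shot, as quoted: ~2 MJ of laser energy at a target gain of ~4.1
laser_in_mj = 2.0
gain = 4.1
fusion_out_mj = gain * laser_in_mj  # just over 8 MJ, as Callahan says
```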
Captured beams The target chamber of the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory. NIF has demonstrated that inertial fusion can work with deuterium–tritium fuel, but it is a research facility not a commercial endeavour. (Courtesy: Lawrence Livermore National Laboratory/Damien Jemison)
Can you explain Focused Energy’s approach to fusion?
Focused Energy was founded in July 2021, and has offices in the US and Germany. Just a month later, NIF achieved fusion ignition, which is when the fusion fuel becomes hot enough for the reactions to sustain themselves through their own internal heating (it is not the same as gain).
At NIF, lasers are fired into a small cylinder of gold or depleted uranium and the energy is converted into X-rays, which then drive the capsule. It’s what’s called laser indirect drive. At Focused Energy, however, we’re directly driving the capsule. The laser energy is put directly on the capsule, with no intermediate X-rays.
The advantage of this approach is that it avoids converting laser energy to X-rays, which is not very efficient and makes it much harder to get the high target gains that we need. At Focused Energy, we believe that direct drive is the best option for fusion energy to get us to a gain of over 50.
So is boosting efficiency one of your key goals to make fusion practical at an industrial level?
Yes, exactly. You have to remember that NIF was funded for national security purposes, not for fusion energy. It wasn’t designed to be a power plant – the goal was just to generate fusion energy for the first time.
In particular, the laser at NIF is less than 1% efficient but we believe that for fusion power generation, the laser needs to be about 10% efficient.
So one of the big thrusts for our company is to develop more efficient lasers that are driven by diodes – called diode-pumped solid-state lasers.
Can you tell us about Focused Energy’s two technologies called LightHouse and Pearl Fuel?
LightHouse is our fusion pilot plant. When operational, it will be the first power plant to produce engineering gain greater than one, meaning it will produce more energy than it took to drive it. In other words, we’ll be producing net electricity.
For NIF, in contrast, gain is the amount of energy out relative to the amount of laser energy in. But the laser is very inefficient, so the amount of electricity they had to put in to produce that eight megajoules of fusion energy is a lot.
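Combining the figures above (a target gain of about 4 on 2 MJ of laser light, from a laser that is less than 1% efficient) shows why target gain and engineering gain differ so sharply. The numbers in this sketch are illustrative, not Focused Energy’s:

```python
def engineering_gain(fusion_mj, laser_mj, laser_efficiency):
    """Fusion energy out relative to the electricity the laser draws."""
    electricity_in_mj = laser_mj / laser_efficiency
    return fusion_mj / electricity_in_mj

# 8.2 MJ of fusion from 2 MJ of laser light is a target gain of ~4,
# but with a ~1%-efficient laser the 2 MJ of light costs ~200 MJ of
# electricity, leaving the wall-plug picture far from break-even
eg = engineering_gain(8.2, 2.0, 0.01)  # roughly 0.04
```

This is why an engineering gain greater than one, rather than target gain alone, is the threshold for net electricity.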
Meanwhile, Pearl is the capsule the laser is aimed at in our direct drive system. It’s filled with deuterium–tritium fuel derived from sea water and lithium.
Rejuvenating nuclear A rendering of Focused Energy’s proposed fusion power plant at the Biblis fission power plant in Germany, which was shut down in 2011. (Courtesy: Focused Energy)
How do you develop the capsule to absorb the laser energy and give as much of it to the fuel as possible?
The development of the capsule for a fusion power plant is quite complicated. First, we need it to be a perfect sphere so it compresses spherically. The materials also need to efficiently absorb the laser light so you can minimize the size of that laser.
You have to be able to cheaply and quickly mass produce these targets too. While NIF does 400 shots per year, we will need to do about 900,000 shots a day – about 10 per second. We’ll also have to efficiently remove the exploded target material from the reactor chamber so that it can be cleared for the next shot.
It’s a very complicated design that needs to bring together all the pieces of the power plant in a consistent way.
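The throughput numbers quoted above can be sanity-checked with some quick, purely illustrative arithmetic:

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 s

shots_per_day = 900_000
rate_hz = shots_per_day / SECONDS_PER_DAY  # roughly 10.4 shots per second

# Scale of the jump from a research facility to a power plant:
nif_shots_per_year = 400
plant_shots_per_year = shots_per_day * 365
ratio = plant_shots_per_year / nif_shots_per_year
# A plant would fire NIF's annual shot count in well under a minute
```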
When you are designing these elements, what plays a bigger role – computer simulations or experiments?
Computer simulations play a large part in developing these designs. But one of the lessons that I learned from NIF was that, although the simulation codes are state of the art, you need very precise answers, and the codes are not quite good enough – experimental data play a huge role in optimizing the design. I expect the same will be true at Focused Energy.
A third factor that’s developing is artificial intelligence (AI) and machine learning. In fact, at Livermore, a project working on AI contributed to achieving gain for the first time in December 2022. I only see AI’s role in fusion getting bigger, especially once we are able to do higher repetition rate experiments, which will provide more training data.
What intellectual property (IP) does Focused Energy have in addition to that for the design of the Pearl target and the LightHouse plant?
We also have IP in the design of the lasers – they are not the same lasers as used at NIF. And I think there’ll be a lot of IP around how we fabricate the targets. After all, it’s pretty complicated to figure out how to build 900,000 targets a day at a reasonable cost.
We’ll see a lot of IP coming out of this project in those areas, but there’s also the act of putting it all together. How we integrate these things in order to make a successful plant is important.
What are the challenges of working with deuterium and tritium as materials for fusion?
We chose deuterium and tritium because they are the easiest elements to fuse, and have been successfully demonstrated as fusion fuel by NIF.
Deuterium can be found naturally in sea water, but getting tritium – which is radioactive – is more complicated. We breed it from lithium. Our reactor designs have lithium in them, and the neutrons from the fusion reactions breed the tritium.
Making sure that we have enough tritium, and figuring out how to extract that material to use it for future shots, is a big task. We have to be able to breed enough tritium to keep the plant going.
To work on this, we have a collaboration funded by the US Department of Energy to work with Savannah River National Lab in South Carolina. They have a lot of expertise in designing these tritium-extraction systems.
How will you capture the heat from the deuterium–tritium fusion reaction?
We will use a conventional steam cycle to convert the heat into electricity. It’s funny – we’ll have this very hi-tech way of producing heat, but at the end of the day, we will use a traditional system to produce the electricity from that heat.
So what’s the timeline on development?
Our plan is to have a pilot plant up by the end of the 2030s. It’s a fairly aggressive timeline given the things that we have to do. But that’s part of being a start-up – we have to take some risks and try to move quickly to achieve our goal.
To help that we have, in my view, a superpower – we have one foot in Europe and one foot in the US. There are a lot of opportunities between the two continents to partner with other companies, universities and governments. I think that makes us strong because we have access to some of the best talent from around the world.
How does working at Focused Energy compare with life as an academic at Lawrence Livermore?
There are a lot of similarities. My role now is to bring the knowledge and skills I learned at NIF to Focused Energy, so it’s been a natural transition.
In fact, there was a lot of pressure working at NIF. We were trying to move very quickly, so it’s actually very similar to working in a start-up like Focused Energy.
One of the big differences is the level of bureaucracy. Working for a government-funded lab meant there were lots of rules and paperwork, which takes up your time and you don’t always see the value in it.
In contrast, working for a small start-up means we can move more quickly because we don’t have as many of those kinds of constraints. Personally, I find that great because it leaves more time for the fun and interesting things – like trying to get fusion on the grid.
Are you still involved in academic research in any way?
As a firm, we are still out there collaborating with academics. Last year, for example, we gave four separate presentations at the American Physical Society Division of Plasma Physics meeting.
Active collaboration Debbie Callahan presenting the work of Focused Energy at the IEEE Pulsed Power and Plasma Science Conference in Berlin in June 2025. (Courtesy: Focused Energy)
I feel very strongly about peer review. Of course, publishing isn’t our number one priority, but we need feedback from others. We’re trying to do something that no-one’s done before, so it’s important to have our colleagues give us feedback on what we’re doing, point out mistakes we’re making or things we’re forgetting.
Working with universities and national labs in both Europe and the US is vital. Communicating with others in the field is important for us to get to where we want to go.
And of course, being an active part of the fusion community is good for recruitment too. We regularly give presentations at conferences that students attend. We meet those students and they learn about our work – and they might be future employees for our company.
What’s your advice for early-career physicists keen on joining the fusion industry?
There are so many opportunities right now, especially compared to the start of my career when the work was mainly just at universities or national labs. Nowadays, there are a lot of companies in the sector. Not all of them will survive because there’s only so much money, but there are still lots of opportunities. If you’re interested in fusion energy, go for it.
The field is always developing. There’s new stuff happening every day – and new problems. So if you like problem-solving, it’s great, especially if you want to do something good for the world.
There are also opportunities for people who are not plasma physicists. At Focused Energy we have people across so many fields – those who work on lasers, others who work on reactor design, some developing the AI and machine learning, and those who work on target physics, like me. To achieve fusion energy, we need physicists, engineers, mathematicians and computer scientists. We need researchers, technicians and operators. There’s going to be tremendous growth in this sector.
This winter in Bristol has been even gloomier than usual – so I was really looking forward to the Bristol Light Festival 2026. We went on the last evening of the event (28 February) and we were blessed with dry weather and warmish temperatures.
The festival featured 10 illuminated installations that were scattered throughout Bristol and the crowds were out in force to enjoy them. I wasn’t expecting to be thinking about physics as I wandered through town, but that’s exactly what I found myself doing at an installation called The Midnight Ballet by the British sculptor Will Budgett. Rather appropriately, it was located next to the HH Wills Physics Laboratory at the University of Bristol.
The display comprises seven sculptures that are illuminated from two different directions. The result is two very different images of ballerinas projected onto two screens (see image).
Art and science
So, why was I thinking about physics while admiring the work? To me the pieces embody – in a purely artistic way – the idea of superposition and measurement in quantum mechanics. A sculpture is capable of producing two different images (a superposition of states), but neither of these images is observable until a sculpture is illuminated from specific directions (the measurements).
Now, I know that this analogy is far from perfect. The sculptures can be illuminated – and so “measured” – from both directions at once, for example, which true quantum measurements of incompatible observables cannot be. But Budgett’s beautiful artworks really made me think about quantum physics. Given the exhibit’s close proximity to the university’s physics department, I suspect I am not the only one.
In 1942 physicists in Chicago, led by Enrico Fermi, famously produced the world’s first self-sustaining nuclear chain reaction. But it was to be another nine years before electricity was generated from fission for the first time. That landmark event occurred in 1951 when the Experimental Breeder Reactor-I in southern Idaho powered a string of four 200-watt light bulbs.
Our ability to harness nuclear power has been under constant development since then. In fact, according to the Nuclear Energy Association, a record 2667 terawatt-hours of electricity was generated by nuclear reactors around the world in 2024 – up 2.5% on the year before. But what, I wonder, is the potential of nuclear-powered transport?
A “nuclear engine” has many advantages, notably providing a vehicle with an almost unlimited supply of onboard power, with no need for regular refuelling. That’s particularly attractive for large ships and submarines, where fuel stops at sea are few and far between. It’s even better for spacecraft, which cannot refuel at all.
The downside is that a vehicle needs to be fairly large to carry even a small nuclear fission reactor – plus all the heavy shielding to protect passengers onboard. Stringent safety requirements also have to be met: if the vehicle were to crash or explode, the shield around the reactor would need to stay fully intact.
Ships and planes
Perhaps the best known transport application of nuclear power is at sea, where it’s used for warships, submarines and supercarriers. The world’s first nuclear-powered ship was the US Navy submarine Nautilus, which was launched in 1954. As the first vessel to have a nuclear reactor for propulsion, it revolutionized naval capabilities.
Compared to oil or coal-fired ships, nuclear-powered vessels can travel far greater distances. All the fuel is in the reactor, which means there is no need for additional fuel to be carried onboard – or for exhaust chimneys or air intakes. Even better, the fuel is relatively cheap. But operating and infrastructure costs are steep, which is why almost all nuclear-powered marine vessels belong to the military.
There have, however, been numerous attempts to develop other forms of nuclear-powered transport. While a nuclear-powered aircraft might seem unlikely, the idea of flying non-stop to the other side of the world, without giving off any greenhouse-gas emissions, is appealing. Incredible as it might seem, airborne nuclear reactors were actually trialled in the mid-1950s.
That was when the United States Air Force converted a B-36 bomber to carry an operational air-cooled reactor, weighing around 18 tons. The aircraft was not actually nuclear powered but it was operated in this configuration to assess the feasibility of flying a nuclear reactor. The aircraft made a total of 47 flights between July 1955 and March 1957.
In 1955 the Soviet Union also ran a project to adapt a Tupolev Tu-95 “Bear” aircraft for nuclear power. However, because of the radiation hazard to the crew and the difficulties in providing adequate shielding, the project was soon abandoned. Neither the American nor the Soviet atomic-powered aircraft ever flew and – because the technology was inherently dangerous – it was never considered for commercial aviation.
Cars and trains
The same fate befell nuclear-powered trains. In 1954 the US nuclear physicist Lyle Borst, then at the University of Utah, proposed a 360-tonne locomotive carrying a uranium-235 fuelled nuclear reactor. Several other countries, including Germany, Russia and the UK, also had schemes for nuclear locos. But public concerns about safety could not be overcome and nuclear trains were never built. The $1.2m price tag of Borst’s train didn’t help either.
Nuclear nightmare Ford’s Nucleon car thankfully never got past the concept stage.
In the late 1950s, meanwhile, there were at least four theoretical nuclear-powered “concept cars”: the Ford Nucleon, the Studebaker Packard Astral, the Simca Fulgur and the Arbel Symétric. Based on the assumption that nuclear reactors would get much smaller over time, it was felt that such a car would need relatively light radiation shielding. I certainly wouldn’t have wanted to take one of those for a spin; in the end, none got beyond the concept stage.
Perhaps the real success story of nuclear propulsion has been in space
But perhaps the real success story of nuclear propulsion has been in space. Between 1967 and 1988, the Soviet Union pioneered the use of fission reactors for powering surveillance satellites, with over 30 nuclear-powered satellites being launched during that period. And since the early 1960s, radioisotopes have been a key source of energy in space.
Driven by the desire for faster, more capable and longer duration space missions to the Moon, Mars and beyond, China, Russia and the US are now investing significantly in the next generation of nuclear reactor technology for space propulsion, where solar or radioisotope power will be inadequate. Several options are on the table.
One is nuclear thermal propulsion, whereby energy from a fission reactor heats a propellant fuel. Another is nuclear electric propulsion, in which the fission energy ionizes a gas that gets propelled out the back of the spacecraft. Both involve using tiny nuclear reactors of the kind used in submarines, except they’re cooled by gas, not water. Key programmes are aiming for in-space demonstrations in the next 5–10 years.
Where next?
Many of the first ideas for nuclear-powered transport were dreamed up little more than a decade after the first self-sustaining chain reaction. The appeal was clear: compared to other fuels, nuclear power has a high energy density and lasts much longer. It also has zero carbon emissions. Nuclear power must have seemed a panacea for all our energy needs – using it for cars and planes must have seemed an obvious next step.
However, there are major safety issues to address when nuclear sources are mobilized, from protecting passengers and crew, to ensuring appropriate safeguards should anything go wrong. And today we understand all too well the legacy of nuclear systems, from the safe disposal of spent fuel to the decommissioning of nuclear infrastructure and equipment.
We’ve struck the right balance when it comes to using nuclear power, confining it to sea-faring vessels under the watchful eye of the military
Here on Earth, I think we’ve struck the right balance when it comes to using nuclear power, confining it to sea-faring vessels under the watchful eye of the military. But as human-crewed, deep-space exploration beckons, a whole new set of issues will arise. There will, of course, be lots of technical and engineering challenges.
How, for example, will we maintain, repair and decommission nuclear-powered spacecraft? How will we avoid endangering crews or polluting the environment, especially when craft take off? Who should set appropriate legislation – and how do we police those rules? When it comes to space, nuclear will help us “to boldly go”; but it will also require bold regulation.
Congratulations on winning the 2025 JPhys Materials Early Career Award. What does this mean for you at this stage of your career?
I am really grateful to the Editorial Board of JPhys Materials for this award and for highlighting our work. This is a key recognition for the whole team behind the results presented in this research paper. We were taking a new turn in our research with this topic – trying to convince bubbles to assemble into crystalline structures towards architected materials – and this award is an important encouragement to continue pushing in this direction. At the crossroads of physics, physical chemistry, materials science and mechanics, we hope that this is only the beginning of our interdisciplinary journey around bubble assemblies and foam-based materials.
Your research explores elasto-capillarity and foam architectures. What inspired you to work in this fascinating area?
I always say that research is a series of encounters – with people, and with scientific themes and objects. I was lucky to discover this interdisciplinary world as an undergraduate, during an internship on elasto-capillarity at the intersection of physics and mechanics. The scientific communities working on these topics – and also on foams – are fantastic. In both fields, I was fortunate to meet talented people who inspired my future work, combining scientific skills and creativity.
In France, the GDR MePhy (mechanics and physics of complex systems) played a key role in broadening my perspective, by organizing workshops on many different topics, always with interdisciplinarity in mind.
You have demonstrated mechanically guided self-assembly of bubbles leading to crystalline foam structures. What’s the significance of this finding and how could it impact materials design?
In the paper, part of the journal’s Emerging Leaders collection, we provide a proof-of-concept with alginate and polyurethane materials to demonstrate that it is possible to use a fibre array to order bubbles into a crystalline structure, which can be tuned by choosing the fibre pattern, and to keep this ordering upon solidification to provide an alternative approach to additive manufacturing. This work is mainly fundamental, and we hope it paves the way toward a wider use of mechanical self-assembly principles in the context of porous architected materials.
The use of solidifying materials for those studies is two-fold: first, it allows us to observe the systems with X-ray microtomography once solidified, and second, it demonstrates that we could use such techniques to build actual solid materials.
Guiding bubbles with fibre arrays Arrangements of bubbles constrained by a network of fibres (highlighted with red dots) can exhibit long-range order and even include Kelvin cell arrangements. (Courtesy: J. Phys. Mater. 10.1088/2515-7639/adaa21)
What excites you most about this field right now, and where do you see the biggest opportunities for breakthroughs?
Combining physical understanding and materials science is certainly a great area of opportunity to better exploit mechanical self-assembly. It is very compelling to search for strategies based on physical principles to generate materials with non-trivial mechanical or acoustic properties. Capillarity, elasticity, stimuli-induced modification of systems, as well as geometrical considerations, all offer a great playground to explore. Curiosity-driven research has many advantages, and often, unexpected observations completely reshape the trajectory that we had in mind.
Could you tell us about your team’s current research priorities and the directions you are most focused on?
We believe that focusing first on the underlying physical principles, especially in terms of mechanical self-assembly, will provide the building blocks to generate novel materials. One key research axis we are exploring now is widening the range of materials that can be used for “liquid foam templating” (a general approach that involves controlling the properties of a foam in its liquid state to control the resulting properties of the foam after solidification). We focus on the solidification mechanisms, either by playing with external stimuli or by controlling the solidification reactions via the introduction of catalysts or solidifying agents.
What are the key challenges in achieving ordered structures during solidification?
Liquid foams provide beautiful hierarchical structures that are also short-lived. To take advantage of the mechanical self-assembly of bubbles to build solid materials, understanding the relevant timescales is key: depending on whether the foam has time to drain and destabilize before solidification or not, its final morphology can be completely different. Controlling both the ageing mechanisms and the solidification of the matrix is particularly challenging.
How do you see foam-based materials impacting real-world applications?
Both biomedical devices and soft robots often rely on soft materials – either to match the mechanical properties of biological tissues or to provide the compliance that soft robots need for motion. Being able to customize self-assembled hierarchical structures could allow us to explore a wider range of even softer materials, with specific properties resulting from their structural features. Applications could also extend to stiffer materials, mainly in the context of acoustic properties and wave propagation in such architected structures.
What are the most surprising behaviours you have observed during the processes of self-assembly and solidification of foams?
For the experiments detailed in the paper, the structures revealed their beauty once the X-ray tomography scans were performed. When we varied the parameters, we could only guess what was going to happen before getting the visual confirmation a few hours later. We were really happy to see that changing the pattern of the fibre array could indeed provide different ordered foam structures. In some other projects we are working on, foam stability has been a real challenge. We were sometimes surprised to obtain long-lasting liquid systems.
Creating order X-ray tomography scans of foams without a fibre array (left), showing a disordered structure, and with a square fibre array (right), showing large ordered zones. (Courtesy: J. Phys. Mater. 10.1088/2515-7639/adaa21)
Looking ahead, what are the next big questions you hope to tackle in your field?
In the fundamental context of the physics and mechanics of elasto-capillarity, the study of model systems involving self-assembly mechanisms will be a key aspect of our research. I then hope to successfully identify key applications for such architected systems – mainly in the fields of mechanical or acoustic metamaterials, but also for biomedical engineering. Regarding foam solidification, understanding the mechanisms of pore opening during the solidification process – leading to either closed-cell or open-cell foams – is also an important question for the community.
That fantastic experience allowed me to work in a group with numerous people from many different backgrounds, pushing the frontiers of interdisciplinarity in ways I could not have imagined before joining the Rogers group as a postdoc. At the moment, I am focusing on more fundamental questions, but it is definitely important to keep in mind what physics and materials science can bring to a broad variety of applications that offer solutions for society, in biomedical engineering and beyond.
Your research often combines theory and experiment and involves interdisciplinary collaboration. How do you see these collaborations shaping the future of your field?
It is always the scientific questions we want to answer – or the goals we aim to achieve – that should define the collaborations, bringing together multiple skills and backgrounds to tackle a shared challenge. Clearly, at the intersection of physics, physical chemistry, materials science and mechanics, there are many interesting questions that require contributions from different disciplines and skillsets. A key aspect is how people trained in different areas learn to “speak the same language” in order to advance interdisciplinary topics.
3D structural analysis The team’s foam research projects make extensive use of X-ray microtomography on the MINAMEC platform at Institut Charles Sadron. (Courtesy: Aurélie Hourlier-Fargette)
How do you envision your research evolving over the next 5–10 years?
I hope to be able to combine fundamental research and meaningful applications successfully – perhaps in the form of medical devices or tools for soft robots. There are many exciting possibilities, but it is certainly still too early for me to predict.
What advice would you give early-career researchers pursuing interdisciplinary projects?
Believe in what you are doing! We push boundaries more easily in areas we are passionate about, and we are also more productive when we work on topics for which we have found a supportive environment – with a unique combination of collaborators and access to state-of-the-art equipment.
In research, and especially in interdisciplinary fields, a key challenge is finding the right balance: you need to stay focused on the research projects that matter for you, while also keeping an open mind and staying aware of what others are doing. This broader vision helps you understand how your work integrates into a larger, more complex landscape.
Finally, what inspires you most as a scientist, and what keeps you motivated during challenging phases of research?
I have always liked working with desktop-scale experiments, where we can touch the objects and have an intuition for the physical mechanisms behind the observed phenomena.
Another source of inspiration is the beauty of the scientific objects that we study. With droplets, bubbles and foams – which are not only scientifically interesting but also beautiful – there is a strong connection with art and photography.
And finally, a key aspect of our professional life is the people we work with. It is clearly an additional motivation to feel part of a community where we can discuss both scientific questions and ways to improve how research is organized, as well as help younger students, PhDs and postdocs find their professional path. Working with amazing colleagues definitely helps when the path is longer or more difficult than expected.
This webinar explores how smart shielding is transforming the design of Leksell Gamma Knife radiosurgery environments, shifting from bunker‑like spaces to open, patient‑centric treatment rooms. Drawing from dose‑rate maps, room‑dimension considerations and modern shielding innovations, we’ll demonstrate how treatment rooms can safely incorporate features such as windows and natural light, improving both functionality and patient experience.
Dr Riccardo Bevilacqua will walk through the key questions that clinicians, planners and hospital administrators should ask when evaluating new builds or upgrading existing treatment rooms. We will highlight how modern shielding approaches expand design possibilities, debunk outdated assumptions and offer practical guidance on evaluating sites and educating stakeholders on what lies “beyond bunkers”.
Dr Riccardo Bevilacqua, a nuclear physicist with a PhD in neutron data for Generation IV nuclear reactors from Uppsala University, has worked as a scientist for the European Commission and at various international research facilities. His career has transitioned from research to radiation safety and back to medical physics, the field that first interested him as a student in Italy. Based in Stockholm, Sweden, he leads global radiation‑safety initiatives at Elekta. Outside of work, Riccardo is a father, a stepfather and writes popular‑science articles on physics and radiation.
If you have ever watched a basketball match, you will know that along with the sound of the ball being bounced, there is also the constant squeaking of shoes as the players move across the court.
Such noise is a common occurrence in everyday life, from the scraping of chalk on a blackboard to the squeal when brakes are applied on a bicycle.
Physicists in France, Israel, the UK and the US have now recreated the phenomenon in a lab and discovered that the squeaking is due to a previously unseen mechanism.
Previous studies looking at the effect suggested that “pulses” are created when two materials “stick and slip”, but such studies focused on slow movements, which do not create squeaks.
Bertoldi and team instead found that the noise was not caused by random stick–slip events, but rather by deformations of the rubber sole pulsing in bursts, or rippling, across the surface.
In this case, small parts of the sole change shape and lose and regain contact with the surface, with the “ripple” travelling at near-supersonic speeds.
The pitch of the squeak even matches the rate of the “bursts”, which is determined by the stiffness and thickness of the shoe sole.
The authors also found that if a soft surface is smooth, the pulses are irregular and produce no sharp sounds, whereas ridged surfaces – like the grip patterns on sports shoes – produce consistent pulse frequencies, resulting in a high-pitched squeak.
In another twist, lab experiments showed that in some instances, the slip pulses are triggered by triboelectric discharges – miniature lightning bolts caused by the friction of the rubber.
Indeed, the physics of these pulses shares similar features with fracture fronts in plate tectonics, and so a better understanding of the dynamics that occur between two surfaces may offer insights into friction across a range of systems.
“These results bridge two fields that are traditionally disconnected: the tribology of soft materials and the dynamics of earthquakes,” notes Shmuel Rubinstein from Hebrew University. “Soft friction is usually considered slow, yet we show that the squeak of a sneaker can propagate as fast as, or even faster than, the rupture of a geological fault, and that their physics is strikingly similar.”
An international team of researchers has shown that superconductivity can be modified by coupling a superconductor to a dark electromagnetic cavity. The research opens the door to the control of a material’s properties by modifying its electromagnetic environment.
Electronic structure defines many material properties – and this means that some properties can be changed by applying electromagnetic fields. The destruction of superconductivity by a magnetic field and the use of electric fields to control currents in semiconductors are two familiar examples.
There is growing interest in how electronic properties could be controlled by placing a material in a dark electromagnetic cavity that resonates with an electronic transition in that material. In this scenario, an external field is not applied to the material. Rather, interactions occur via quantum vacuum fluctuations within the cavity.
Holy Grail
“The Holy Grail of cavity materials research is to alter the properties of complex materials by engineering the electromagnetic environment,” explains the team – which includes Itai Keren, Tatiana Webb and Dmitri Basov at Columbia University in the US.
They created an optical cavity from a small slab of hexagonal boron nitride. This was interfaced with a slab of κ-ET, which is an organic low-temperature superconductor. The cavity was designed to resonate with an infrared transition in κ-ET involving the vibrational stretching of carbon–carbon bonds.
Hexagonal boron nitride was chosen because it is a hyperbolic van der Waals material. Van der Waals materials are stacks of atomically thin layers. Atoms are strongly bound within each layer, but the layers are only weakly bound to each other by the van der Waals force. The gaps between layers can act as waveguides, confining light that bounces back and forth within the slab. As a result, the slab behaves like an optical cavity with an isofrequency surface that is a hyperboloid in momentum space. Such a cavity supports a large number of modes and vacuum fluctuations, which enhances interactions with the superconductor.
Superfluid suppression
The researchers found that the presence of the cavity caused a strong suppression of superfluid density in κ-ET (a superconductor can be thought of as a superfluid of charged particles). The team mapped the superfluid density using magnetic force microscopy. This involved placing a tiny magnetic tip near to the surface of the superconductor. The magnetic field of the tip cannot penetrate into the superconductor (the Meissner effect) and this results in a force on the tip that is related to the superfluid density. They found that the density dropped by as much as 50% near the cavity interface.
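The link between the tip's screening and superfluid density can be illustrated with the London penetration depth, λ_L = √(m / (μ₀ n_s e²)): a shallower superfluid density lets the tip's field penetrate further, changing the force on the tip. This is a generic textbook relation, not the team's analysis, and the carrier density used below is a placeholder value chosen only for illustration:

```python
import math

# London penetration depth: lambda_L = sqrt(m / (mu0 * n_s * e^2)).
# A magnetic tip's field is screened over ~lambda_L, so lambda_L (and hence
# the force on the tip) tracks the local superfluid density n_s.
# The carrier density below is a hypothetical placeholder, not a value
# from the paper.

M_E = 9.109e-31       # electron mass, kg
E = 1.602e-19         # elementary charge, C
MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def london_depth(n_s):
    """Penetration depth (m) for superfluid density n_s (m^-3)."""
    return math.sqrt(M_E / (MU0 * n_s * E**2))

n_s = 1e27                          # hypothetical carrier density, m^-3
lam_full = london_depth(n_s)        # far from the cavity
lam_half = london_depth(0.5 * n_s)  # 50% suppression, as near the interface

print(f"lambda at full n_s: {lam_full * 1e9:.0f} nm")
print(f"lambda at half n_s: {lam_half * 1e9:.0f} nm")
# Halving n_s stretches the penetration depth by sqrt(2) ~ 1.41
```

Because λ_L scales as 1/√n_s, the 50% drop in superfluid density reported near the cavity interface corresponds to the screening length growing by a factor of √2, a change well within the sensitivity of magnetic force microscopy.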
The team also investigated the optical properties of the cavity using scattering-type scanning near-field optical microscopy (s-SNOM). This involves firing tightly focused laser light at an atomic force microscope (AFM) tip that is tapping on the surface of the cavity. The scattered light is processed to reveal the near-field component of light from just the region of the cavity below the tip.
The tapping tip creates phonon polaritons in the cavity, which are particle-like excitations that couple lattice vibrations to light. Analysing the near-field light across the cavity confirmed that the carbon stretching mode of κ-ET is coupled to the cavity. Calculations done by the team suggest that cavity coupling reduces the amplitude of the stretching mode vibrations.
Physicists know that superconductivity can arise from interactions between electrons and phonons (lattice vibrations), so it is possible that the reduction in superfluid density is related to the suppression of stretching-mode vibrations. This, however, is not certain because κ-ET is an unconventional superconductor, meaning physicists do not understand the mechanism that causes its superconductivity. Further experiments could therefore shed light on the mysteries of unconventional superconductors.
“We are confident that our experiments will prompt further theoretical pursuits,” the team tells Physics World. The researchers also believe that practical applications could be possible. “Our work shows a new path towards the manipulation of superconducting properties.”
On 26 April 2026, it will be 40 years since the explosion at Unit 4 of the Chernobyl Nuclear Power Plant – the worst nuclear accident the world has known. In the early hours of 26 April 1986, a badly designed reactor, operated under intense pressure during a safety test, ran out of control. A powerful explosion and prolonged fire followed, releasing radioactive material across Ukraine, Belarus and Russia, with smaller quantities spreading across Europe.
In this episode of Physics World Stories, host Andrew Glester speaks with Jim Smith, an environmental physicist at the University of Portsmouth. Smith began his academic life studying astrophysics, but always had an interest in environmental issues. His PhD in applied mathematics at Liverpool focused on modelling how radioactive material from Chernobyl was transported through the atmosphere and deposited as far away as the Lake District in north-western England.
Smith recounts his visits to the abandoned Chernobyl plant and the 1000-square-mile exclusion zone, now home to roaming wolves and other thriving wildlife. He wants a rational debate about the relative risks, arguing that the accident’s social and economic consequences have significantly outweighed the long-term impacts of radiation itself.
The discussion ranges from the politics of nuclear energy and the hierarchical culture of the Soviet system, to lessons later applied during the Fukushima accident. Smith makes the case for nuclear power as a vital complement to renewables.
He also shares the story behind the Chernobyl Spirit Company – a social enterprise he has launched with Ukrainian colleagues, producing safe, high-quality spirits to support Ukrainian communities. Listen to find out whether Andrew Glester dared to try one.