An analysis of more than 5.2 million papers in 5000 different journals has revealed a dramatic rise in the use of artificial intelligence (AI) tools in academic writing across all scientific disciplines, especially physics.
However, the analysis has revealed a big gap between the number of researchers who use AI and those who admit to doing so – even though most scientific journals have policies requiring the use of AI to be disclosed.
Carried out by data scientist Yi Bu from Peking University and colleagues, the analysis looks at papers that are listed in the OpenAlex dataset and were published between 2021 and 2025.
To assess the impact of editorial guidelines introduced in response to the growing use of generative AI tools such as ChatGPT, they examined journal AI-writing policies, looked at author disclosures and used AI to see if papers had been written with the help of technology.
The AI detection analysis reveals that the use of AI writing tools has increased dramatically across all scientific disciplines since 2023. It also finds that 70% of journals have adopted AI policies, which primarily require authors to disclose the use of AI-writing tools.
IOP Publishing, which publishes Physics World, for example, has a journals policy that supports authors who use AI in a “responsible and appropriate” manner. It encourages authors, however, to be “transparent about their use of any generative AI tools in either the research or the drafting of the manuscript”.
A new framework
But in the new study, a full-text analysis of 75 000 papers published since 2023 reveals that only 76 articles (about 0.1% of the total) explicitly disclosed the use of AI writing tools.
In addition, the study finds no significant difference in the use of AI between journals that have disclosure policies and those that do not, which suggests that disclosure requirements are being ignored – what the authors call a “transparency gap”.
The study also finds that researchers from non-English-speaking countries are more likely to rely on AI writing tools than native English speakers. Increases in the use of AI writing tools are found to be particularly rapid in journals with high levels of open-access publishing.
The authors now call for a re-evaluation of ethical frameworks to foster responsible AI integration in science. They state that prohibition or disclosure requirements are insufficient to regulate AI use, with their results showing that researchers are not complying with policies.
The authors argue that instead of “opposition and resistance”, “proactive engagement and institutional innovation” is needed “to ensure AI technology truly enhances the value of science”.
Humanity has had a complicated relationship with machines and technology for centuries. While we created these inventions to make our lives easier, and have become heavily reliant upon them, we have often feared their impact on society.
In her debut book, The Body Digital: a Brief History of Humans and Machines from Cuckoo Clocks to ChatGPT, Vanessa Chang tells the story of this symbiotic partnership, covering tools as diverse as the self-playing piano and generative AI products. The short book combines creative storytelling, an inward look at our bodies and interpersonal relationships, and a detailed history of invention. Chang – who is the director of programmes at Leonardo, the International Society for the Arts, Sciences, and Technology in California – offers us a framework for examining future worlds based on the relationship between humanity and machines.
“Technology” has no easy definition. The Body Digital therefore takes a broad approach, looking at software, machines, infrastructure and tools. Chang examines objects as mundane as the pen and as complex as the road networks that define our cities. She focuses on the interplay between machine and human: how tools have lightened our load and become embedded in our behaviour. In doing this she asks the reader: is it possible for the human body to extract itself from technology?
Each chapter of the book centres on a different part of the human anatomy – hand, voice, ear, eye, foot, body and mind – looking at the historical relationship between that body part and technology. Chang follows this thread through to the modern day and the large-scale impact these technologies have had on the development of our communities, communications and social structures. The chapters are a vehicle for Chang to present interesting pieces of history and discussions about society and culture. Her explanations are tightly knit, and the book covers huge ground in its relatively concise page count.
Chang avoids “doomerism”, remaining even-handed about our reservations towards technological advancement. She is careful in her discussion of new technologies, particularly those that are often fraught in public discourse, such as the use of generative AI in creating art and the potential harms of facial-recognition software.
She includes genuine concerns – like biases creeping into training data for large language models – but mitigates these fears by discussing how technologies have become enmeshed in human culture through history. Our fear of some technologies has been unfounded – take, for example, the idea that the self-playing piano would supersede live piano concerts. These debates, Chang argues, have happened throughout the history of technology, and some of the same arguments from the past can easily be applied to future technology.
While this commentary is often thought-provoking, it sometimes doesn’t go as far as it might. There is relatively little discussion of the technological ecosystem we currently live in and how it might shape our level of optimism about the future. In particular, the book says comparatively little about human labour being supplanted by machine labour, or about the impact of tech monoliths like Apple and Google.
In one example, Chang discusses the ways in which “telecommunication technologies might serve as channels into the afterlife”, allowing us to use technology to artificially recreate the voices of our loved ones after death. While the book contains a full discussion of how uncanny and alarming this type of “artistic necrophilia” might be, Chang tempers fear by pointing out that by being careful with our data and our digital selves, we might be able to “mitigate the transformation of [our] voices into pure commodities”. However, her treatment of who controls our data, of the relationship between data and capital, and of how much say we really have over how our data are used, is somewhat limited.
Poetic technology
The line between offering interesting ideas and overexplaining is a fine one to tread, and Chang navigates it successfully. One striking feature of The Body Digital is the quality of the prose. Chang has a background in fiction writing and her descriptions reflect this. An automaton is anthropomorphized as a “petite, barefoot boy” with a “cloud of brown hair”; and the humble footpath is described as “veer[ing] at a jaunty angle from the pavement, an unruly alternative to concrete”. As a consequence, her ideas are interesting and memorable, making the book readable and often moving.
Particularly impressive is Chang’s attitude to exposition, which mimics fiction’s age-old adage of “show, don’t tell”. She gives the reader enough information to learn something new in context and ask follow-up questions, without banging the reader over the head with an answer to these questions. The book mimics the same relationship between the written word and human consciousness that Chang discusses within it. The Body Digital marinates with the reader in the way any good novel might, while teaching them something new.
The result is a poetic and well-observed text, which offers the reader a different way of understanding humanity’s relationship with technology. It reminds us that we have coexisted with machines throughout the history of our species, and that they have been helpful and positively shaped the direction of our world. While she covers too much ground to gaze in any one direction for too long, the reader is likely to come away enriched and perhaps even hopeful. And, as Chang points out, we have the opportunity to shape the future of technology, by “attending to the rich, idiosyncratic intelligence of our bodies”.
2025 Melville House Publishing 256pp £14.99 pb / £9.49 ebook
Genuine multipartite entanglement is the strongest form of entanglement, in which every part of a quantum system is entangled with every other part. It plays a central role in advanced quantum tasks such as quantum metrology and quantum error correction. To detect this deep form of entanglement in practice, researchers often use entanglement witnesses – fast, experimentally friendly tests that certify entanglement whenever a measurable quantity exceeds a certain bound.
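As a concrete illustration – a standard textbook construction, not one of the witnesses derived in this work – the fidelity-based witness for an N-qubit GHZ state certifies genuine multipartite entanglement whenever the measured fidelity with the GHZ state exceeds 1/2:

```latex
% A standard fidelity-based witness (textbook example, not taken from the paper).
% <W> >= 0 for every biseparable state, so measuring <W> < 0 -- equivalently a
% GHZ fidelity above 1/2 -- certifies genuine multipartite entanglement.
\[
  W_{\mathrm{GHZ}} = \tfrac{1}{2}\,\mathbb{1} - |\mathrm{GHZ}_N\rangle\langle\mathrm{GHZ}_N|,
  \qquad
  |\mathrm{GHZ}_N\rangle = \tfrac{1}{\sqrt{2}}\bigl(|0\rangle^{\otimes N} + |1\rangle^{\otimes N}\bigr).
\]
```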
In this work, the researchers significantly extend previous witness‑construction methods to cover a much broader family of multipartite quantum states. Their approach is built within the multi‑qudit stabiliser formalism, a powerful framework widely used in quantum error correction and known for describing large classes of entangled states, both pure and mixed. They generalise earlier results in two major directions: (i) to systems with arbitrary prime local dimension, going far beyond qubits, and (ii) to stabiliser subspaces, where the stabiliser defines not just a single state but an entire entangled subspace.
This generalisation allows them to construct witnesses tailored to high‑dimensional graph states and to stabiliser‑defined subspaces, and they show that these witnesses can be more robust to noise than those designed for multiqubit systems. In particular, witnesses tailored to GHZ‑type states achieve the strongest resistance to white noise, and in some cases the authors identify the most noise‑robust witness possible within this construction. They also demonstrate that stabiliser‑subspace witnesses can outperform graph‑state witnesses when the local dimension is greater than two.
Overall, this research provides more powerful and flexible tools for detecting genuine multipartite entanglement in noisy, high‑dimensional and computationally relevant quantum systems. It strengthens our ability to certify complex entanglement in real‑world quantum technologies and opens the door to future extensions beyond the stabiliser framework.
Acoustic waves are usually thought of as purely longitudinal: they move back and forth in the direction the wave is travelling and have no intrinsic rotation, and therefore no spin (spin‑0). Recent work has shown that acoustic waves can in fact carry local spin‑like behaviour. Until now, however, the total spin angular momentum of an acoustic field was believed to vanish, with the local positive and negative spin contributions cancelling each other to give an overall global spin of zero. In this work, the researchers show that acoustic vortex beams can carry a non‑zero longitudinal spin angular momentum when the beam is guided by certain boundary conditions. This overturns the long‑held assumption that longitudinal waves cannot possess a global spin degree of freedom.
Using a self‑consistent theoretical framework, the researchers derive the full spin, orbital and total angular momentum of these beams and reveal a new kind of spin–orbit interaction that appears when the beam is compressed or expanded. They also uncover a detailed relationship between the two competing descriptions of angular momentum in acoustics: the canonical (Minkowski) and kinetic (Abraham) forms. They demonstrate that only the canonical (Minkowski) form is truly conserved and directly tied to the beam’s azimuthal quantum number, which describes how the wave twists as it travels.
The team further demonstrates this mechanism experimentally using a waveguide with a slowly varying cross‑section. They show that the effect is not limited to this setup: it can also arise in evanescent acoustic fields and even in other wave systems such as electromagnetism. These results introduce a missing fundamental degree of freedom in longitudinal waves, offer new strategies for manipulating acoustic spin and orbital angular momentum, and open the door to future applications in wave‑based devices, underwater communication and particle manipulation.
Quantum-entangled sensors placed over a kilometre apart could allow interferometric measurements of optical light with single photon sensitivity, experiments in the US suggest. While this proof-of-principle demonstration of a theoretical proposal first made in 2012 is not yet practically useful for astronomy, it marks a significant step forward in quantum sensing.
Radio telescopes are often linked together to provide more detailed images with better angular resolution than would otherwise be possible. The Event Horizon Telescope array, for example, performs very long baseline interferometry of signals from observatories on four continents to take astrophysical images such as the first picture of a black hole in 2019. At shorter wavelengths, however, much weaker signals are often parcelled into higher-energy photons. “You start getting this granularity at the single photon level,” says Pieter-Jan Stas at Harvard University.
According to textbook quantum mechanics, one can create an interferometric image from single photons by recombining their paths at a single detector – provided that their paths are not measured before then. This principle is used in laboratory spectroscopy. In astronomical observations, however, attempting to transport single photons from widely spread telescopes to a central detector would almost certainly result in them being lost. The baseline of infrared and optical telescopes is therefore restricted to about 300 m.
In 2012, theorist Daniel Gottesman, then at the Perimeter Institute for Theoretical Physics in Canada, and colleagues proposed using a central single source of entangled photons as a quantum repeater to generate entanglement between two detection sites, putting them into the same quantum state. The effect of an incoming photon on this combined state could therefore be measured without having to recombine the paths and collect the photon at a central detector.
Hidden information
“In reality, the photon will be in a superposition of arriving at both of the detectors,” says Stas. “That’s where this advantage comes from – you have this photon that is delocalized and arrives at both the left and the right station – so you truly have this baseline that helps you with improving your resolution, but to do this you have to keep the ‘which path’ information hidden.”
The 2012 proposal was not thought to be practical, because it required distributing entanglement at a rate comparable with the telescope’s spectral bandwidth. In 2019, however, Harvard’s Mikhail Lukin and colleagues proposed integrating a quantum memory into the system. In the new research, they demonstrate this in practice.
The team used qubits made from silicon–vacancy centres in diamond. These can be very long-lived because the spin of the centre’s electron (which interacts with the photon) is mapped onto the nuclear spin, which is very stable. The researchers used a central laser as a coherent photon source to generate heralded entanglement, certifying that the qubits were “event-ready”. “It’s not like you have to receive the space signal to be simultaneous with the arrival of the photon,” says team member Aziza Suleymanzade at the University of California, Berkeley. “In our case, we distribute entanglement, and it has some coherence time, and during that time you can detect your signal.”
Using two detectors placed in adjacent laboratories and synthetic light sources, the researchers demonstrated photon detection above vacuum fluctuations in fibres over 1.5 km in length. They acknowledge that much work remains before this can be viable in practical astronomy, such as a higher rate of entanglement generation, but Stas says that “this is one step towards bringing quantum techniques into sensing”.
Similar work in China
The research is described in Nature. Researchers in China led by Jian-Wei Pan have achieved a similar result, but their work has yet to be peer reviewed.
Yujie Zhang of the University of Waterloo in Canada points out that Lukin and colleagues have done similar work on distributed quantum communication and the quantum internet. “The major difference is that for most of the original protocols, what people care about is trying to entangle different quantum memories in the quantum network so then they can do gates on those quantum memories,” he says. “There’s nothing about extra information from the environment…This one is different in that they have to get the information mapped from the starlight to their quantum memory.” He notes several difficulties acknowledged by the researchers – such as the fact that vacancy centres are very narrowband – but says that now people know the system can work, they can set about showing that it can beat classical systems in practice.
“I think this is definitely a step towards [realizing the protocol envisaged in 2012],” says Gottesman, now at the University of Maryland, College Park. “There have been previous experiments where they generated the entanglement and they did some interference but they didn’t have the repeater aspect, which is the real value-added aspect of doing quantum-assisted interferometry. Its rate is still well short of what you’d need to have a functioning telescope, but this is putting one of the important pieces into place.”
The heads of university physics departments in the UK have published an open letter expressing their “deep concern” about funding changes announced late last year by UK Research and Innovation (UKRI), the umbrella organization for the UK’s research councils.
Addressed to science minister Patrick Vallance, the letter says the cuts are causing “reputational risk” and calls for “strategic clarity and stability” to ensure that UK physics can thrive.
It has so far been signed by 58 people who represent 45 different universities, including Birmingham, Bristol, Cambridge, Durham, Imperial College, Liverpool, Manchester and Oxford.
The letter says that the changes at UKRI “risk undermining science’s fundamental role in improving our prosperity, health and quality of life, as well as delivering sustainable growth through innovation, productivity and scientific leadership”.
The signatories warn that the UK’s international standing in physics is “a strategic asset” and that areas such as particle physics, astronomy and nuclear physics are “especially important”.
Raising concerns
The decision by the heads of physics to write to Vallance comes in the wake of UKRI stating in December that it will be adjusting how it allocates government funding for scientific research and infrastructure.
The Science and Technology Facilities Council (STFC), which is part of UKRI, stated that projects would need to be cut given inflation, rising energy costs and “unfavourable movements in foreign exchange rates” that have increased its costs by over £50m a year.
The STFC noted that it would need to reduce spending from its core budget by at least 30% relative to 2024/2025 levels, while also cutting the number of projects financed by its infrastructure fund.
The council has already said two UK national facilities – the Relativistic Ultrafast Electron Diffraction and Imaging facility and a mass spectrometry centre dubbed C‑MASS – will now not be prioritized.
In addition, two international particle-physics projects will not be supported: a UK-led upgrade to the LHCb experiment at CERN and a contribution to the Electron-Ion Collider, which is currently being built at the Brookhaven National Laboratory.
Philip Burrows, director of the John Adams Institute for Accelerator Science at the University of Oxford, who is one of the signatories of the letter, told Physics World that the cuts are “like buying a Formula-1 car but not being able to afford the driver”.
Burrows admits that the STFC has been hit “particularly hard” by its flat-cash settlement, given that a large fraction of its expenditure goes on the UK’s subscriptions to international facilities and on operating the UK’s flagship national facilities.
But because most of the rest of the STFC’s budget supports scientists to do research at those facilities, he is concerned that the funding cuts will fall disproportionately on the science programme.
“Constraining these areas risks weakening the very talent pipeline on which the UK’s innovation economy depends,” the letter states. “Fundamental physics also delivers substantial public engagement and cultural impact, strengthening public support for science and reinforcing the UK’s reputation as a global scientific leader.”
The signatories also say they are “particularly concerned” about the UK’s capacity to lead the scientific exploitation of major international projects. “An abrupt pause in funding for key international science programmes risks damaging UK researchers’ competitive advantage into the 2040s,” they note.
The letter now calls on the government to work with UKRI and STFC to “stabilize” curiosity-driven grants for physics within STFC “at a minimum of flat funding in real terms” as well as protect postdocs, students and technicians from the cuts.
It also calls on the UK to develop a long-term strategy for infrastructure, and on the government to address facilities cost pressures through “dedicated and equitable mechanisms so that external shocks do not singularly erode the UK’s research base in STFC-funded research areas”.
The news comes as Michele Dougherty today formally stepped down from her role as IOP president. Dougherty, who also holds the position of executive chair of the STFC, had previously stepped back from presidential duties on 26 January due to a conflict of interest.
Paul Howarth, who has been IOP president-elect since September, will now become IOP president.
The Earth’s magnetic poles have reversed 540 times over the past 170 million years. Usually, these reversals are relatively speedy in geological terms, taking around 10,000 years to complete. Now, however, scientists in the US, France and Japan have found evidence of much slower reversals deep in Earth’s geophysical past. Their findings could have important implications for our understanding of Earth’s climate and evolutionary history.
Scientists think the Earth’s magnetic field arises from a dynamo effect created by molten metal circulating inside the planet’s outer core. Its consequences include the bubble-like magnetosphere, which shields us from the solar wind and cosmic radiation that would otherwise erode our atmosphere.
From time to time, this field weakens, and the Earth’s magnetic north and south poles switch places. This is known as a geomagnetic reversal, and we know about it because certain types of terrestrial rocks and marine sediment cores contain evidence of past reversals. Judging from this evidence, reversals usually take a few thousand years, during which time the poles drift before settling again on opposite sides of the globe.
Looking into the past
Researchers led by Yuhji Yamamoto of Kochi University, Japan, and Peter Lippert at the University of Utah, US, have now identified two major exceptions to this rule. Drawing on evidence obtained during the Integrated Ocean Drilling Program expedition in 2012, they say that around 40 million years ago, during the Eocene epoch, the Earth experienced two reversals that took 18,000 and 70,000 years respectively.
The team based these findings on cores of sediment extracted off the coast of Newfoundland, Canada, from up to 250 metres below the seabed. These cores contain crystals of magnetite – an iron oxide – produced by a combination of ancient microorganisms and other natural processes. The crystals align with the polarity of the Earth’s magnetic field at the time the sediments were deposited. Because marine sediments are far less affected by erosion and weathering than sediments onshore, Yamamoto says the information they preserve about past Earth environments – including geomagnetic conditions – is exceptionally clean.
Significance for evolutionary history
The team says the difference between a geomagnetic reversal that takes 10,000 years and one that takes 70,000 years is significant because prolonged intervals of weaker geomagnetic fields would have exposed the Earth to higher amounts of cosmic radiation for longer. The effects on living creatures could have been devastating, says Lippert. As well as higher rates of genetic mutations due to increased radiation, he points out that organisms from bacteria to birds use the Earth’s magnetic field while navigating. “A lower strength field would create sustained pressures on these organisms to adapt,” he says.
If humans had existed at the time of these reversals, the effects on our species could have been similarly profound. “Modern humans (Homo sapiens) are thought to have begun dispersing out of Africa only about 50,000 years ago,” Yamamoto observes. “If a geomagnetic reversal can persist for a period comparable to – or even longer than – this timescale, it implies that the Earth’s environment could undergo substantial and continuous change throughout the entire period of human evolution.”
Although our genetic ancestors dodged that particular bullet, Yamamoto thinks the team’s findings, which are published in Nature Communications Earth & Environment, offer a valuable perspective on how evolution and environmental change could interact in the future. “This period corresponds to an epoch when Earth was far warmer than it is today, and when Greenland is thought to have been a truly ‘green land’,” he explains. “We also know that atmospheric CO₂ concentrations during this era were comparable to levels projected for the end of this century, making it an important ‘climate analogue’ for understanding near‑future climate conditions.”
The discovery could also have more direct implications for future life on Earth. The magnitude of the Earth’s magnetic field has decreased by around 5% in each century since records began. This decrease, combined with the slow drift of our current magnetic North Pole towards Siberia, could indicate that we are in the early stages of a new geomagnetic reversal. Re‑evaluating the duration of such reversals is thus not only an issue for geophysicists, Yamamoto says. It’s also an important opportunity to reconsider fundamental questions about how we should coexist with our planet and how we ought to confront a continually changing environment.
Motivation for future studies
John Tarduno, a geophysicist at the University of Rochester, US, who was not involved in the study, describes it as “outstanding” work that “documents an exciting discovery bearing on the nature of magnetic shielding through time and the geomagnetic reversal process”. He agrees that reduced shielding could have had biotic effects, and adds that the discovery of long reversal transitions could influence scientific thinking on the statistics of field reversals – including questions of whether the field retains some “memory” of previous events. “This new study will provide motivation to examine reversal transitions at very high resolution,” Tarduno says.
For their next project, Yamamoto and colleagues aim to use sequences of lava flows in Iceland to analyse how the Earth’s magnetic field evolved. Lippert’s team, for its part, will be studying features called geomagnetic excursions that appear in both deep sea and terrestrial sediments. Such excursions are evidence of short-lived, incomplete attempts at field reversals, and Lippert explains that they can be excellent stratigraphic markers, helping scientists correlate records on geological timescales and compare them with samples taken from different parts of the world. “Excursions, like long reversals, can inform our understanding of what ultimately causes a geomagnetic field reversal to start and persist to completion,” he says.
Fusion adopter Debbie Callahan is chief strategy officer at Focused Energy. (Courtesy: Focused Energy)
With the world’s energy demands increasing, and our impact on the climate becoming ever clearer, the search is on for greener, cleaner energy production. That’s why research into fusion energy is undergoing something of a renaissance.
Construction of the International Thermonuclear Experimental Reactor (ITER) in France – the world’s largest fusion experiment – is currently under way, while there are numerous other large-scale facilities and academic research projects too. There has also been a rise in the number of smaller commercial companies joining the race.
One person at the forefront of fusion research is Debbie Callahan – a plasma physicist who spent 35 years working at the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory in the US. She is now chief strategy officer at Focused Energy, a laser-fusion firm based in Germany and California, which is trying to generate energy from the laser-driven fusion of hydrogen isotopes.
Callahan recently talked to Physics World online editor Hamish Johnston about working in the fusion sector, Focused Energy’s research and technology, and the career opportunities available. The following is an edited extract of their conversation, which you can hear in full on the Physics World Weekly podcast.
How does NIF’s approach to fusion differ from that taken by magnetic confinement facilities such as ITER?
To get fusion to happen, you need three elements that we sometimes call the triple product. You need a certain amount of density in your plasma, you need temperature, and you need time. The product of those has to be over a certain value.
Magnetic fusion and inertial fusion are kind of the opposite of each other. In a magnetic fusion system like ITER, you have a low-density plasma, but you hold it for a long time. You do that by using magnetic fields that trap the plasma and keep it from escaping.
In inertial fusion – like at NIF – it’s the opposite. You don’t hold the plasma together at all, it’s only held by its own inertia, and you have a very high density for a short time. In both cases, you can make fusion happen.
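For context – this is a textbook figure rather than one quoted in the interview – the triple-product condition for deuterium–tritium fuel is often written as follows:

```latex
% Approximate Lawson-style ignition condition for D-T fuel (textbook value):
% plasma density n, temperature T and energy confinement time tau_E.
\[
  n \, T \, \tau_E \;\gtrsim\; 3 \times 10^{21}\ \mathrm{keV\,s\,m^{-3}}
\]
% Magnetic confinement (ITER): modest density held for seconds.
% Inertial confinement (NIF): enormous density for a confinement time of ~1e-10 s.
```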
What is the current state of the art at NIF, in terms of how much energy you have to put in to achieve fusion versus how much you get out?
To date, the best shot at NIF – by which I mean an individual, high-energy laser bombardment of the target capsule – occurred during an experiment in April 2025, which had a target gain of about 4.1. That means that they got out 4.1 times the amount of energy that they put in. The incident laser energy for those shots is around two megajoules, so they got out about eight megajoules.
This is a tremendous accomplishment that’s taken decades to get to. But to make inertial fusion energy successful and use it in a power plant, we need significantly higher gains of more like 50 to 100.
Captured beams The target chamber of the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory. NIF has demonstrated that inertial fusion can work with deuterium–tritium fuel, but it is a research facility not a commercial endeavour. (Courtesy: Lawrence Livermore National Laboratory/Damien Jemison)
Can you explain Focused Energy’s approach to fusion?
Focused Energy was founded in July 2021, and has offices in the US and Germany. Just a month later, at NIF, we achieved fusion ignition, which is when the fusion fuel becomes hot enough for the reactions to sustain themselves through their own internal heating (it is not the same as gain).
At NIF lasers are fired into a small cylinder of gold or depleted uranium and the energy is converted into X-rays, which then drive the capsule. It’s what’s called laser indirect drive. At Focused Energy, however, we’re directly driving the capsule. The laser energy is put directly on the capsule, with no intermediate X-rays.
The advantage of this approach is that it avoids converting laser energy to X-rays, a step that is not very efficient and makes it much harder to reach the high target gains that we need. At Focused Energy, we believe that direct drive is the best option for fusion energy to get us to a gain of over 50.
So is boosting efficiency one of your key goals to make fusion practical at an industrial level?
Yes, exactly. You have to remember that NIF was funded for national security purposes, not for fusion energy. It wasn’t designed to be a power plant – the goal was just to generate fusion energy for the first time.
In particular, the laser at NIF is less than 1% efficient but we believe that for fusion power generation, the laser needs to be about 10% efficient.
So one of the big thrusts for our company is to develop more efficient lasers that are driven by diodes – called diode-pumped solid-state lasers.
Can you tell us about Focused Energy’s two technologies called LightHouse and Pearl Fuel?
LightHouse is our fusion pilot plant. When operational, it will be the first power plant to produce engineering gain greater than one, meaning it will produce more energy than it took to drive it. In other words, we’ll be producing net electricity.
For NIF, in contrast, gain is the amount of energy out relative to the amount of laser energy in. But the laser is very inefficient, so the amount of electricity they had to put in to produce that eight megajoules of fusion energy is a lot.
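To see why the laser’s inefficiency matters so much, here is a rough back-of-the-envelope calculation using the NIF figures quoted above; the 1% wall-plug efficiency is an assumed round number based on the “less than 1%” mentioned earlier.

```python
# Rough, illustrative arithmetic only; the efficiency is an assumed round number.
laser_energy_on_target_MJ = 2.0   # NIF incident laser energy (quoted above)
target_gain = 4.1                 # best NIF shot, April 2025 (quoted above)
fusion_yield_MJ = laser_energy_on_target_MJ * target_gain              # ~8 MJ

wall_plug_efficiency = 0.01       # "less than 1% efficient" laser, taken as 1%
electricity_in_MJ = laser_energy_on_target_MJ / wall_plug_efficiency   # ~200 MJ

# Engineering gain compares fusion output with the electricity drawn from the
# grid to run the laser (before converting the fusion heat back to electricity).
engineering_gain = fusion_yield_MJ / electricity_in_MJ
print(f"fusion yield ~ {fusion_yield_MJ:.1f} MJ, engineering gain ~ {engineering_gain:.2f}")
```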
Meanwhile, Pearl is the capsule the laser is aimed at in our direct drive system. It’s filled with deuterium–tritium fuel derived from sea water and lithium.
Rejuvenating nuclear A rendering of Focused Energy’s proposed fusion power plant at the Biblis fission power plant in Germany, which was shut down in 2011. (Courtesy: Focused Energy)
How do you develop the capsule to absorb the laser energy and give as much of it to the fuel as possible?
The development of the capsule for a fusion power plant is quite complicated. First, we need it to be a perfect sphere so it compresses spherically. The materials also need to efficiently absorb the laser light so you can minimize the size of that laser.
You have to be able to cheaply and quickly mass produce these targets too. While NIF does 400 shots per year, we will need to do about 900,000 shots a day – about 10 per second. We’ll also have to efficiently remove the exploded target material from the reactor chamber so that it can be cleared for the next shot.
It’s a very complicated design that needs to bring together all the pieces of the power plant in a consistent way.
When you are designing these elements, what plays a bigger role – computer simulations or experiments?
Computer simulations play a large part in developing these designs. But one of the lessons that I learned from NIF was that, although the simulation codes are state of the art, you need very precise answers, and the codes are not quite good enough – experimental data play a huge role in optimizing the design. I expect the same will be true at Focused Energy.
A third factor that’s developing is artificial intelligence (AI) and machine learning. In fact, at Livermore, a project working on AI contributed to achieving gain for the first time in December 2022. I only see AI’s role in fusion getting bigger, especially once we are able to do higher repetition rate experiments, which will provide more training data.
What intellectual property (IP) does Focused Energy have in addition to that for the design of the Pearl target and the LightHouse plant?
We also have IP in the design of the lasers – they are not the same lasers as used at NIF. And I think there’ll be a lot of IP around how we fabricate the targets. After all, it’s pretty complicated to figure out how to build 900,000 targets a day at a reasonable cost.
We’ll see a lot of IP coming out of this project in those areas, but there’s also the act of putting it all together. How we integrate these things in order to make a successful plant is important.
What are the challenges of working with deuterium and tritium as materials for fusion?
We chose deuterium and tritium because they are the easiest isotopes to fuse, and they have been successfully demonstrated as fusion fuel by NIF.
Deuterium can be found naturally in sea water, but getting tritium – which is radioactive – is more complicated. We breed it from lithium. Our reactor designs have lithium in them, and the neutrons from the fusion reactions breed the tritium.
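For context, the underlying reactions are textbook nuclear physics rather than anything specific to Focused Energy’s design: the deuterium–tritium fusion reaction itself, and the main lithium reaction used to breed tritium from the fusion neutrons.

```latex
% D-T fusion and the principal tritium-breeding reaction (textbook values).
\begin{align*}
  \mathrm{D} + \mathrm{T} &\rightarrow {}^{4}\mathrm{He}\,(3.5\ \mathrm{MeV}) + n\,(14.1\ \mathrm{MeV}) \\
  {}^{6}\mathrm{Li} + n &\rightarrow {}^{4}\mathrm{He} + \mathrm{T} + 4.8\ \mathrm{MeV}
\end{align*}
```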
Making sure that we have enough tritium, and figuring out how to extract that material to use it for future shots, is a big task. We have to be able to breed enough tritium to keep the plant going.
To address this, we have a collaboration, funded by the US Department of Energy, with Savannah River National Laboratory in South Carolina. They have a lot of expertise in designing these tritium-extraction systems.
How will you capture the heat from the deuterium–tritium fusion reaction?
We will use a conventional steam cycle to convert the heat into electricity. It’s funny – we’ll have this very hi-tech way of producing heat, but at the end of the day, we will use a traditional system to produce the electricity from that heat.
So what’s the timeline on development?
Our plan is to have a pilot plant up by the end of the 2030s. It’s a fairly aggressive timeline given the things that we have to do. But that’s part of being a start-up – we have to take some risks and try to move quickly to achieve our goal.
To help that we have, in my view, a superpower – we have one foot in Europe and one foot in the US. There are a lot of opportunities between the two continents to partner with other companies, universities and governments. I think that makes us strong because we have access to some of the best talent from around the world.
How does working at Focused Energy compare with life as an academic at Lawrence Livermore?
There are a lot of similarities. My role now is to bring the knowledge and skills I learned at NIF to Focused Energy, so it’s been a natural transition.
In fact, there was a lot of pressure working at NIF. We were trying to move very quickly, so it’s actually very similar to working in a start-up like Focused Energy.
One of the big differences is the level of bureaucracy. Working for a government-funded lab meant there were lots of rules and paperwork, which takes up your time and you don’t always see the value in it.
In contrast, working for a small start-up means we can move more quickly because we don’t have as many of those kinds of constraints. Personally, I find that great because it leaves more time for the fun and interesting things – like trying to get fusion on the grid.
Are you still involved in academic research in any way?
As a firm, we are still out there collaborating with academics. Last year, for example, we gave four separate presentations at the American Physical Society Division of Plasma Physics meeting.
Active collaboration Debbie Callahan presenting the work of Focused Energy at the IEEE Pulsed Power and Plasma Science Conference in Berlin in June 2025. (Courtesy: Focused Energy)
I feel very strongly about peer review. Of course, publishing isn’t our number one priority, but we need feedback from others. We’re trying to do something that no-one’s done before, so it’s important to have our colleagues give us feedback on what we’re doing, point out mistakes we’re making or things we’re forgetting.
Working with universities and national labs in both Europe and the US is vital. Communicating with others in the field is important for us to get to where we want to go.
And of course, being an active part of the fusion community is good for recruitment too. We regularly give presentations at conferences that students attend. We meet those students and they learn about our work – and they might be future employees for our company.
What’s your advice for early-career physicists keen on joining the fusion industry?
There are so many opportunities right now, especially compared to the start of my career when the work was mainly just at universities or national labs. Nowadays, there are a lot of companies in the sector. Not all of them will survive because there’s only so much money, but there are still lots of opportunities. If you’re interested in fusion energy, go for it.
The field is always developing. There’s new stuff happening every day – and new problems. So if you like problem-solving, it’s great, especially if you want to do something good for the world.
There are also opportunities for people who are not plasma physicists. At Focused Energy we have people across so many fields – those who work on lasers, others who work on reactor design, some developing the AI and machine learning, and those who work on target physics, like me. To achieve fusion energy, we need physicists, engineers, mathematicians and computer scientists. We need researchers, technicians and operators. There’s going to be tremendous growth in this sector.
This winter in Bristol has been even gloomier than usual – so I was really looking forward to the Bristol Light Festival 2026. We went on the last evening of the event (28 February) and we were blessed with dry weather and warmish temperatures.
The festival featured 10 illuminated installations that were scattered throughout Bristol and the crowds were out in force to enjoy them. I wasn’t expecting to be thinking about physics as I wandered through town, but that’s exactly what I found myself doing at an installation called The Midnight Ballet by the British sculptor Will Budgett. Rather appropriately, it was located next to the HH Wills Physics Laboratory at the University of Bristol.
The display comprises seven sculptures that are illuminated from two different directions. The result is two very different images of ballerinas projected onto two screens (see image).
Art and science
So, why was I thinking about physics while admiring the work? To me the pieces embody – in a purely artistic way – the idea of superposition and measurement in quantum mechanics. A sculpture is capable of producing two different images (a superposition of states), but neither of these images is observable until a sculpture is illuminated from specific directions (the measurements).
Now, I know that this analogy is far from perfect. Measurements can be made simultaneously in two orthogonal planes, for example. But, Budgett’s beautiful artworks really made me think about quantum physics. Given the exhibit’s close proximity to the university’s physics department, I suspect I am not the only one.
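Written out loosely – and purely as a playful sketch of the analogy, not a claim about the physics of the artwork – the idea I had in mind looks something like this:

```latex
% A playful, loose rendering of the analogy (not rigorous quantum mechanics).
\[
  |\text{sculpture}\rangle \;=\; \alpha\,|\text{ballerina}_1\rangle + \beta\,|\text{ballerina}_2\rangle ,
\]
% with each direction of illumination playing the role of a "measurement"
% that picks out one of the two images.
```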
In 1942 physicists in Chicago, led by Enrico Fermi, famously produced the world’s first self-sustaining nuclear chain reaction. But it was to be another nine years before electricity was generated from fission for the first time. That landmark event occurred in 1951 when the Experimental Breeder Reactor-I in southern Idaho powered a string of four 200-watt light bulbs.
Our ability to harness nuclear power has been under constant development since then. In fact, according to the Nuclear Energy Association, a record 2667 terawatt-hours of electricity was generated by nuclear reactors around the world in 2024 – up 2.5% on the year before. But what, I wonder, is the potential of nuclear-powered transport?
A “nuclear engine” has many advantages, notably providing a vehicle with an almost unlimited supply of onboard power, with no need for regular refuelling. That’s particularly attractive for large ships and submarines, where fuel stops at sea are few and far between. It’s even better for spacecraft, which cannot refuel at all.
The downside is that a vehicle needs to be fairly large to carry even a small nuclear fission reactor – plus all the heavy shielding to protect passengers onboard. Stringent safety requirements also have to be met. If the vehicle were to crash or explode, the shield around the reactor needs to stay fully intact.
Ships and planes
Perhaps the best known transport application of nuclear power is at sea, where it’s used for warships, submarines and supercarriers. The world’s first nuclear-powered ship was the US Navy submarine Nautilus, which was launched in 1954. As the first vessel to have a nuclear reactor for propulsion, it revolutionized naval capabilities.
Compared to oil- or coal-fired ships, nuclear-powered vessels can travel far greater distances. All the fuel is in the reactor, which means there is no need for additional fuel to be carried onboard – or for exhaust chimneys or air intakes. Even better, the fuel is relatively cheap. But operating and infrastructure costs are steep, which is why almost all nuclear-powered marine vessels belong to the military.
There have, however, been numerous attempts to develop other forms of nuclear-powered transport. While a nuclear-powered aircraft might seem unlikely, the idea of flying non-stop to the other side of the world, without giving off any greenhouse-gas emissions, is appealing. Incredible as it might seem, airborne nuclear reactors were actually trialled in the mid-1950s.
That was when the United States Air Force converted a B-36 bomber to carry an operational air-cooled reactor, weighing around 18 tons. The aircraft was not actually nuclear powered but it was operated in this configuration to assess the feasibility of flying a nuclear reactor. The aircraft made a total of 47 flights between July 1955 and March 1957.
In 1955 the Soviet Union also ran a project to adapt a Tupolev Tu-95 “Bear” aircraft for nuclear power. However, because of the radiation hazard to the crew and the difficulties in providing adequate shielding, the project was soon abandoned. Neither the American nor the Soviet aircraft ever flew under nuclear power and – because the technology was inherently dangerous – it was never considered for commercial aviation.
Cars and trains
The same fate befell nuclear-powered trains. In 1954 the US nuclear physicist Lyle Borst, then at the University of Utah, proposed a 360-tonne locomotive carrying a uranium-235 fuelled nuclear reactor. Several other countries, including Germany, Russia and the UK, also had schemes for nuclear locos. But public concerns about safety could not be overcome and nuclear trains were never built. The $1.2m price tag of Borst’s train didn’t help either.
Nuclear nightmare Ford’s Nucleon car thankfully never got past the concept stage.
In the late 1950s, meanwhile, there were at least four theoretical nuclear-powered “concept cars”: the Ford Nucleon, the Studebaker Packard Astral, the Simca Fulgur and the Arbel Symétric. Based on the assumption that nuclear reactors would get much smaller over time, it was felt that such a car would need only relatively light radiation shielding. I certainly wouldn’t have wanted to take one of those for a spin; in the end, none got beyond the concept stage.
But perhaps the real success story of nuclear propulsion has been in space. Between 1967 and 1988, the Soviet Union pioneered the use of fission reactors for powering surveillance satellites, with over 30 nuclear-powered satellites being launched during that period. And since the early 1960s, radioisotopes have been a key source of energy in space.
Driven by the desire for faster, more capable and longer duration space missions to the Moon, Mars and beyond, China, Russia and the US are now investing significantly in the next generation of nuclear reactor technology for space propulsion, where solar or radioisotope power will be inadequate. Several options are on the table.
One is nuclear thermal propulsion, whereby energy from a fission reactor heats a propellant fuel. Another is nuclear electric propulsion, in which the fission energy ionizes a gas that gets propelled out the back of the spacecraft. Both involve using tiny nuclear reactors of the kind used in submarines, except they’re cooled by gas, not water. Key programmes are aiming for in-space demonstrations in the next 5–10 years.
Where next?
Many of the first ideas for nuclear-powered transport were dreamed up little more than a decade after the first self-sustaining chain reaction. The appeal was clear: compared to other fuels, nuclear power has a high energy density and lasts much longer. It also has zero carbon emissions. Nuclear power must have seemed a panacea for all our energy needs – and using it for cars and planes must have seemed an obvious next step.
However, there are major safety issues to address when nuclear sources are mobilized, from protecting passengers and crew, to ensuring appropriate safeguards should anything go wrong. And today we understand all too well the legacy of nuclear systems, from the safe disposal of spent fuel to the decommissioning of nuclear infrastructure and equipment.
Here on Earth, I think we’ve struck the right balance when it comes to using nuclear power, confining it to sea-faring vessels under the watchful eye of the military. But as human-crewed, deep-space exploration beckons, a whole new set of issues will arise. There will, of course, be lots of technical and engineering challenges.
How, for example, will we maintain, repair and decommission nuclear-powered spacecraft? How will we avoid endangering crews or polluting the environment, especially when craft take off? Who should set appropriate legislation – and how do we police those rules? When it comes to space, nuclear will help us “to boldly go”; but it will also require bold regulation.