Inner workings of the neutron illuminated by Jefferson Lab experiment

A cutting-edge experiment probing the internal structure of the neutron has been performed at Jefferson Lab in the US. An international collaboration used the CEBAF Large Acceptance Spectrometer (CLAS12) to study the scattering of high-energy electrons from a deuterium target. The team measured generalized parton distributions, which provide a detailed picture of how the neutron’s constituent quarks contribute to its momentum and spin. A key innovation was the use of the Central Neutron Detector, a specialized instrument enabling the direct detection of neutrons ejected from the target.

“The theory of the strong force, called quantum chromodynamics [QCD], that describes the interaction between quarks via the exchange of gluons, is too complex and cannot be used to compute the properties of bound states, such as nucleons [both protons and neutrons],” explains Silvia Niccolai, a research director at the French National Centre for Scientific Research, who proposed the idea for the new detector. “Therefore, we need to use unknown but experimentally measurable functions called generalized parton distributions that help us connect the properties of the nucleons (for instance their spin) to the dynamics of quarks and gluons.”

The parton model assumes that a nucleon contains point-like constituents called partons – which represent the quarks and gluons of QCD. By measuring parton distributions, physicists can examine correlations between a quark’s longitudinal momentum — how much of the nucleon’s total momentum it carries — and its transverse position within the nucleon. By analyzing these relationships for varying momentum values, scientists create a tomographic-like scan of the nucleon’s internal structure.
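
One standard way to express this tomography (the zero-skewness impact-parameter relation familiar from the GPD literature, not a formula quoted from the new paper) is to Fourier transform the generalized parton distribution H with respect to the transverse momentum transfer, giving the density of quarks carrying momentum fraction x at transverse position b⊥:

$$ q(x,\vec b_\perp) = \int \frac{d^2\vec\Delta_\perp}{(2\pi)^2}\, e^{-i\,\vec b_\perp\cdot\vec\Delta_\perp}\, H\big(x,\,\xi=0,\,t=-\vec\Delta_\perp^{\,2}\big). $$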

“This experiment is important because it directly accesses the structure of the neutron,” says Gerald Miller at the University of Washington, who was not involved in the study. “A neutron [outside of a nucleus] will decay in about 15 min, so it is difficult to study. The experiment in question used a novel technique to directly examine the neutron. They measured the neutron in the final state, which required new detection techniques.”

Separating quark contributions

Protons and neutrons consist of distinct combinations of up and down quarks: up-up-down for protons and down-down-up for neutrons. Each type of quark is associated with its own set of generalized parton distributions, and the overarching aim of the experimental effort is to determine distributions for both protons and neutrons. This would enable researchers to disentangle the distributions by quark type, offering deeper insights into the contributions of individual quark flavours to the properties of nucleons.

While these distributions are vital for understanding the strong interactions within both protons and neutrons, our understanding of protons is significantly more advanced. This disparity arises from the electric charge of protons, which facilitates their interaction with other charged particles, unlike electrically neutral neutrons. Additionally, proton targets are simpler to prepare, consisting solely of hydrogen atoms. In contrast, neutron experiments target deuterium nuclei, which comprise a neutron and a proton. The interaction between these two nucleons within the nucleus complicates the analysis of scattering data in neutron experiments.

To address these problems, the CLAS12 collaboration utilized the Central Neutron Detector, which was developed at France’s Laboratory of the Physics of the Two Infinities Irène Joliot-Curie (IJCLab). This allowed them to detect neutrons ejected from the deuterium target by high-energy electrons for the first time.

By combining neutron detection with the simultaneous measurement of scattered electrons and energetic photons produced during the interactions, the team gathered comprehensive data on particle momenta. This was used to calculate the generalized parton distributions of quarks inside neutrons.

Spin alignment

The CLAS12 team used electron beams with spins aligned both parallel and antiparallel to their momentum. This configuration resulted in slightly different interactions with the target, enabling the team to investigate subtle features of the generalized parton distributions related to angular momentum. By analyzing these details, they successfully disentangled the contributions of up and down quarks to the angular momentum of the neutron.

The team believes their findings could help address the longstanding “spin crisis”. This is the large body of experimental evidence suggesting that quarks and gluons contribute far less to the total spin of nucleons than initially expected.

“The sum of both the intrinsic spin of the quarks and gluons still doesn’t add up to the total spin,” says Adam Hobart, a researcher at IJCLab who led the data analysis for this experiment. “The only missing piece to complement the intrinsic spin of the quarks and the gluons is the orbital angular momentum of the quarks.”
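
The connection Hobart describes is usually written through Ji’s sum rule (the standard relation linking generalized parton distributions to angular momentum, not a result specific to this experiment): the total angular momentum carried by quarks of flavour q is

$$ J^q = \frac{1}{2}\int_{-1}^{1} dx\, x\,\big[H^q(x,\xi,0) + E^q(x,\xi,0)\big], $$

so measuring the distributions H and E for up and down quarks in both protons and neutrons pins down J^q, and subtracting the measured intrinsic quark spin isolates the orbital contribution.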

The team plan to perform a new, more accurate experiment that will involve firing electrons at a polarized target in which the nuclear spins of the deuterium all point in the same direction. This should allow the physicists to extract all possible generalized parton distributions from the scattering data.

“More data are needed to get a fuller picture, but this experiment can be thought of as a big step in a huge experimental program that is needed to get a complete understanding,” concludes Miller. “I think that this work will clearly influence future studies. Others will try to build on this experiment to expand the kinematic reach.”

The research is described in Physical Review Letters.

Immiscible ice layers may explain why Uranus and Neptune lack magnetic poles

When the Voyager 2 spacecraft flew past Uranus and Neptune in 1986 and 1989, it detected something strange: neither of these “ice giant” planets has a well-defined north and south magnetic pole. This absence has remained mysterious ever since, but simulations performed at the University of California, Berkeley (UCB) in the US have now suggested an explanation. According to UCB planetary scientist Burkhard Militzer, the disorganized magnetic fields of Uranus and Neptune may arise from a separation of the icy fluids that make up their interiors. The theory could be tested in laboratory experiments of fluids at high pressures, as well as by a proposed mission to Uranus in the 2040s.

On Earth, the dipole magnetic field that loops from the North Pole to the South Pole arises from convection in the planet’s liquid-iron outer core. Since Uranus and Neptune lack such a dipole field, this implies that the convective movement of material in their interiors must be very different.

In 2004, planetary scientists Sabine Stanley and Jeremy Bloxham suggested that the planets’ interiors might contain immiscible layers. This separation would make widespread convection impossible, preventing a global dipolar magnetic field from forming, while convection in just one layer would produce the disorganized magnetic field that Voyager 2 observed. However, the nature of these non-mixing layers remained unexplained, with progress hampered in part by a lack of data.

“Since both planets have been visited by only one spacecraft (Voyager 2), we do not have many measurements to analyse,” Militzer says.

Two immiscible fluids

To investigate conditions deep beneath Uranus and Neptune’s icy surfaces, Militzer developed computer models to simulate how a mixture of water, methane and ammonia will behave at the temperatures (above 4750 K) and pressures (above 3 × 10⁶ atmospheres) that prevail there. The results surprised him. “One morning, I opened my laptop,” he recalls. “When I started analysing my latest simulations, I could not believe my eyes. An initially homogeneous mixture of water, methane and ammonia had separated into two distinct layers.”
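
A textbook caricature of why a hot, dense mixture can spontaneously demix (a regular-solution sketch for illustration only, not Militzer’s ab initio calculation) writes the free energy of mixing for a component fraction x as

$$ \Delta G_{\mathrm{mix}} = RT\big[x\ln x + (1-x)\ln(1-x)\big] + \Omega\,x(1-x), $$

where Ω parametrizes how unfavourable unlike-neighbour interactions are; once Ω exceeds 2RT the homogeneous state becomes unstable at intermediate compositions and the fluid separates into two coexisting layers.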

The upper layer, he explains, is thin, rich in water and convecting, which allows it to generate the disordered magnetic field. The lower layer is magnetically inactive and composed of carbon, nitrogen and hydrogen. “This had never been observed before and I could tell right then that this result might allow us to understand what has been going on in the interiors of Uranus and Neptune,” he says.

A plastic polymer-like and a water-rich layer

Militzer’s model, which he describes in PNAS, shows that the hydrogen content in the methane-ammonia mixture gradually decreases with depth, transforming into a C-N-H fluid. This C-N-H layer is almost like a plastic polymer, Militzer explains, and cannot support even a disorganized magnetic field – unlike the upper, water-rich layer, which likely convects.

A future mission to Uranus with the right instruments on board could provide observational evidence for this structure, Militzer says. “I would advocate for a Doppler imager so we can detect the planet’s natural oscillation frequencies,” he tells Physics World. Though such instruments are expensive and heavy, he says they are essential to detecting the presence of the predicted two ice layers in Uranus’ interior: “Like one can distinguish between an oboe and a clarinet, these frequencies can tell [us] about a planet’s interior structure.”

A follow-up to Voyager 2 could also reveal how the ice giants’ structures have evolved since they formed 4.5 billion years ago. Initially, their interiors would have contained only a single ice layer, and this layer would have generated a strong dipolar magnetic field with well-defined north and south poles. “Then, at some point, this ice separated into two distinct layers and their magnetic field switched from dipolar to disordered fields that we see today,” Militzer explains.

Determining when this switch occurred would help us understand not only Uranus and Neptune, but also ice giants orbiting stars other than our Sun. “The most common exoplanets discovered to date are around the same size as Uranus and Neptune, so when we observe the magnetic field of such ‘sub-Neptune’ exoplanets in the future, we might be able to say something about their age,” Militzer says.

In the near term, Militzer hopes that experimentalists will be able to test his theory in fluid systems at extremely high temperatures and pressures that mimic the proportions of elements found on Uranus and Neptune. But his long-term hopes are pinned on a new mission that could detect the predicted layers directly. “While I will have long retired when such a detection might eventually be made, I would be so happy to see it in my lifetime,” he says.

Quantum uncertainty and wave–particle duality are equivalent, experiment shows

The orbital angular momentum states of light have been used to relate quantum uncertainty to wave–particle duality. The experiment was done by physicists in Europe and confirms a 2014 theoretical prediction that a minimum level of uncertainty must always result when a measurement is made on a quantum object – regardless of whether the object is observed as a wave, as a particle, or anywhere in between.

In the famous double-slit thought experiment, quantum particles such as electrons are fired one-by-one at two adjacent slits in a barrier. As time progresses, an interference pattern will build up on a detector behind the barrier. This is an example of wave–particle duality in quantum mechanics, whereby each particle travels through both slits as a wave that interferes with itself. However, if the trajectories of the particles are observed such that it is known which slit each particle travelled through, no interference pattern is seen. Since the 1970s, several different versions of the experiment have been done in the laboratory – confirming the quantum nature of reality.

Richard Feynman once described this as “a phenomenon which is impossible, absolutely impossible, to explain in any classical way, and which has in it the heart of quantum mechanics. In reality, it contains the only mystery [of quantum mechanics].” This phenomenon is known as measurement uncertainty.

Partial particles

In 1979, William Wootters and his colleague Wojciech Zurek at the University of Texas at Austin showed that wave–particle duality is not a one-or-the-other phenomenon. Instead it is possible to observe partial particle and partial wave-like behaviour, with a trade-off between the two.

This echoes another baffling element of quantum mechanics, namely preparation uncertainty. This is typified by Werner Heisenberg’s uncertainty principle. This states that one cannot know the position and momentum of a quantum object beyond a certain degree of accuracy, and the more one knows about one, the more uncertain the other becomes.

Despite Feynman’s contention that quantum mechanics contains only one real mystery, there is no obvious theoretical connection between measurement uncertainty and preparation uncertainty. In 2014, however, Patrick Coles and colleagues at the National University of Singapore showed theoretically that the two are equivalent. Until now, that equivalence had never been demonstrated experimentally.

Conjugate variables

In the new work, Guilherme Xavier and colleagues at Linköping University in Sweden set out to test the relationship between the visibility and the distinguishability of opposite states – which according to Coles’ predictions should be conjugate variables analogous to position and momentum. They sent highly attenuated, mostly single-photon laser pulses in two possible orthogonal orbital angular momentum states down an optical fibre to an input beamsplitter. Photons with opposite angular momenta emerged through different output fibres.

The researchers then used a phase modulator to add a variable phase delay to photons travelling down one of the paths. They then directed the paths to meet again at a second, tunable beamsplitter.

By placing a second modulator before the tunable beamsplitter and thereby adjusting the phase with which the two paths met, it was possible to tune the extent to which the paths recombined. This allowed them to control the extent to which the second beamsplitter actually behaved as a beamsplitter.

“When the beamsplitter is fully inserted you get interference back – this corresponds to a value in the modulator of π/2,” explains Xavier. “When you have zero in the modulator the upper path will always go to one detector and the lower path will always go to the other.”
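
The two settings Xavier describes can be reproduced with a few lines of single-photon interferometer algebra. The sketch below is illustrative only: it is not the team’s analysis code, and the balanced input split, the mixing angle theta of the tunable second beamsplitter and the modulator phase phi are assumed parameters rather than values from the experiment.

```python
# Generic two-path (Mach-Zehnder-style) model of the setup described above.
# Illustrative only: balanced input split, mixing angle `theta` and phase
# `phi` are assumptions, not parameters taken from the experiment.
import numpy as np

def detector_probabilities(phi, theta):
    """Single photon split equally between two paths, phase `phi` applied to
    one path, then a second beamsplitter of mixing angle `theta`
    (theta = pi/4 is a full 50:50 splitter; theta = 0 removes it)."""
    upper, lower = 1/np.sqrt(2), 1j*np.exp(1j*phi)/np.sqrt(2)
    out1 = np.cos(theta)*upper + 1j*np.sin(theta)*lower
    out2 = 1j*np.sin(theta)*upper + np.cos(theta)*lower
    return abs(out1)**2, abs(out2)**2

phis = np.linspace(0, 2*np.pi, 200)
for theta in (np.pi/4, np.pi/8, 0.0):
    p1 = np.array([detector_probabilities(phi, theta)[0] for phi in phis])
    visibility = (p1.max() - p1.min()) / (p1.max() + p1.min())
    print(f"theta = {theta:.3f} rad: fringe visibility V = {visibility:.2f}")
# theta = pi/4 gives V = 1 (full interference, wave-like behaviour);
# theta = 0 gives V = 0 (each detector maps onto one path, particle-like).
```

Sweeping theta between these limits interpolates smoothly between the wave and particle pictures, which is exactly the freedom the tunable beamsplitter gives the experimenters.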

Fixed lower bound

This latter case corresponds to a particle picture, but it provides no information about which path a particular particle has taken through the interferometer. The only way one can obtain that information is to prevent light in one of the two paths from entering the second beamsplitter completely – the equivalent of blocking one of the slits in the double-slit experiment. However, in this case, half of the photons are never detected at all. There is thus an unbeatable trade-off between distinguishability and visibility. They found that, no matter what phase they chose, there was a fixed lower bound on the measurement uncertainty that was consistent with the theory presented in 2014 by Coles and colleagues.
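
This trade-off is often summarized by the Englert–Greenberger–Yasin duality relation (quoted here as the standard textbook statement rather than from the paper itself):

$$ D^2 + V^2 \le 1, $$

where D is the which-path distinguishability and V is the fringe visibility. The 2014 result by Coles and colleagues showed that bounds of this type can be recast as entropic measurement-uncertainty relations, which is the equivalence the Linköping measurements test.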

The Linköping team plans to develop practical applications of its technology. “We can change the settings quite fast,” says Xavier, “so our goal is to look at the implementation of some actual quantum communication protocols using these kinds of measurements – we are looking at some delayed choice experiments based on this setup.”

Theoretical physicist Jonas Maziero of the Federal University of Santa Maria in Brazil is impressed by the work. “The experiment is innovative, it’s precise, it agrees very well with theory and it confirms an important result that’s been in the literature for more than ten years now,” he says.

He cautions, however, that the work does not fully confirm Coles’ predictions. “The result reported [by Xavier and colleagues] applies to distinguishability-based complementarity relations that use which-path detectors to quantify the particle-like behaviour of the quantum system. There are others based on predictability and using entanglement that are not contained within this framework.” Extending the research to try to cover all cases would be interesting follow-up work, he says.

The research is described in Science Advances.

Space agency leaders express fears and hopes for the future

“The world is more volatile, the world is more unpredictable, and in many respects the world is a more dangerous place than it has been for a long time.”

In his opening speech at the 20th Appleton Space Conference on 5 December, UK Space Agency (UKSA) deputy chief executive Chris White-Horne seemed determined to out-gloom the leaden skies above the ESA conference centre in Harwell, Oxfordshire. Speaking to an audience of academics and industry professionals, White-Horne ticked off a long list of ways that this more dangerous world might affect the space sector and the people who rely on it.

“We have built an almost insidious dependence on space,” he observed. Severe space weather, accidents, system failures or deliberate damage by an adversary could all trigger a loss of satellite-based position, navigation and timing services. Even a single day without modern essentials like GPS would wreak havoc on the economy, while a longer outage would be devastating. “A day without space is just the beginning,” he warned, adding that the real challenge would start on the second or third day, when supply chains would be disrupted worldwide. “We saw in COVID how very fragile some of these systems are.”

While some might prefer to leave contingency planning to military officials, White-Horne argued that the vulnerability of space infrastructure makes it a challenge for the entire sector – government, academia, and manufacturers and operators of space systems and applications alike. “Very few people can say, ‘It’s not my problem’,” he said.

A changing sector

In his keynote speech later in the day, White-Horne’s boss, UKSA chief executive Paul Bate, struck a more hopeful note by focusing on changes in the space sector since 2004, when the first Appleton Space Conference was held. In that year, the world managed just 54 orbital launches, including 18 by Russia and 16 by the US. By 2024, the number had risen to 225 – and counting. This figure includes 118 launches by a private company, SpaceX, which did not achieve its first orbit until 2008. “How we get into space has changed dramatically,” Bate said.

Paul Bate delivering his keynote speech at the Appleton Space Conference, with Sarah Beardsley standing off to one side.

Another positive change Bate highlighted is the industry’s demographics. At the start of the conference, Sarah Beardsley, who leads the Rutherford Appleton Laboratory’s space division (STFC RAL Space), displayed a photo of the organizers of the first Appleton Space Conference. The photo showed a smiling group of around a dozen men in dark suits and ties. “We let women in now,” she quipped, to general laughter.

The UKSA’s own demographics bear this out. According to Bate, 46% of the agency’s staff are women, while a fifth come from ethnic minorities. Still, Bate, who is white, acknowledged that the agency needs to do more to attract diverse talent to higher-level roles: “I spend time in far too many meetings with people who look just like me.”

Taken as a whole, Bate said that the UK space sector remains 86% white and 64% male, while the percentage of space-sector workers who were eligible for free school meals as children is half the national average. While some may see this as irrelevant, Bate argued that the opposite is true. Space, he said, is “a team sport” that needs to draw talent from everywhere, and its leaders must embrace diversity of thought and experience if they want to solve big, difficult problems. “It’s very tempting to see science as aloof from societal change,” he said. “The opposite is true.”

Laser beam casts a shadow in a ruby crystal

Particles of light – photons – are massless, so they normally pass right through each other. This generally means they can’t cast a shadow. In a new work, however, physicist Jeff Lundeen of the University of Ottawa, Canada and colleagues found that this counterintuitive behaviour can, in fact, happen when a laser beam is illuminated by another light source as it passes through a highly nonlinear medium. As well as being important for basic science, the work could have applications in laser fabrication and imaging.

The light-shadow experiment began when physicists led by Raphael Akel Abrahao sent a high-power beam of green laser light through a cube-shaped ruby crystal. They then illuminated this beam from the side with blue light and observed that the beam cast a shadow on a piece of white paper. This shadow extended through an entire face of the crystal. Writing in Optica, they note that “under ordinary circumstances, photons do not interact with each other, much less block each other as needed for a shadow.” What was going on?

Photon-photon interactions

The answer, they explain, boils down to some unusual photon-photon interactions that take place in media that absorb light in a highly nonlinear way. While several materials fit this basic description, most become saturated at high laser intensities. This means they become more transparent in the presence of a strong laser field, producing an “anti-shadow” that is even brighter than the background – the opposite of what the team was looking for.

What they needed, instead, was a material that absorbs more light at higher optical intensities. Such behaviour is known as “reverse saturation of absorption” or “saturable transmission”, and it only occurs if four conditions are met. Firstly, the light-absorbing system needs to have two electronic energy levels: a ground state and an excited state. Secondly, the transition from the ground to the excited state must be less strong (technically, it must have a smaller cross-section) than the transition from the first excited state to a higher excited state. Thirdly, after the material absorbs light, neither the first nor the second excited states should decay back to other levels when the light is re-emitted. Finally, the incident light should only saturate the first transition.

Diagram showing how the green laser increases the optical absorption of the blue illuminating laser beam, alongside a photo of the setup

That might sound like a tall order, but it turns out that ruby fits the bill. Ruby is an aluminium oxide crystal that contains impurities of chromium atoms. These impurities distort its crystal lattice and give it its familiar red colour. When green laser light (532 nm) is applied to ruby, it drives an electronic transition from the ground state (denoted 4A2) to an excited state 4T2. This excited state then decays rapidly via phonons (vibrations of the crystal lattice) to the 2E state.

At this point, the electrons absorb blue light (450 nm) and transition from 2E to a different excited state, denoted 2T1. While electrons in the 4A2 state could, in principle, absorb blue light directly, without any intermediate step, the absorption cross-section of the transition from 2E to 2T1 is larger, Abrahao explains.

The result is that in the presence of the green laser beam, the ruby absorbs more of the illuminating blue light. This leaves behind a lower-optical-intensity region of blue illumination within the ruby – in other words, the green laser beam’s shadow.
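
The mechanism can be captured in a toy numerical model. The sketch below is purely illustrative: it is not the authors’ model, and the cross-sections, saturation intensity and crystal thickness are made-up placeholder values. A green pump moves population into the metastable 2E level, which is assumed to absorb blue light more strongly than the ground state, so the pumped region transmits less blue light and appears as a shadow.

```python
# Toy model of reverse saturable absorption in ruby. Illustrative only: not
# the authors' model; all parameters below are made-up placeholders.
import numpy as np

sigma_ground = 1.0   # relative blue absorption cross-section from the ground state (assumed)
sigma_2E = 4.0       # larger relative cross-section from the metastable 2E state (assumed)
alpha_0 = 0.5        # blue absorption coefficient with no green pump (arbitrary units)
L = 1.0              # crystal thickness (arbitrary units)

def blue_transmission(I_green, I_sat=1.0):
    """Fraction of blue light transmitted for a given green pump intensity,
    using a simple steady-state estimate of the 2E population."""
    excited_fraction = (I_green/I_sat) / (1 + I_green/I_sat)
    alpha = alpha_0 * ((1 - excited_fraction)*sigma_ground
                       + excited_fraction*sigma_2E) / sigma_ground
    return np.exp(-alpha*L)   # Beer-Lambert attenuation through the crystal

T_unpumped = blue_transmission(0.0)
for I_green in (0.5, 1.0, 5.0):
    contrast = 1 - blue_transmission(I_green)/T_unpumped
    print(f"green intensity {I_green}: shadow contrast = {contrast:.2f}")
# More green light -> more population in 2E -> more blue absorbed -> darker
# shadow, consistent with the shadow darkening as the laser power rises.
```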

Shadow behaves like an ordinary shadow

This laser shadow behaves like an ordinary shadow in many respects. It follows the shape of the object (the green laser beam) and conforms to the contours of the surfaces it falls on. The team also developed a theoretical model that predicts that the darkness of the shadow will increase as a function of the power of the green laser beam. In their experiment, the maximum contrast was 22% – a figure that Abrahao says is similar to a typical shadow on a sunny day. He adds that it could be increased in the future.

Lundeen offers another way of looking at the team’s experiment. “Fundamentally, a light wave is actually composed of a hybrid particle made up of light and matter, called a polariton,” he explains. “When light travels in a glass or crystal, both aspects of the polariton are important and, for example, explain why the wave travels more slowly in these media than in vacuum. In the absence of either part of the polariton, either the photon or atom, there would be no shadow.”

Strictly speaking, it is therefore not massless light that is creating the shadow, but the material component of the polariton, which has mass, adds Abrahao, who is now a postdoctoral researcher at Brookhaven National Laboratory in the US.

As well as helping us to better understand light-matter interactions, Abrahao tells Physics World that the experiment “could also come in useful in any device in which we need to control the transmission of a laser beam with another laser beam”. The team now plans to search for other materials and combinations of wavelengths that might produce a similar “laser shadow” effect.

What physics metaphor do you think needs to be experimentally verified?

A few months ago, I received an e-mail from Mike Wilson, a professor of mathematics at the University of Vermont, which challenged my use of a physics metaphor. He found it in my 1986 book The Second Creation: Makers of the Revolution in 20th-Century Physics, where my co-author Charles Mann and I explained how accelerators slam particles into targets inside detectors and track fragments for clues about their structure. In a parenthetical remark, we likened this process “to firing a gun at a watch to see what is inside”.

Wilson was dubious. “Has anyone ever tried that?” he asked. We had supposed that, in principle, one could “reverse engineer” the watch by applying conservation of momentum to the debris. But Wilson wondered if you could really deduce a watch’s internal structure from such pieces. Mann and I hadn’t done the watch experiment, nor had we any intention to. Why bother? We’d painted an imaginable picture.

Wilson was unconvinced. “Such experiments,” he wrote, “could give a valuable check on the confidence we put in physicists’ statements about what goes on inside atoms”. His remark made me wonder if other physics metaphors could withstand empirical verification. I first thought of the one often wheeled out to explain the Higgs field and the Higgs boson. It was devised in 1993 by David Miller, a physicist at University College London, after the then UK science minister William Waldegrave promised a bottle of champagne for the best explanation of the Higgs boson on a single A4 sheet of paper (Physics World June 2024 p27).

The metaphor, which Peter Higgs admitted was the least objectionable of all those posited to describe his eponymous boson, begins with a room full of political-party workers. If a person nobody knows walks through, people keep their positions – that’s like a massless boson. But when a celebrity walks through (Miller envisaged ex-British prime minister Margaret Thatcher), people cluster around that person, who then has to move more slowly – that’s like being massive.

Don Lincoln, a physicist at Fermilab in the US, once made an animated video of this metaphor. Attempting to make it more palatable to physicists, he cast Higgs as the entrant, but the video nevertheless posts the disclaimer “ANALOGY!” Still, I wonder what would have happened if Waldegrave had empirically tested Miller’s metaphor using different kinds of celebrities.

Claim to fame

I’ve come within about two metres of several celebrities: filmmaker Spike Lee and actor Denzel Washington (I was an extra in a scene in their movie Malcolm X); jazz musician Sun Ra (I emceed one of his concerts); and Mia Farrow and Stephen Sondheim (I sat next to them in a club). The vibe in the room was very different in each case – sometimes with worshippers, sometimes with autograph hounds, and sometimes with people holding back at an awed and respectful distance. If hadronic mass depended on the vibe in the room, the universe would be a quite different place.

Gino Elia, a graduate philosophy student at Stony Brook University, ticked off a few other untested metaphors. He told me how Blake Stacey, a physicist at the University of Massachusetts, Boston, once described non-overlapping probability distributions as relatives staying away at Thanksgiving. In Drawing Theories Apart, David Kaiser – a science historian at the Massachusetts Institute of Technology – pictured the complementary variables of energy and time “as a kid running out of the classroom when the lights are off (breaking conservation of energy) and the kid being in their seat when the teacher turns the light back on”.

The grandest, most extended, and awe-inspiring metaphor I have ever come across is at the start of chapter 20 of Leo Tolstoy’s War and Peace, which describes Moscow just before its occupation by Napoleon’s forces. “It was empty,” Tolstoy writes, “in the sense that a dying queenless hive is empty”. The beekeeper sees only “hundreds of dull, listless, and sleepy shells of bees.” They have almost all perished, reeking of death. “Only a few of them still move, rise, and feebly fly to settle on the enemy’s hand, lacking the spirit to die stinging him; the rest are dead and fall as lightly as fish scales,” Tolstoy concludes.

I don’t know a thing about beehives, but Tolstoy did because he was a beekeeper. Even if he didn’t, I don’t care. The metaphor worked for me, vivid and compelling.

The critical point

Early in 1849 the British poet Matthew Arnold published a poem entitled “The Forsaken Merman”, in which the merman, the king of the sea, has married an earthly woman. At one point, she is at her spinning wheel when she remembers her former world. The “shuttle falls” from her hand as she decides to leave him. An alert friend – fellow poet Arthur Clough – wrote to Arnold that a shuttle is used in weaving and Arnold surely meant spindle.

Arnold realized Clough was right, insisted his publishers revise the poem, and when it was republished a quarter-century later it read that the “spindle drops” from the woman’s hand. While Arnold wrote to Clough that he had a “great poetical interest” in both weaving and spinning, he admitted apologetically that his error was due to a “default of experience”.

That flabbergasted me. Arnold writes a poem about a merman and then worries about the difference between a shuttle and a spindle? Furthermore, the person who picked it up was a fellow poet, not a weaver or spinster? Arnold’s public seem not to have noticed the error – there is no record of anybody complaining – and only his poet-friend did? More importantly, does any of this really matter?

Love is not a rose – despite what Robert Burns or Neil Young might have claimed. Nor is a man a wolf – despite the ancient Latin proverb. So if it’s acceptable to use incorrect metaphors in literature and music, then why not in physics? Are they any less effective? E-mail me your favourite physics metaphors and let me know if they have been empirically tested and why it matters. I’ll write about your responses in a future column.

Virtual patient populations enable more inclusive medical device development

Medical devices are thoroughly tested before being introduced into the clinic. But traditional testing approaches do not fully account for the diversity of patient populations. This can result in the launch to market of devices that may underperform in some patient subgroups or even cause harm, with often devastating consequences.

Aiming to solve this challenge, University of Leeds spin-out adsilico is working to enable more inclusive, efficient and patient-centric device development. Launched in 2021, the company is using computational methods pioneered in academia to revolutionize the way that medical devices are developed, tested and brought to market.

Sheena Macpherson, adsilico’s CEO, talks to Tami Freeman about the potential of advanced modelling and simulation techniques to help protect all patients, and how in silico trials could revolutionize medical device development.

What procedures are required to introduce a new medical device?

Medical devices currently go through a series of testing phases before reaching the market, including bench testing, animal studies and human clinical trials. These trials aim to establish the device’s safety and efficacy in the intended patient population. However, the patient populations included in clinical trials often do not adequately represent the full diversity of patients who will ultimately use the device once it is approved.

Why does this testing often exclude large segments of the population?

Traditional clinical trials tend to underrepresent women, ethnic minorities, elderly patients and those with rare conditions. This exclusion occurs for various reasons, including restrictive eligibility criteria, lack of diversity at trial sites, socioeconomic barriers to participation, and implicit biases in trial design and recruitment.

As a result, the data generated from these trials may not capture important variations in device performance across different subgroups.

This lack of diversity in testing can lead to devices that perform sub-optimally or even dangerously in certain demographic groups, with potentially life-threatening device flaws going undetected until the post-market phase when a much broader patient population is exposed.

Can you describe a real-life case of insufficient testing causing harm?

A poignant example is the recent vaginal mesh scandal. Mesh implants were widely marketed to hospitals as a simple fix for pelvic organ prolapse and urinary incontinence, conditions commonly linked to childbirth. However, the devices were often sold without adequate testing.

As a result, debilitating complications went undetected until the meshes were already in widespread use. Many women experienced severe chronic pain, mesh eroding into the vagina, inability to walk or have sex, and other life-altering side effects. Removal of the mesh often required complex surgery. A 2020 UK government inquiry found that this tragedy was further compounded by an arrogant culture in medicine that dismissed women’s concerns as “women’s problems” or a natural part of aging.

This case underscores how a lack of comprehensive and inclusive testing before market release can devastate patients’ lives. It also highlights the importance of taking patients’ experiences seriously, especially those from demographics that have been historically marginalized in medicine.

How can adsilico help to address these shortfalls?

adsilico is pioneering the use of advanced computational techniques to create virtual patient populations for testing medical devices. By leveraging massive datasets and sophisticated modelling, adsilico can generate fully synthetic “virtual patients” that capture the full spectrum of anatomical diversity in humans. These populations can then be used to conduct in silico trials, where devices are tested computationally on the virtual patients before ever being used in a real human. This allows identification of potential device flaws or limitations in specific subgroups much earlier in the development process.

How do you produce these virtual populations?

Virtual patients are created using state-of-the-art generative AI techniques. First, we generate digital twins – precise computational replicas of real patients’ anatomy and physiology – from a diverse set of fully anonymized patient medical images. We then apply generative AI to computationally combine elements from different digital twins, producing a large population of new, fully synthetic virtual patients. While these AI-generated virtual patients do not replicate any individual real patient, they collectively represent the full diversity of the real patient population in a statistically accurate way.
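
As a rough illustration of this kind of pipeline, the sketch below builds a simple statistical shape model from placeholder landmark data and samples new synthetic anatomies from it. This is not adsilico’s actual generative-AI method; the PCA-style model, the landmark representation and all numbers are assumptions made only to show the idea of combining real examples into fully synthetic virtual patients.

```python
# Minimal sketch of building "virtual patients" by statistically recombining
# real, anonymized examples. NOT adsilico's pipeline: a simple PCA-style
# shape model, with random numbers standing in for landmark coordinates
# that would really be extracted from MRI or CT images.
import numpy as np

rng = np.random.default_rng(0)

# Placeholder data: each digital twin summarized by 30 anatomical landmark values
n_patients, n_features = 200, 30
real_shapes = rng.normal(size=(n_patients, n_features))

# Fit the shape model: mean anatomy plus principal modes of variation
mean_shape = real_shapes.mean(axis=0)
centered = real_shapes - mean_shape
_, S, Vt = np.linalg.svd(centered, full_matrices=False)
n_modes = 10
modes = Vt[:n_modes]                          # principal shape-variation modes
mode_std = S[:n_modes] / np.sqrt(n_patients - 1)

def sample_virtual_patient():
    """Draw mode weights from the population distribution to build a new,
    fully synthetic anatomy that matches no individual real patient."""
    weights = rng.normal(scale=mode_std)
    return mean_shape + weights @ modes

virtual_cohort = np.array([sample_virtual_patient() for _ in range(1000)])
print(virtual_cohort.shape)   # (1000, 30): a synthetic cohort for in silico testing
```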

And how are they used in device testing?

Medical devices can be virtually implanted and simulated in these diverse synthetic anatomies to study performance across a wide range of patient variations. This enables comprehensive virtual trials that would be infeasible with traditional physical or digital twin approaches. Our solution ensures medical devices are tested on representative samples before ever reaching real patients. It’s a transformative approach to making clinical trials more inclusive, insightful and efficient.

In the cardiac space, for example, we might start with MRI scans of the heart from a broad cohort. We then computationally combine elements from different patient scans to generate a large population of new virtual heart anatomies that, while not replicating any individual real patient, collectively represent the full diversity of the real patient population. Medical devices such as stents or prosthetic heart valves can then be virtually implanted in these synthetic patients, and various simulations run to study performance and safety across a wide range of anatomical variations.

How do in silico trials help patients?

The in silico approach using virtual patients helps protect all patients by allowing more comprehensive device testing before human use. It enables the identification of potential flaws or limitations that might disproportionately affect specific subgroups, which can be missed in traditional trials with limited diversity.

This methodology also provides a way to study device performance in groups that are often underrepresented in human trials, such as ethnic minorities or those with rare conditions. By computationally generating virtual patients with these characteristics, we can proactively ensure that devices will be safe and effective for these populations. This helps prevent the kinds of adverse outcomes that can occur when devices are used in populations on which they were not adequately tested.

Could in silico trials replace human trials?

In silico trials using virtual patients are intended to supplement, rather than fully replace, human clinical trials. They provide a powerful tool for both detecting potential issues early and also enhancing the evidence available preclinically, allowing refinement of designs and testing protocols before moving to human trials. This can make the human trials more targeted, efficient and inclusive.

In silico trials can also be used to study device performance in patient types that are challenging to sufficiently represent in human trials, such as those with rare conditions. Ultimately, the combination of computational and human trials provides a more comprehensive assessment of device safety and efficacy across real-world patient populations.

Will this reduce the need for studies on animals?

In silico trials have the potential to significantly reduce the use of animals in medical device testing. Currently, animal studies remain an important step for assessing certain biological responses that are difficult to comprehensively model computationally, such as immune reactions and tissue healing. However, as computational methods become increasingly sophisticated, they are able to simulate an ever-broader range of physiological processes.

By providing a more comprehensive preclinical assessment of device safety and performance, in silico trials can already help refine designs and reduce the number of animals needed in subsequent live studies.

Ultimately, could this completely eliminate animal testing?

Looking ahead, we envision a future where advanced in silico models, validated against human clinical data, can fully replicate the key insights we currently derive from animal experiments. As these technologies mature, we may indeed see a time when animal testing is no longer a necessary precursor to human trials. Getting to that point will require close collaboration between industry, academia, regulators and the public to ensure that in silico methods are developed and validated to the highest scientific and ethical standards.

At adsilico, we are committed to advancing computational approaches in order to minimize the use of animals in the device development pipeline, with the ultimate goal of replacing animal experiments altogether. We believe this is not only a scientific imperative, but an ethical obligation as we work to build a more humane and patient-centric testing paradigm.

What are the other benefits of in silico testing?

Beyond improving device safety and inclusivity, the in silico approach can significantly accelerate the development timeline. By frontloading more comprehensive testing into the preclinical phase, device manufacturers can identify and resolve issues earlier, reducing the risk of costly failures or redesigns later in the process. The ability to generate and test on large virtual populations also enables much more rapid iteration and optimization of designs.

Additionally, by reducing the need for animal testing and making human trials more targeted and efficient, in silico methods can help bring vital new devices to patients faster and at lower cost. Industry analysts project that by 2025, in silico methods could enable 30% more new devices to reach the market each year compared with the current paradigm.

Are in silico trials being employed yet?

The use of in silico methods in medicine is rapidly expanding, but still nascent in many areas. Computational approaches are increasingly used in drug discovery and development, and regulatory agencies like the US Food and Drug Administration are actively working to qualify in silico methods for use in device evaluation.

Several companies and academic groups are pioneering the use of virtual patients for in silico device trials, and initial results are promising. However, widespread adoption is still in the early stages. With growing recognition of the limitations of traditional approaches and the power of computational methods, we expect to see significant growth in the coming years. Industry projections suggest that by 2025, 50% of new devices and 25% of new drugs will incorporate in silico methods in their development.

What’s next for adsilico?

Our near-term focus is on expanding our virtual patient capabilities to encompass an even broader range of patient diversity, and to validate our methods across multiple clinical application areas in partnership with device manufacturers.

Ultimately, our mission is to ensure that every patient, regardless of their demographic or anatomical characteristics, can benefit from medical devices that are thoroughly tested and optimized for someone like them. We won’t stop until in silico methods are a standard, integral part of developing safe and effective devices for all.

Africa targets 2035 start date for synchrotron construction

Officials at the African Light Source (AfLS) Foundation are targeting 2035 as the start of construction for the continent’s first synchrotron light source. On 9 December the foundation released its “geopolitical” conceptual design report, which aims to encourage African leaders to pledge the $2bn that will be needed to build and then operate the facility for a decade.

There are more than 50 synchrotron light sources around the world, but Africa is the only habitable continent without one. These devices use magnets to steer electrons travelling at close to the speed of light around a circular ring; as the electrons are bent, they emit intense beams of synchrotron radiation, including X-rays that are used to study the structure and properties of matter.
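
For context, the characteristic (“critical”) photon energy radiated by an electron of Lorentz factor γ bending with radius ρ is given by the standard accelerator-physics expression (general background, not a figure from the AfLS report):

$$ \varepsilon_c = \frac{3\hbar c\,\gamma^3}{2\rho}, $$

which is why GeV-scale electron beams steered around storage rings naturally emit in the X-ray range.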

Scientists in Africa have been agitating for a light source on the continent for decades, with the idea for an African synchrotron having been discussed since at least 2000. In 2018 the African Union’s executive council called on its member states to support a pan-African synchrotron and the following year Ghanaian president Nana Addo Dankwa Akufo-Addo began championing the project.

The new 388-page report, which has over 120 contributors from around the world, lays out a comprehensive case for a dedicated synchrotron in Africa, stating it is “simply not tenable” for the continent to not have one. Such a facility would bring many benefits to Africa, ranging from capacity building and driving innovation to financial returns. It cites a 2021 study of the UK’s £1.2bn Diamond Light Source, which essentially paid for itself after just 13 years.

“Without its own synchrotron facility, Africa will be left further behind at a corresponding accelerated rate and will be almost impossible to catch up to the rest of the world,” says Sekazi Mtingwa, a US-based theoretical high-energy physicist. Mtingwa is one of the founders of the South-Africa-based AfLS Foundation and editor-in-chief of the report.

The AfLS Foundation believes its report will persuade African governments to back the initiative. “The 2035 date is far away and gives us time to convince African governments,” Simon Connell, chair of the AfLS Foundation, told Physics World. He says it wants the funding to “predominantly come from African governments” rather than international grants. “The grant-funded situation is bedevilled by [the question of] where the next grant will come from,” he says.

Yet financial support will not be easy. Some have questioned whether Africa can afford a synchrotron given the lack of R&D funding in African countries. In 2007 African Union member states committed to spending 1% of their gross domestic product on R&D, but the continent still spends only 0.42%.

John Mugabe, a professor of science and innovation policy at the University of Pretoria in South Africa, notes that the light source is not even mentioned in the African Union’s science plans or in the science, technology and innovation initiatives of the G20, an international forum of 20 countries. “I do not think that there is adequate African political backing for the initiative,” he says.

However, a boost for the AfLS came on 12 December when the African Academy of Sciences (AAS), which is based in Nairobi, Kenya, and had been pushing for its own light source – the African Synchrotron Initiative – signed a memorandum of understanding with the AfLS to co-develop a synchrotron.

“[This] is a pivotal milestone in the continental effort to establish major infrastructures for frontier science in Africa,” says Nkem Khumbah, head of STI policy and partnerships at the AAS.

From physics to filmmaking: Mark Levinson on his new documentary, The Universe in a Grain of Sand

In this episode of Physics World Stories, host Andrew Glester interviews Mark Levinson, a former theoretical particle physicist turned acclaimed filmmaker, about his newest work, The Universe in a Grain of Sand. Far from a conventional documentary, Levinson’s latest project is a creative work of art in its own right – a visually rich meditation on how science and art both strive to make sense of the natural world.

Drawing from his background in theoretical physics and his filmmaking successes, such as Particle Fever (2013) and The Bit Player (2018), Levinson explores the shared language of creativity that unites these two domains. In The Universe in a Grain of Sand, he weaves together conversations with leading figures at the interface of art and science, with evocative imagery and artistic interpretations of nature’s mysteries.

Listen to the episode for a glimpse into the mind of a filmmaker who continues to expand the boundaries of science storytelling. For details on how to watch the film in your location, see The Universe in a Grain of Sand website.

Generative AI has an electronic waste problem, researchers warn

The rising popularity of generative artificial intelligence (GAI), and in particular large language models such as ChatGPT, could produce a significant surge in electronic waste, according to new analyses by researchers in Israel and China. Without mitigation measures, the researchers warn that this stream of e-waste could reach 2.5 million tons (2.2 billion kg) annually by 2030, and potentially even more.

“Geopolitical factors, such as restrictions on semiconductor imports, and the trend for rapid server turnover for operational cost saving, could further exacerbate e-waste generation,” says study team member Asaf Tzachor, who studies existential risks at Reichman University in Herzliya, Israel.

GAI, or Gen AI, is a form of artificial intelligence that creates new content – such as text, images, music or videos – using patterns it has learned from existing data. Some of the principles that make this pattern-based learning possible were developed by the physicist John Hopfield, who shared the 2024 Nobel Prize for Physics with computer scientist and AI pioneer Geoffrey Hinton. Perhaps the best-known example of Gen AI is ChatGPT (the “GPT” stands for “generative pre-trained transformer”), which is an example of a large language model (LLM).

While the potential benefits of LLMs are significant, they come at a price. Notably, they require so much energy to train and operate that some major players in the field, including Google and ChatGPT developer OpenAI, are exploring the possibility of building new nuclear reactors for this purpose.

Quantifying and evaluating Gen AI’s e-waste problem

Energy use is not the only environmental challenge associated with Gen AI, however. The amount of e-waste it produces – including printed circuit boards and batteries that can contain toxic materials such as lead and chromium – is also a potential issue. “While the benefits of AI are well-documented, the sustainability aspects, and particularly e-waste generation, have been largely overlooked,” Tzachor says.

Tzachor and his colleagues decided to address what they describe as a “significant knowledge gap” regarding how GAI contributes to e-waste. Led by sustainability scientist Peng Wang at the Institute of Urban Environment, Chinese Academy of Sciences, they developed a computational-power-driven material flow analysis (CP-MFA) framework to quantify and evaluate the e-waste it produces. This involved modelling the computational resources required for training and deploying LLMs, explains Tzachor, and translating these resources into material flows and e-waste projections.

“We considered various future scenarios of GAI development, ranging from the most aggressive to the most conservative growth,” he tells Physics World. “We also incorporated factors such as geopolitical restrictions and server lifecycle turnover.”

Using this CP-MFA framework, the researchers estimate that the total amount of Gen AI-related e-waste produced between 2023 and 2030 could reach the level of 5 million tons in a “worst-case” scenario where AI finds the most widespread applications.
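
To make the bookkeeping behind such projections concrete, here is a deliberately oversimplified sketch of a material-flow estimate. It is not the CP-MFA framework from the paper, and every parameter is a placeholder, so its output is not comparable to the team’s figures.

```python
# Deliberately oversimplified material-flow sketch: convert an assumed compute
# demand into servers, and retired servers into an e-waste mass. NOT the
# CP-MFA framework; every number below is a hypothetical placeholder.
compute_demand_exaflops = 50.0   # assumed total Gen AI compute demand (hypothetical)
per_server_exaflops = 0.01       # assumed compute delivered per server (hypothetical)
server_mass_kg = 30.0            # assumed mass of one server (hypothetical)
server_lifetime_years = 3.0      # assumed turnover time before a server is retired

servers_in_service = compute_demand_exaflops / per_server_exaflops
servers_retired_per_year = servers_in_service / server_lifetime_years
ewaste_tonnes_per_year = servers_retired_per_year * server_mass_kg / 1000.0

print(f"servers in service: {servers_in_service:,.0f}")
print(f"e-waste stream: {ewaste_tonnes_per_year:,.0f} tonnes per year (toy numbers)")
```

According to the researchers’ description, the full framework layers scenario-dependent compute demand, hardware composition and server-turnover assumptions on top of this kind of conversion.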

A range of mitigation measures

That worst-case scenario is far from inevitable, however. Writing in Nature Computational Science, the researchers also modelled the effectiveness of different e-waste management strategies. Among the strategies they studied were increasing the lifespan of existing computing infrastructures through regular maintenance and upgrades; reusing or remanufacturing key components; and improving recycling processes to recover valuable materials in a so-called “circular economy”.

Taken together, these strategies could reduce e-waste generation by up to 86%, according to the team’s calculations. Investing in more energy-efficient technologies and optimizing AI algorithms could also significantly reduce the computational demands of LLMs, Tzachor adds, and would reduce the need to update hardware so frequently.

Another mitigation strategy would be to design AI infrastructure in a way that uses modular components, which Tzachor says are easier to upgrade and recycle. “Encouraging policies that promote sustainable manufacturing practices, responsible e-waste disposal and extended producer responsibility programmes can also play a key role in reducing e-waste,” he explains.

As well as helping policymakers create regulations that support sustainable AI development and effective e-waste management, the study should also encourage AI developers and hardware manufacturers to adopt circular economy principles, says Tzachor. “On the academic side, it could serve as a foundation for future research aimed at exploring the environmental impacts of AI applications other than LLMs and developing more comprehensive sustainability frameworks in general.”

Copyright © 2025 by IOP Publishing Ltd and individual contributors