COVID-19 leads to major overhaul for radiotherapy

© AuntMinnieEurope.com

Greater use of hypofractionated dosing regimens has helped radiation oncology sites deliver radiation therapy to cancer patients in England during the COVID-19 pandemic, offsetting a massive decline in treatment sessions overall, according to a study published in Lancet Oncology.

The study from the University of Leeds with Public Health England and the Royal College of Radiologists evaluated how radiation therapy practice changed on a number of fronts during the first wave of the outbreak.

Compared with the same periods in 2019, in 2020 the number of radiotherapy courses dropped by 19.9% in April, 6.2% in May and 11.6% in June, reported Katie Spencer, a fellow in clinical oncology at Leeds Teaching Hospitals National Health Service (NHS) Trust, and colleagues. The data reflect pandemic guidelines and changing practice in some key areas: greater use of short, high-dose-per-fraction (hypofractionated) courses where appropriate, delays for nonurgent care, and increased use of radiation therapy as an alternative to surgery.

“Although radiotherapy activity decreased during the first wave of the pandemic, our data suggest that the overall impact of this decline is likely to be modest,” said the group. “In addition, radiotherapy appears to have mitigated against some of the indirect harms of the pandemic by maintaining curative treatment options despite the challenges facing surgical services.”

Search for value in cancer care

Spencer is a specialist on health economics and value in cancer care, particularly the use and cost-effectiveness of radiotherapy, and an experienced user of routine NHS healthcare data to understand variation and improve value and outcomes. Her team’s study was designed to evaluate the impact of the COVID-19 pandemic on radiotherapy services across the NHS in England and guidelines issued in response to the outbreak. About one-third of cancer patients typically undergo radiation therapy.

“Alongside surgery and systemic anticancer therapy, radiotherapy plays a major part both as a curative treatment and in the palliation of [localized] symptoms from advanced disease,” the authors wrote. “At the outset of the pandemic, all three treatment modalities were affected by constraints on COVID-19 testing and staff shortages.”

Researchers evaluated the use of radiation therapy delivered by all 52 NHS radiation therapy providers in England pre-pandemic and through the end of June 2020. During a lockdown period – from 23 March to 28 June – the total number of radiation treatment courses dropped by 3263 across the NHS in England, while the number of treatment appointments was down by 119,050, the authors reported.

The declines were in line with national and international guidelines for prioritizing/triaging patients and managing care during the pandemic.

“The disproportionately greater fall in treatment attendances largely reflects a rapid increase in the use of ultra-hypofractionated treatment regimens across several [tumour] sites,” the authors noted.

For example, in April 2020 the researchers reported dramatically greater use of a regimen delivering 26 Gy in five fractions for the adjuvant treatment of breast cancer, in place of the 40 Gy in 15 fractions that was more common pre-pandemic.

“A marked increase in the use of ultra-hypofractionation in the neoadjuvant treatment of rectal cancer was also observed, with a reduction in the use of less than 2 Gy per fraction regimens,” they said.
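To see why such different schedules can be considered radiobiologically comparable, the standard biologically effective dose (BED) formula from the linear-quadratic model, BED = n·d·(1 + d/(α/β)), can be applied. The α/β value below is an illustrative textbook assumption, not a figure from the study:

```python
def bed(n_fractions, dose_per_fraction_gy, alpha_beta_gy):
    """Biologically effective dose (linear-quadratic model)."""
    d = dose_per_fraction_gy
    return n_fractions * d * (1 + d / alpha_beta_gy)

# Illustrative alpha/beta of 3.5 Gy, typical of late-responding breast tissue
ultra_hypo = bed(5, 26 / 5, 3.5)      # 26 Gy in 5 fractions -> ~64.6 Gy
conventional = bed(15, 40 / 15, 3.5)  # 40 Gy in 15 fractions -> ~70.5 Gy
```

On this rough measure the two schedules land within about 10% of each other, which is why the five-fraction course can substitute for the fifteen-fraction one while cutting hospital visits by two-thirds.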

Other key findings

The researchers also reported a steeper decline in treatment for people over the age of 70 than for other age groups. This could reflect these patients' higher risk from COVID-19 owing to age and comorbidities, as well as the scope to defer care for certain conditions such as prostate cancer and nonmelanoma skin cancer.

In tumour types that typically require more immediate treatment, such as cancers of the rectum, bladder and oesophagus, the researchers observed an increase in the number of treatment courses. Spencer and colleagues suggested that this could signal the use of radiation therapy as an alternative to surgery. For example, the number of curative courses for bladder cancer rose by 143.3% in May 2020 relative to May 2019.

Areas of concern, however, include a decline in the number of palliative treatment courses and a persistent decline in radiation therapy overall in June 2020 relative to June 2019. NHS data show that the number of referrals for possible symptomatic cancer was 21% lower in June 2020 compared with the same period in 2019, the authors noted.

“New diagnoses were suppressed by 26%, which is probably a key contributor to the ongoing suppression in radiotherapy activity up to June 2020,” Spencer and colleagues wrote.

Whether this trend translates into worse patient outcomes will only become clear as the data are followed over time.

“As COVID-19 cases again rise, these data are crucial for modelling indirect harms of the pandemic and establish a new baseline for radiotherapy treatments from which to plan for the ongoing delivery of care throughout subsequent pandemic waves and into the recovery beyond,” the authors wrote. “They also reinforce the need to address any persisting delays in cancer diagnostic pathways.”

  • This article was originally published on AuntMinnieEurope.com ©2021 by AuntMinnieEurope.com. Any copying, republication or redistribution of AuntMinnieEurope.com content is expressly prohibited without the prior written consent of AuntMinnieEurope.com.

How gravitational waves could reveal flaws in a black-hole theorem, satellites lead an agricultural revolution

The famous no-hair theorem says that black holes can only be defined in terms of three properties: mass, charge and spin. It has held up pretty well for about 50 years, but now some physicists are hoping to find evidence of violations of the theorem in gravitational waves from merging black holes. In this episode of the Physics World Weekly podcast, Jamie Bamber and Katy Clough of the UK’s University of Oxford discuss the possibility of finding deviations from the theorem, and what they could tell us about physics beyond the Standard Model.

Earth imaging satellites provide a wide range of information about agriculture that is used by everyone from farmers to those who set food policies. In this episode Catherine Nakalembe of the University of Maryland in the US explains how satellite monitoring can help mitigate the effects of drought and other climate and land use variations.

Multiplying light signals could give optical computers a boost

Researchers in Russia and the UK have proposed a new and simple way to produce binary output signals in the logic gates of optical computers. Developed by Nikita Stroev at the Skolkovo Institute of Science and Technology in Russia and Natalia Berloff at the UK’s University of Cambridge, the technique involves multiplying the input signals to the gates instead of adding them linearly. With further improvements, their approach could drastically reduce the number of light signals required for optical computers to operate, improving their potential for complex problem solving.

Optical computers are an emerging solution to the limitations of conventional electronic devices. Not only could they enable information to travel much faster through their component circuits; they should also have a far lower energy consumption and allow information to be processed in new ways that would be much more efficient at solving certain problems.

Instead of electrical signals, optical computers use the continuous phase of photons to encode and distribute binary information. A big challenge in building optical computers is that photons do not normally interact with each other, making it difficult to create logic gates. One way of making photons interact is to use materials with nonlinear refractive indices. When two or more input optical signals are combined in such materials, they interact with each other via the material’s electrons. By carefully engineering these interactions, researchers could build devices in which the two signals add together to deliver the desired output – at least in principle.

Unpredictable phase

In practice, however, nonlinear materials can have unpredictable effects on the phases of output signals. To deal with this problem, an external resonant excitation is introduced to set the phases of output photons to clear binary states – but this is not an ideal solution.

Stroev and Berloff explored a more robust approach in their study. Instead of adding the input signals linearly, they combined them by multiplying their wavefunctions. Under appropriate conditions, the researchers calculate, the phases of the coupled signals change to reach a minimum-energy configuration. This produces an output signal with a phase clearly associated with either a 0 or a 1, without any need for additional signals.
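The phase arithmetic behind this can be sketched with complex numbers: multiplying two unit-amplitude signals adds their phases, so binary phases of 0 and π combine like an XOR gate. This is a bare-bones sketch of the principle, not the authors' polariton model:

```python
import cmath
import math

def encode(bit):
    """Encode a bit as a unit-amplitude signal with phase 0 or pi."""
    return cmath.exp(1j * math.pi * bit)

def multiply_gate(a, b):
    """Multiplying the signals adds their phases (mod 2*pi)."""
    phase = cmath.phase(encode(a) * encode(b)) % (2 * math.pi)
    # A phase near pi decodes to 1; a phase near 0 (or 2*pi) decodes to 0
    return int(abs(phase - math.pi) < 1e-6)

# Only unequal inputs leave an output phase of pi, reproducing XOR
truth_table = [multiply_gate(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```

In the real device the "decoding" is done physically, by the system relaxing to a minimum-energy phase configuration rather than by thresholding in software.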

The duo’s model system used polaritons: quasiparticles that form through a strong coupling between light and matter, giving them hybrid physical properties. In their design, they multiplied the wavefunctions of coherent, superfast polariton pulses, guiding them towards the correct output phase by temporarily altering their coupling strengths.

The inherent noise in the signals means that further improvements will be needed before the system can be integrated into the large-scale production of optical computers. However, the early success of Stroev and Berloff’s approach reveals a promising new route towards superfast, real-world problem solving, in areas too complex for conventional computers to handle.

The research is described in Physical Review Letters.

Iridium in undersea crater confirms asteroid wiped out the dinosaurs

Strong evidence that the dinosaurs were killed off 66 million years ago by an asteroid hitting Earth has been found in Chicxulub crater under the Gulf of Mexico. An international team has measured an abundance of the rare element iridium in the crater and similarly high concentrations of the element are known to occur in sediments laid down at the time of the Cretaceous–Paleogene boundary (K–Pg) extinction event, which saw many species on Earth vanish.

Measuring 200 km across, the Chicxulub crater is believed to have been created by an 11 km-wide asteroid crashing into Earth. The impact would have sent vast amounts of vaporized rock into the atmosphere, blocking out the Sun and creating a winter that could have lasted several decades. The result, scientists believe, was the mass extinction of 75% of species on Earth including the non-avian dinosaurs.
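The scale of such an impact can be gauged with a back-of-the-envelope kinetic-energy estimate; the density and impact speed below are typical textbook assumptions for a stony asteroid, not measured values:

```python
import math

# Illustrative assumptions: stony-asteroid density and a typical impact speed
DENSITY = 3000.0        # kg/m^3
SPEED = 20_000.0        # m/s
TNT_MEGATON = 4.184e15  # J per megaton of TNT

radius = 11_000 / 2                             # 11 km diameter
mass = DENSITY * (4 / 3) * math.pi * radius**3  # ~2e15 kg
energy_j = 0.5 * mass * SPEED**2                # ~4e23 J
energy_mt = energy_j / TNT_MEGATON              # ~1e8 megatons of TNT
```

Even with generous uncertainty in the inputs, the release is on the order of a hundred million megatons of TNT, billions of times the largest nuclear tests.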

The crater was discovered in the 1990s, but the idea that the K–Pg extinction was caused by an asteroid impact was proposed a decade earlier by a team that included the physics Nobel laureate Luis Alvarez. They found an unusually high amount of iridium in sedimentary rocks laid down at the K–Pg boundary. Iridium is rare in the Earth’s crust because it is a siderophile, which means that it dissolves in iron and therefore tends to sink into the Earth’s core. Iridium is much more abundant in asteroids, leading Alvarez and colleagues to conclude that the vaporization of an asteroid released large amounts of iridium into the atmosphere, which then fell to the ground as dust as the dinosaurs disappeared.

Huge tsunamis

As well as the subsequent discovery of the Chicxulub crater, the impact extinction theory is backed up by evidence that huge tsunamis occurred in the Gulf of Mexico and Caribbean regions at the time. However, the evidence linking the Chicxulub impact to the K–Pg extinction is not conclusive. The iridium could have been put into the atmosphere by another asteroid impact or impacts; and some scientists have suggested that increased volcanic activity, rather than an asteroid, could have caused the extinction.

In 2016, Sean Gulick at the University of Texas at Austin and Joanna Morgan of Imperial College London led an international team of scientists on the International Ocean Discovery Program on an expedition to the Chicxulub crater. They took about 900 m of rock core samples and found a similar spike in iridium content in sediment laid down just after the crater was formed. Indeed, the sedimentary rock containing iridium is so thick that they were able to date the dust to about two decades after the impact.

Similar abundances

Studies of the cores also revealed high levels of several other elements that are associated with asteroids and that have been found in similar abundances in K–Pg sediments at 52 sites around the world.

“We are now at the level of coincidence that geologically doesn’t happen without causation,” says Gulick. “It puts to bed any doubts that the iridium anomaly [in the geologic layer] is not related to the Chicxulub crater.”

A separate study of the cores, carried out in 2019, revealed that the crater rock is depleted in sulphur compared with the surrounding limestone. This suggests that the impact blew large amounts of sulphur into the air, where it would have contributed to the cooling and then fallen as acid rain – making the situation on the ground even worse.

The research is described in Science Advances.

Quantum computer captures physics of high-energy particles

Quantum theory is often portrayed as a disruptive force, complicating everything that classical physics seemed to have figured out. Now, however, physicists at Lawrence Berkeley National Laboratory (LBNL) in the US have demonstrated that the two can work side-by-side, in a proof-of-principle study that shows how a quantum computer can complement a classical method of modelling high-energy particle collisions.

Machines such as CERN’s Large Hadron Collider (LHC) smash protons together at energies of more than 1 TeV, producing showers of thousands of particles. Physicists use computer models to predict what happens to those particles by the time they reach a detector. In one such modelling technique, known as a parton shower, the assumption is that the particles that make it to the detector are the last step in a long cascade of particles and radiation that converted into one another after the initial collision.

For this parton shower to include quantum features of particle interactions, however, the model needs to simultaneously consider all possible intermediate particles that could form between the initial and final particles – something that cannot be done efficiently by a classical computer algorithm, says Christian Bauer, a theoretical physicist at LBNL and co-author of the paper. “What a classical [parton] shower does is that it sort of goes through and produces a particular event, one at a time, with a particular intermediate particle,” Bauer explains. “The quantum version of the shower sort of does all possibilities in one shot.”

In their study, which appears in Physical Review Letters, Bauer and colleagues created a quantum algorithm for the parton shower. To do this, they developed a simplified version of the Standard Model of particle physics that shares some of the full model’s features but is simple enough for present-day quantum computers to execute. They then used the IBM Q Johannesburg chip to calculate details of particle processes that can occur within this simplified model. This IBM chip has 20 superconducting quantum bits (qubits), and the LBNL scientists used cloud access to program it to run their quantum parton shower algorithm on 5 qubits, using 48 quantum gate operations. When they compared the real chip’s output to a prediction made by IBM’s quantum computer simulator, they found excellent agreement – indicating that the computer fully captured quantum effects in their particle model.
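The key idea – tracking every branching history at once rather than sampling them one at a time – can be illustrated with a toy state-vector calculation. This is a classical simulation of a trivially separable case, not the team's actual algorithm, which handles correlations a product state cannot capture:

```python
import numpy as np

def shower_state(n_steps, p_emit):
    """Amplitudes over all 2**n_steps emission histories.

    Each step contributes one qubit: |0> = no emission, |1> = emission.
    """
    step = np.array([np.sqrt(1 - p_emit), np.sqrt(p_emit)])
    state = np.array([1.0])
    for _ in range(n_steps):
        state = np.kron(state, step)  # all histories held simultaneously
    return state

amps = shower_state(3, 0.3)
probs = amps**2  # 8 histories; probabilities sum to 1
# probs[0] is the no-emission history, with probability (1 - 0.3)**3
```

Note how the state vector doubles with every step: a classical simulation pays exponentially for holding all histories at once, whereas a quantum register of n qubits holds the same superposition natively.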

A quantum problem for a quantum machine

The idea that quantum effects are hard or impossible to model on non-quantum devices is an old one, dating back to lectures given by the physicist Richard Feynman in the early 1980s. Features of parton showers that are formulated in the language of quantum mechanics from the get-go certainly fall into this category, says Jesse Thaler, a physicist at the Massachusetts Institute of Technology who was not involved in the study. “While some aspects of particle scattering can be described in a classical language, nature is fundamentally quantum mechanical,” Thaler says. The current study, he suggests, could be a stepping-stone towards a future in which theorists use the outputs of both classical and quantum computers to piece together more complex models of what happens inside particle colliders.

The speed at which that future arrives, however, will depend on overcoming certain hardware challenges within quantum computing. “While I would be surprised if these challenges could be overcome before the end of the LHC era, it is plausible that these kinds of hybrid classical and quantum algorithms could be useful for interpreting data from a future collider,” Thaler predicts.

Two-way collaboration

Even though quantum computers are not yet advanced enough to outperform classical machines completely, Bauer thinks there are benefits in making them work together. While classical computational techniques produce excellent results in some areas of particle physics, he and his colleagues aim to concentrate on inherently quantum effects that classical machines could never properly handle. “We should only ask quantum computers to do the things that are hard to do on classical computers,” he says.

Benjamin Nachman, a physicist at LBNL and lead author of the study, adds that collaboration between high-energy physicists and quantum information scientists is a two-way street. “There are techniques for how to do [high energy] physics that we could apply to improve error mitigation on quantum computers,” he says. He and his collaborators have begun exploring some of these techniques, aided by a US Department of Energy programme that provides funding for this type of interdisciplinary partnership.

In the meantime, the LBNL team is focusing on making their current “toy” version of Standard Model physics, and the accompanying quantum algorithm, more sophisticated. “If a better [quantum] computer comes tomorrow, we can run the model that we developed here with more precision,” Bauer says. “But in order to really go to the Standard Model, there’s more theoretical work needed as well – which is not unsurmountable at all, it just needs to be done.”

3D printing perovskites onto graphene creates ultrasensitive X-ray detector

X-ray imaging plays a vital role within diagnostic medicine, with modalities such as radiography, fluoroscopy and computed tomography (CT) enabling visualization of the internal structure of the human body. But exposing the body to ionizing radiation comes with an associated risk, and the medical imaging industry is continually seeking new ways to reduce the required imaging dose, via sensitive, high-resolution detectors.

One emerging approach is to use lead halide perovskites to create high-sensitivity X-ray detectors. These materials contain heavy elements such as lead and iodine that have a large X-ray scattering cross section. And as they directly convert X-ray photons into an electrical signal, perovskites offer higher sensitivity and lower cost than competing scintillator-based detector technology. Integrating these perovskites into standard microelectronics, however, remains a challenge.

Now, researchers in Switzerland have demonstrated that 3D aerosol jet printing, a low-cost technique that deposits material with micron precision, can be used to build X-ray detectors from the perovskite methylammonium lead iodide (MAPbI3) on graphene. As they report in ACS Nano, the resulting devices exhibited a record sensitivity of 2.2 × 10^8 μC/Gy_air/cm^2 when detecting 8 keV X-ray photons at low dose rates – four orders of magnitude higher than today’s best-in-class detectors.

“By using photovoltaic perovskites with graphene, the response to X-rays has increased tremendously,” says group leader László Forró, from EPFL’s Laboratory of Physics of Complex Matter, in a press statement. “If we would use these modules in X-ray imaging, the required X-ray dose for forming an image could be decreased by more than a thousand times, decreasing the health hazard of this high-energy ionizing radiation to humans.”

Device optimization

Halide perovskites such as MAPbI3 form elongated crystallites as they come out of solution in a polar aprotic solvent. During aerosol jet printing, as this solution travels through the nozzle, the crystallites grow and land on the substrate as crystalline nanowires containing little solvent. Because most of the solvent has evaporated by the time a droplet reaches the substrate, spatter is reduced, creating the well-defined 3D structures needed to construct an X-ray photodetector with high spatial resolution.

Aerosol jet printing

Using the aerosol jet printing device at CSEM in Neuchatel, the team first characterized the properties of different MAPbI3 geometries printed on microstructured silicon surfaces. A 3D-printed pillar showed an increased charge carrier collection compared with a 2D spot. Both 2D and 3D geometries enabled detection of low light intensities, down to 31.4 μW/cm^2 under 650 nm illumination.

To further improve the photodetection characteristics, the researchers 3D-printed the perovskite on graphene, which amplifies the generated photocurrent. After testing several architectures and operation modes (resistor, diode and heterojunction) they found that the optimal pixel design was a heterojunction, with a MAPbI3 pillar printed on a graphene layer spanning the gold electrodes.

X-ray detection

To create the X-ray photodetector, the researchers fabricated a sensing chip containing 3D-printed MAPbI3 walls of about 600 μm in height. They investigated the X-ray detection properties of the fully integrated detector by exposing it to an 8 keV X-ray source for 10 s at various dose rates. The X-ray detection limit was reached at a dose rate of 0.12 μGy/s, below which the signal could not be distinguished from the background noise. At the highest dose rate examined, 358 μGy/s, the device exhibited high photocurrents of up to 4000 μA/cm^2.

As the sensitivity of the device is dose-rate dependent, the team determined its sensitivity at 100 mV bias for three dose-rate regions. The sensitivity of a detector pixel (with an estimated surface of 0.075 mm^2) was largest at dose rates below 1 μGy/s, reaching 2.2 × 10^8 μC/Gy_air/cm^2. Above 1 μGy/s, the photocurrent begins to saturate, yielding sensitivities of 2.5 × 10^7 μC/Gy_air/cm^2 at up to 40 μGy/s, and 2.9 × 10^6 μC/Gy_air/cm^2 from 40 to 100 μGy/s.
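These sensitivity figures are essentially photocurrent density divided by dose rate, after unit conversion. A quick check using the highest-dose-rate numbers quoted above (ignoring any dark-current correction the full analysis would include):

```python
def sensitivity(photocurrent_uA_per_cm2, dose_rate_uGy_per_s):
    """Sensitivity in uC/Gy/cm^2: charge collected per unit dose per unit area.

    1 uA/cm^2 = 1 uC/s/cm^2, and 1 uGy/s = 1e-6 Gy/s.
    """
    return photocurrent_uA_per_cm2 / (dose_rate_uGy_per_s * 1e-6)

# 4000 uA/cm^2 at 358 uGy/s gives ~1.1e7 uC/Gy/cm^2 -- the same order
# as the saturated-regime sensitivities reported for 40-100 uGy/s
s = sensitivity(4000, 358)
```

The result sits in the saturated regime, consistent with the much higher sensitivities being reached only at the lowest dose rates, before the photocurrent saturates.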

One obstacle preventing commercialization of perovskite-based optoelectronics is their stability. To protect the active MAPbI3 layer in their device, after wire-bonding, the researchers encapsulated the detector in a PDMS polymer. They note that the assembled detector unit was stable for over nine months, with no degradation in performance.

“We are confident that this X-ray detector unit architecture is extremely promising for highly sensitive X-ray imaging,” the researchers conclude. “This approach could allow significant lowering of the radiation doses required for X-ray imaging, resulting in safer and more affordable CT imaging systems.”

For medical imaging, the detector units will need to be assembled into a large surface area X-ray detector. The team also note that as CT scans use higher X-ray energies than the 8 keV source employed in this work, measurements should be repeated for 100 keV photons to confirm the device’s suitability for medical applications.

Ten years after Fukushima: could new fuels make nuclear power safer?

At 2.46 p.m. local time on Friday 11 March 2011 a massive earthquake hit north-eastern Japan. Striking 130 km offshore, the complex double quake had a moment magnitude of 9.0 and lasted around three minutes. In that time, a 650 km-long section of the sea floor shifted 10–20 m horizontally, while Japan moved a few metres east and the local coastline subsided half a metre.

Initially the Fukushima Daiichi nuclear power plant – located around 225 km north-east of Tokyo – survived the earthquake. But around 40 minutes later it was engulfed by a 15 m-high tsunami. The three operating reactors had turned off automatically following the earthquake, but the immense wave disabled the emergency diesel generators that were supplying back-up power to the cooling systems. With nothing to cool the reactors, their nuclear fuel cores started to overheat and melt. Powerful hydrogen explosions caused extensive damage to the reactor buildings (shown in photo above) and released radiation and radioactive material.

The worldwide nuclear community engaged in a multi-pronged effort to improve the safety of nuclear reactors

Brent Heuser, University of Illinois

Following the accident at Fukushima, members of the nuclear industry started working on advanced fuel concepts to increase accident tolerance. Their goal is to create fuels that can tolerate any potential loss of cooling in a reactor, and other adverse events, for longer than current fuels – thereby increasing safety margins. In the US, this initiative was brought together by the Department of Energy (DOE) under its Accident Tolerant Fuel Development Program. Launched in 2012, it aims to have test fuel rods in a commercial reactor by 2022.

“The worldwide nuclear community engaged in a multi-pronged effort to improve the safety of nuclear reactors, which are already very, very safe,” explains Brent Heuser, a nuclear engineer at the University of Illinois, US. “And one of those approaches was to attack a specific problem related to the behaviour of the cladding that encapsulates the uranium-dioxide fuel.”

In most commercial reactors the fuel rods consist of uranium-dioxide fuel pellets stacked inside a long cladding tube made of zirconium alloy. Used as a barrier to protect the pellets, zirconium is very resistant to corrosion under normal operation conditions, in which the fuel rods sit in water at temperatures of around 300 °C. (Being under such high pressure, the water doesn’t boil.) “It’s kind of ironic. Zirconium is one of the most reactive metals, but it performs very, very well in reactors under normal operating conditions,” says Heuser. “Its corrosion behaviour is very good. It is radiation tolerant. It is really the first engineering barrier. And the probability of cladding failure is incredibly remote.”

At very high temperatures zirconium reacts with water to produce hydrogen, which generated the explosions at Fukushima

Jonathan Cobb, World Nuclear Association

However, if there is a temperature excursion and the water coolant starts to boil, the cladding is exposed to high-temperature steam, with which zirconium reacts readily. It converts rapidly to zirconium oxide and becomes very brittle, which can make it crack and no longer serve as a protective barrier.

This cladding was responsible for the explosions seen during the Fukushima incident, explains Jonathan Cobb, senior communication manager at the World Nuclear Association. “At very high temperatures zirconium reacts with water to produce hydrogen,” he says. “And that hydrogen gas is what generated the explosions that were seen at Fukushima Daiichi.”
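The stoichiometry of that reaction, Zr + 2H2O → ZrO2 + 2H2, shows why so much gas can accumulate: every mole of oxidized zirconium releases two moles of hydrogen. A rough sketch (ideal-gas molar volume at standard conditions; the cladding mass is illustrative, not Fukushima-specific):

```python
M_ZR = 91.22    # g/mol, molar mass of zirconium
V_MOLAR = 22.4  # L/mol, ideal-gas molar volume at 0 degC, 1 atm

def hydrogen_volume_l(zirconium_kg):
    """Litres of H2 (at standard conditions) from fully oxidizing Zr.

    Zr + 2 H2O -> ZrO2 + 2 H2: two moles of H2 per mole of Zr.
    """
    mol_zr = zirconium_kg * 1000 / M_ZR
    return 2 * mol_zr * V_MOLAR

# Each kilogram of fully oxidized cladding yields roughly 490 L of hydrogen
v = hydrogen_volume_l(1.0)
```

With tens of tonnes of cladding in a reactor core, even partial oxidation generates an explosive quantity of hydrogen, which is why slowing this reaction is central to accident-tolerant fuel design.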

Improved claddings

The approach the nuclear industry has settled on for the first generation of accident-tolerant fuels is to coat the cladding, to stop the zirconium from being directly exposed to high-temperature steam during accidents. This is what Heuser’s lab does – it tests chromium coatings on traditional zirconium-based alloy cladding.

“The advantage of improving the existing cladding is that you’re not starting from scratch,” says Heuser. “You’re taking something that’s been proven to work very well under normal operating conditions and you’re putting a coating on it, which represents a very minor perturbation to the normal operating parameters of a reactor, but gives you this built-in defence should a transient occur and the cladding becomes exposed to high-temperature steam.” Heuser’s team has conducted tests both under normal operating conditions and under exposure to high-temperature steam, and the chromium coatings perform well.

Indeed, chromium-coated claddings are now starting to appear in commercial nuclear reactors. As part of the 2012 programme, the DOE contracted three companies to develop accident-tolerant fuels: Framatome, Westinghouse and GE Research. Both Framatome (a French firm) and Westinghouse (from the US) have been developing chromium-coated zirconium-alloy fuel rods. Westinghouse installed test rods featuring their advanced cladding in the Byron Nuclear Generating Station in Illinois in 2019, and the Doel Nuclear Power Station in Belgium in 2020. Meanwhile Framatome’s chromium-coated fuel rods were loaded into a reactor at the Vogtle Electric Generating Plant in Georgia in the US in early 2019. All three reactors are currently using these upgraded rods for a few fuel cycles, after which they will be examined.

Delayed reaction

The aim of accident-tolerant fuels is to delay a reaction during an accident, not prevent it entirely. “The goal is not to say, OK this has happened, we can just let nature take its course and in three days we’re going to be fine,” Heuser says. “That was never the goal. It’s completely unrealistic. The question is: how much time can we buy?” He points out that research suggests that a chromium coating can prevent the high-temperature steam reaction with the underlying zirconium cladding for a couple of hours, which could be enough time for reactor operators to fix the problem.

If you could buy yourself a few hours to get your cooling back under control, then you might save the reactor

Dave Goddard, National Nuclear Laboratory

As Dave Goddard, the technical lead for the UK’s advanced nuclear fuel programme at the National Nuclear Laboratory (NNL), puts it: “If you could buy yourself a few hours to get your cooling back under control, then you might save the reactor.” But he is unsure if this delay period for chromium-coated rods would be long enough. “A lot of the modelling that’s been done on the chromium-coating concept is suggesting that it may actually be minutes, not hours.”

A fuel rod featuring chromium-coated cladding

There are, however, other options, such as changing the cladding material completely. Silicon-carbide composites, a material more akin to a ceramic, are one potential solution currently being looked into. These materials have exceptionally good high-temperature performance. “You can probably reach up to about 800 °C with silicon-carbide cladding before you start seeing any sort of detrimental behaviour,” Goddard explains. But making these composites into 4 m-long tubes with a thin wall is challenging. Currently this is done using a chemical vapour deposition technique that can take weeks, creating problems for the economic viability of the technology. But researchers in the UK, supported by NNL, are working on improving this manufacturing technique.

Meanwhile, the US company GE Research has been developing two products: a new iron-based alloy cladding material known as IronClad, and ARMOR, a coating for zirconium cladding. The ARMOR coating is not chromium, Russ Fawcett – who manages the fuel programme at GE – explains, adding that it is a proprietary product and the company has not disclosed its composition.

As with other accident-tolerant fuel concepts, the aim is to provide time to stabilize a power plant during a severe accident. Fawcett says that in a blackout situation similar to Fukushima’s, research suggests that GE’s cladding products could buy between three and six hours. So far, they have been tested with prolonged exposure to high-temperature steam at GE Research and in reactors at the Idaho National Laboratory in the US, with both products exhibiting improvements in resistance to high-temperature steam compared to the standard zirconium-alloy cladding. In 2018 they were also installed in a commercial reactor at the Hatch Nuclear Power Plant in Georgia in the US, and in 2020 GE performed poolside examinations, which it says showed the prototypes work very well.

The company is now working with the US Nuclear Regulatory Commission to license its ARMOR product for use in nuclear power plants, while development of IronClad will continue. GE hopes to be selling the former by 2025 and the latter four or five years later.

New fuels

Accident tolerance could also potentially be improved by changing the uranium-dioxide pellets that sit inside the cladding tubes.

At Elizabeth Sooby Wood’s lab at the University of Texas at San Antonio, US, researchers are looking at advanced technology fuels. “That’s where we’re getting a better fuel economy or accident-tolerant fuels. And then some products kind of overlap both,” Sooby Wood explains.

When it comes to accident-tolerant fuel, one of the big factors Sooby Wood and her colleagues are looking at is thermal conductivity – that is, how efficiently the fuel can dissipate its heat so that it does not reach dangerously high temperatures during adverse incidents. In a reactor, all fuel has to be above a critical temperature, but fuels that are less thermally conductive do not heat up evenly, leading to spots where the fuel is much hotter than that critical temperature.

There are accident scenarios where uranium silicide does perform much better

Elizabeth Sooby Wood, University of Texas at San Antonio

In contrast, fuels with a higher thermal conductivity can run at a much more even temperature in the reactors, reducing the overall temperature of the fuel and meaning it has further to go from operating temperature to melting point in an accident. The cooler fuel gives you more of a safety margin. “It’s about improving different material families so that they tolerate these abnormal scenarios better,” Sooby Wood says.

Elizabeth Sooby Wood and doctoral student Geronimo Robles

Currently her team is looking at uranium silicide as an alternative to the traditional uranium dioxide. On its own, uranium silicide is actually not very accident tolerant – if it is exposed to the reactor coolant it has a worse reaction than uranium dioxide. “But it does have a much higher thermal conductivity,” adds Sooby Wood. “So there are accident scenarios where it does perform much better. We are seeing if we can engineer the uranium silicide to withstand exposure to the coolant.”

Just as materials are added to iron to create stainless steel – a material far more resistant to corrosion than plain iron – so the researchers are adding metals to make uranium-silicide alloys. So far, they have been adding both aluminium and chromium. The resulting alloys are tested in a furnace that can reach 1600 °C, and steam is flowed through to simulate coolant exposure of the fuel elements and the uranium compounds. Ideally there will be no reaction: the uranium-silicide alloys will not degrade or ignite. To date, the researchers have run these steam-oxidation tests at temperatures from 200 °C to 1000 °C and the results have been promising, with delayed onset of reaction compared with plain uranium silicide. Now Sooby Wood’s team and others are looking at different ways to incorporate the metals to get further delays.

At this stage, the work is just fundamental research and development, questioning whether alloying works and if it can improve the behaviour of the fuels. But if successful, Sooby Wood says these new fuels could also improve fuel economy as they have a higher uranium density: more uranium per unit volume. “So, for the exact same footprint you have more fissile material. More stuff making power.” This means higher power output from reactors, many of which are currently running at peak capacity. It also leads to longer fuel cycle times.

The higher uranium density could also enable the use of more advanced cladding materials. Sooby Wood explains that some of the cladding materials being looked at have a higher neutron-capture rate than zirconium. This means that they soak up more of the neutrons that are produced and used in fission chain reactions. However, higher-density fuels produce more neutrons in the first place, making this loss less of an issue.

Added benefits

The advanced cladding materials could also improve fuel performance and provide economic benefits. For example, the chromium coatings make the cladding more durable, reducing fuel rod failures and unplanned outages, Goddard says. And GE claims that its new claddings also cut corrosion. Power plants could therefore keep the fuel rods in the reactors for longer, extending fuel cycle lengths and making them more profitable. GE hopes that the improved fuel performance, combined with a slight increase in fuel enrichment, will reduce the number of fuel assemblies a power plant needs by around 10–20%.

GE Research testing its ARMOR and IronClad accident-tolerant fuel rods

Ultimately this could also reduce fuel waste. “If your fuel can be in the reactor for longer, if you have reloads every 18 months instead of every 12 months,” explains Cobb, “then that is going to reduce the already low volumes of used fuel that comes out of the reactors.”

Some experts argue, however, that the pursuit of improved performance and economic benefits is shifting accident-tolerant fuels away from their original aims. Edwin Lyman, director of Nuclear Power Safety at the Union of Concerned Scientists in the US, says that while the idea of accident-tolerant fuels sounds good, he has been disappointed with the results so far. He adds that in the US, industry is moving away from any pretence of improved safety. “Instead, they’re focusing on trying to reduce their operating costs by increasing the burnup and the enrichment of fuels.”

Lyman says extending the coping times by a couple of hours does not make a significant difference to safety – studies suggest that these time frames have a fairly small impact on the probability of a core melt. “I’m not saying it’s of no benefit,” he adds, “but it’s a minimal benefit.”

The US nuclear industry is focusing on trying to reduce their operating costs by increasing the burnup and the enrichment of fuels

Edwin Lyman, Union of Concerned Scientists

This becomes a real issue when you try to extract more energy from the fuel. By increasing the burnup and keeping the fuel in the reactors for longer you increase the risk of fuel fragmentation and related issues, which could reduce the safety margins in a loss-of-cooling accident, like that at Fukushima. “If you just substituted one of these chromium-coated fuels and didn’t try to increase the burnup, then you might have an additional safety margin,” Lyman says. “But if you push the burnup to the limits of what the fuel can withstand, then you’re not realizing that safety benefit.”

John Allen of GE Research says that while the company will seek to recognize the other benefits of these advanced fuels, “the margin to safety will increase”. His colleague, engineer Evan Dolley, adds: “Our primary objective is to increase fuel reliability and safety, and provide more coping time. We’ve never veered from that.”

Meanwhile, Heuser believes the research community and the nuclear industry have found a viable solution for accident-tolerant nuclear fuels. “I think the future is bright for nuclear power. And I think it needs to be part of a balanced energy portfolio along with renewables,” he says. “We need to move away from fossil fuels because global climate change is a real problem to address and nuclear power can be and should be part of that solution.”

Positrons possess unexplored potential for cancer therapy

Spur model

Positron-emitting radionuclides have long been employed for diagnostic imaging, with PET scans using fluorine-18 (18F)-labelled fluorodeoxyglucose (FDG) playing an essential role in cancer diagnosis. But positrons could also be used to destroy cancer cells. Perhaps due to their prevalence within diagnostics, this therapeutic potential has to date been largely overlooked. A research team in Australia aims to address this oversight.

The researchers, from the University of Sydney, Royal North Shore Hospital and the Sydney Vital Translational Cancer Research Centre, demonstrated the first in vitro evidence of the therapeutic potential of positrons on prostate cancer cells. They also derived the radiobiological parameters for 18F positron emission, reporting their findings in Scientific Reports.

“We refer to it as positron emission radionuclide therapy, or PERT,” says senior author Dale Bailey.

Cell studies

When the radionuclide 18F undergoes decay it emits a positron (a beta-plus particle emitted from a proton-rich nucleus). The positron will ultimately annihilate with an electron, leading to the emission of two 0.511 MeV photons. And it is these photons that are detected to create PET images.
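The 0.511 MeV carried by each annihilation photon is simply the rest energy of the electron (and, equally, of the positron). A quick check of the arithmetic:

```latex
E = m_e c^2
  = (9.109\times10^{-31}\,\mathrm{kg})\,(2.998\times10^{8}\,\mathrm{m\,s^{-1}})^{2}
  \approx 8.19\times10^{-14}\,\mathrm{J}
  \approx 0.511\,\mathrm{MeV}
```

The pair’s combined rest energy of 1.022 MeV is shared equally between the two photons when the annihilation occurs essentially at rest.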

But before this final annihilation process, the positrons lose kinetic energy in discrete quantities (roughly 100 eV) via multiple interactions along their track, creating positron “spurs” – nano-sized spheres of electron/positive-ion pairs – and a terminal positron “blob”. These spurs and blobs are all sources of highly reactive species and deliver a relatively large radiation dose when they interact with biomolecules such as DNA.

To investigate the potential of positrons in cancer medicine, the researchers examined the survival of prostate cancer cells exposed to sodium fluoride (18F-NaF) solution for 18 h. They found that a dose of 20 Gy 18F positrons killed over 90% of the cells, while a 10 Gy dose caused 70% cell kill.

To quantify the relative biological effectiveness (RBE) of 18F positrons, the researchers compared their results with high-dose rate X-ray irradiation. They assessed cell survival at various absorbed doses for positrons and for X-rays from a small-animal radiation research platform (SARRP). By comparing the mean absorbed doses required for 50% cell survival, they calculated a mean RBE of 0.42 for 18F positrons relative to SARRP irradiation. This is three times higher than the RBE of radionuclides that emit beta-minus particles (electrons emitted from a neutron-rich nucleus), such as 90Y and 177Lu.
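The RBE comparison above reduces to a ratio of iso-effect doses. A minimal sketch is given below; note that the D50 values here are hypothetical, chosen only so the ratio reproduces the reported mean RBE of 0.42 (the study’s actual absorbed-dose values are not quoted in this article):

```python
def relative_biological_effectiveness(d50_reference, d50_test):
    """RBE = dose of the reference radiation needed for a given effect
    (here, 50% cell survival) divided by the dose of the test radiation
    needed to produce the same effect."""
    return d50_reference / d50_test

# Hypothetical iso-effect doses, chosen only to match the reported
# mean RBE of 0.42 for 18F positrons relative to SARRP X-rays.
d50_xray = 2.1      # Gy (illustrative)
d50_positron = 5.0  # Gy (illustrative)

print(round(relative_biological_effectiveness(d50_xray, d50_positron), 2))  # 0.42
```

An RBE below 1 means a larger positron dose is needed to match the X-ray effect; the point of interest in the study is that this value is nonetheless about three times that of beta-minus emitters.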

Takanori Hioki (left) and co-authors Yaser Gholami (centre) and Kelly McKelvey. (Courtesy: Takanori Hioki)

“Clinically speaking, the dose rate and linear energy transfer (LET) of positron emitters is expected to be higher than that of most beta-minus emitters that are currently used in radionuclide therapy, predominantly due to the relatively shorter half-life and more ionizing radiation of many positron emitters,” explains first author Takanori Hioki. “Furthermore, radionuclide therapy targets metastatic lesions, while external-beam radiotherapy is generally used for larger sites or primary lesions.”

Damage simulation

Hioki and colleagues also performed a Monte Carlo simulation of a linear DNA model to determine the frequency of DNA single strand breaks (SSBs) and double strand breaks (DSBs) caused by positron or beta-minus irradiation at kinetic energies from 250 eV to 1.5 keV. They observed that the lower energies produced larger numbers of SSBs and DSBs.

The simulation revealed that positron tracks induce 1.5- and 2.2-fold more SSBs and DSBs, respectively, than beta-minus tracks. The greatest difference occurred at 400 eV, where positrons caused a 55% increase in SSBs and a 117% increase in DSBs compared with beta-minus particles.

These results imply that the direct interaction of a single positron with DNA should create more lethal damage than that caused by a single beta-minus particle. “As each positron that is emitted loses energy as it interacts, an accumulation of the simulated interactions causes the total damage that we observe in the in vitro experiment,” Hioki notes.

Plotting the LET (a measure of how much energy an ionizing particle deposits per unit path length) revealed that maximum SSB and DSB production should occur at 250 eV (the kinetic energy near the end of the particle’s track) for both positrons and electrons. At this energy, a positron has roughly 7% higher LET than a beta-minus.

The spur model suggests that beta-minuses and positrons initially have similar radiation tracks, but behave differently at the lowest energies. For a sub-keV beta-minus, the mean separation between spurs is 20 times larger than the diameter of the DNA helix, while a sub-keV positron continuously forms spurs and builds up a blob along and at the end of its track. Thus, at sub-keV energies, a positron has a higher LET than a beta-minus. Additional ionization at the terminal annihilation event further increases the total number of ionizations per positron track.

“The biggest contribution to the higher DNA damage from positrons in comparison to beta-minuses is the higher LET of the particles at sub-keV energies, as well as contributions from the difference in charge,” says Hioki.

Dual role

The researchers point out that, in addition to an untapped therapeutic potential, positron-emitting radionuclides could also play a role in emerging theranostic strategies, for use as a combined therapeutic and diagnostic agent. For clinical use, however, the highly penetrating, low-ionizing nature of the emitted annihilation photons will require careful safety considerations when administering therapeutic doses of the radioactive compound.

“As this study demonstrated the therapeutic potential of positrons, we are currently working on the next step – to optimize the administered activities to maximize its treatment efficacy,” Hioki tells Physics World. “We are also performing biological assays to demonstrate the impact that positrons have on the cellular mechanisms that lead to the results we observed in the cell survival assays.”

New superheavy isotope and excited state could point the way to islands of stability

A new isotope of darmstadtium and a new excited state of copernicium-282 have been found in the decay chains of the superheavy element flerovium by an international team of researchers. The discoveries made by Anton Såmark-Roth of Lund University in Sweden and colleagues provide important clues to nuclear physicists trying to make long-lasting superheavy elements that lie within islands of stability.

First synthesized in 2002, oganesson has an atomic number of 118 and is currently the heaviest known chemical element. Since its discovery, researchers have continued their search for even heavier elements but face a major barrier: as elements become heavier, growing imbalances between proton and neutron numbers tend to make them increasingly unstable, which makes it ever harder to synthesize them in the lab.

This trend is reversed somewhat when nuclei contain “magic” numbers of protons or neutrons. This creates “islands of stability” in an otherwise turbulent part of the chart of nuclides – which plots nuclei in terms of their proton and neutron content. These islands could provide crucial stepping stones for researchers aiming to synthesize elements heavier than oganesson. One such island is believed to occur at around flerovium-298, which is predicted to be “doubly magic” for protons and neutrons. This isotope cannot currently be produced in the lab but is a target of great interest to nuclear physicists.

Alpha decay

In their experiment, Såmark-Roth and colleagues studied the decays of two lighter isotopes of that element: flerovium-288 and flerovium-286. These isotopes were created by firing an intense beam of calcium ions into a plutonium target, using the TASCA facility at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany. After its formation, the flerovium decays rapidly by emitting alpha particles (helium-4 nuclei), producing unstable nuclei that go on to decay themselves.

The team studied the decay chains using high-resolution nuclear spectroscopy, which involves measuring the different types of radiation emitted by decaying nuclei. Although dozens of chains were detected, two were of particular interest. One is flerovium-288 decaying to copernicium-284, which itself decays to darmstadtium-280 – an isotope that had not been observed before. In the second decay chain, flerovium-286 decays to an excited state of copernicium-282, a nucleus containing even numbers of both protons and neutrons – the first time an excited state has been observed in such a superheavy nucleus.

The observation of these decay chains and the existence of the excited state of copernicium-282 provide important information for physicists developing theoretical models of flerovium-298 and could also point physicists in the right direction to discover islands of stability.

The research is described in Physical Review Letters.

Prototype detector uses cosmic muons to scan shipping containers

A full-scale prototype muon tomograph that can peer inside cargo containers has been created by researchers in Italy and the US. The team, led by Francesco Riggi at the University of Catania, combined layers of muon detectors with a specialized reconstruction algorithm to deliver high-resolution 3D images of a small lead block inside a large sensing area. The technology could make it far easier for cargo authorities to stop dangerous nuclear materials from being transported illegally.

Cargo containers are widely used to transport high volumes of goods, and because of their robust steel construction and large size, small objects can easily be concealed inside them. As a result, there is a growing concern that containers could be used to transport illicit nuclear materials, including plutonium and uranium.

There is a need, therefore, for an efficient way of screening containers without disrupting global trade. Among the most promising emerging techniques for doing so is muon tomography, which uses the natural showers of muons created when high-energy cosmic rays collide with molecules in the upper atmosphere. When muons interact with a dense material such as uranium, they are scattered and absorbed in characteristic ways, depending on the atomic number of the material.

Egyptian pyramid

Muon showers strike all parts of the Earth and have been studied by physicists for nearly 90 years. As a result, scientists have a very good understanding of the energies, fluxes, and angular distributions of muons at moderate altitudes. By comparing these values before and after muons have passed through hidden volumes, researchers can determine both the compositions and positions of concealed materials. The technique is now being used in a growing number of research areas, and even led to the discovery of a large chamber inside an ancient Egyptian pyramid in 2017.

Muon tomography is particularly desirable because cosmic muons are uniformly available across the Earth’s surface. Furthermore, muons can penetrate far deeper into dense materials than conventional imaging methods, including X-rays. On the downside, however, because muon fluxes are relatively low, long scanning times are needed with current detector technologies.

Now, Riggi and colleagues have combined several techniques previously developed to overcome the flux problem to create a full-scale muon tomograph. Their setup features multiple layers of scintillator-based muon detectors positioned above and below the sensing area. These detectors track changes in the paths of muons as they are scattered by dense materials. Then, an algorithm analyses the muon trajectories to estimate the point of closest approach between the muons and heavy atomic nuclei. From this information, the team can create high-resolution 3D images of dense materials inside the sensing area.
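The point-of-closest-approach step can be sketched with the standard geometry for two straight tracks. This is a generic illustration, not the team’s actual reconstruction code, and all names and numbers are illustrative:

```python
import numpy as np

def poca(p1, d1, p2, d2):
    """Estimate a scattering location as the point of closest approach
    between an incoming muon track (point p1, direction d1) and an
    outgoing track (p2, d2). Returns the midpoint of the shortest
    segment joining the two lines, or None if they are parallel."""
    p1, d1, p2, d2 = (np.asarray(v, dtype=float) for v in (p1, d1, p2, d2))
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:   # parallel tracks: no unique closest point
        return None
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    # Midpoint of the closest-approach segment
    return 0.5 * ((p1 + t * d1) + (p2 + s * d2))

# Incoming track heading straight down; outgoing track deflected at a
# simulated scattering point of (0.5, 0.2, 0.0).
print(poca([0.5, 0.2, 2.0], [0, 0, -1], [0.6, 0.2, -2.0], [0.1, 0, -2]))
```

In a real tomograph, many such PoCA estimates from many muons are accumulated into a 3D density map, in which regions of high-Z material show up as clusters of scattering points.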

The technique enabled Riggi’s team to precisely determine the 3D position of a small lead block about 20 cm across within a sensing cross-sectional area of 18 m² – which is large enough to accommodate a standard cargo container. The team says that their successful demonstration paves the way for devices that can efficiently detect concealed nuclear materials. With further improvements to reduce scanning times, the inspection devices may one day become an integral part of cargo-handling facilities worldwide.

The research is described in the European Physical Journal Plus.

Copyright © 2026 by IOP Publishing Ltd and individual contributors