
How FRIB will expand our chemical universe

We could be on the cusp of the greatest single expansion of our known chemical universe in history, thanks to a new accelerator in Michigan that is set to double the number of known isotopes. This short video introduces the $730m Facility for Rare Isotope Beams (FRIB), which will come online at Michigan State University in 2022.

Find out more by reading ‘Biggest expansion of known chemical universe targeted by FRIB nuclear facility’, a feature originally published in the February 2021 issue of Physics World.

Biggest expansion of known chemical universe targeted by FRIB nuclear facility

“We’re going to double the number of known isotopes,” says Artemis Spyrou, an experimental nuclear physicist at the National Superconducting Cyclotron Laboratory (NSCL) in the US. “It’s crazy. We know more than 3000 isotopes at the moment. And we’re going to double that.”

It’s a bold mission statement, akin to the greatest single expansion of our known chemical universe in history. But that’s the lofty goal that Spyrou and others like her hope to achieve when their next-generation particle accelerator – the $730m Facility for Rare Isotope Beams (FRIB) – comes online at Michigan State University in 2022.

FRIB is just one example of a “big-science” facility – a term coined in the 1930s by the US nuclear physicist Ernest Lawrence at the University of California, Berkeley. Rather than having his team members working independently, Lawrence pooled his fellow physicists, encouraging them to collaborate and specialize. It was a philosophy that allowed the lab to build ever-bigger and more complex machines, eventually leading to the discovery of the antiproton and antineutron, as well as the creation of 15 chemical elements, none of which exist in any appreciable quantities on Earth.

At the heart of Lawrence’s work was the cyclotron – a type of particle accelerator for which he was awarded the 1939 Nobel Prize for Physics. Particles enter in the centre of the device, before being spun out in a spiral with the aid of an electromagnet, passing through two semi-circular electrodes. By alternating the voltage across the electrodes, the particles get attracted and repelled in a push-and-pull effect that accelerates them until they eventually leave the machine, where they are used for experiments.
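This push-and-pull trick works because, for non-relativistic particles, the orbital period is independent of radius, so the voltage can be alternated at a single fixed cyclotron frequency:

```latex
f = \frac{qB}{2\pi m}, \qquad r = \frac{mv}{qB}
```

Here q and m are the particle’s charge and mass and B is the magnetic field strength; as the speed v increases, the orbital radius r grows in proportion, tracing out the spiral.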

Big science, big thinking

Lawrence’s first cyclotron was a mere 10 cm in diameter and made from basic laboratory items. By 1932 he had designed a 69 cm machine and by 1939 his entire research team could sit on the housing of a 152 cm cyclotron, which would be used to discover the elements neptunium and plutonium. After the Second World War, even bigger machines – synchrotrons – were created from the cyclotron concept, in which particles are accelerated by being sent round and round a large, circular ring.

In the US, federal funding agencies don’t generally build infrastructure

David Morrissey, National Superconducting Cyclotron Laboratory

While the big-science spotlight might today be on particle physics – and especially the upgrade to CERN’s Large Hadron Collider – nuclear physics has plenty of big facilities too. The EU-funded Facility for Antiproton and Ion Research (FAIR) at Darmstadt in Germany, for example, is a heavy-ion research ring 1.1 km in circumference, which will (when it opens) have cost €1.3bn to build. Pushing the boundaries of science can no longer be done on a shoestring.

Indeed, new facilities are often so big and expensive that labs almost always have to enter partnerships with governments, with negotiations sometimes spanning decades before construction starts. In the case of FRIB, it was first conceived in the 2000s and has been more than a decade in the making, despite Michigan State University having had an accelerator facility on site since the 1960s. Funding comes from the US Department of Energy’s Office of Science (DOE-SC), the university and the Michigan state government.

“In the US, federal funding agencies don’t generally build infrastructure,” points out David Morrissey, a nuclear chemist at NSCL. “They prefer to build instruments and fund the science. So the university contributed the money for the infrastructure, and it is very good at buildings.” Now about 95% complete, FRIB was officially designated a DOE-SC scientific user facility in September 2020, with more than 1400 scientists from around the world poised to conduct research at the site.

But with such a high price tag – plus annual running costs of $100m – the scientists and engineers constructing FRIB have had to spend carefully. “We have to be absolutely clear how the money is spent to show it is used effectively and not just squandered in any way,” Morrissey says. “We’ve survived by being innovative and doing things that other people can’t.” And that means firing beams that don’t exist anywhere else on Earth.

Accelerating knowledge

Construction began in 2014, but experiments have continued throughout that time at NSCL’s two existing cyclotrons, which provide nuclear physicists with ion beams thanks to funding from the National Science Foundation. When I visited FRIB in 2019, I discovered a vast and impressive maze of metal stairwells and poured concrete, leading to giant machines humming with power.

Once FRIB comes online early in 2022, the two older machines will be decommissioned and the lab will switch to a superconducting linear accelerator, in which the radio-frequency modules that speed up particles are placed in a line, rather than a ring. “The difference will be a 1000-fold increase in beam intensity,” says Paul Mantica, FRIB’s project manager. “Currently we’re running about half a kilowatt of beam power on the target on average. When we run FRIB, it’ll be 400 kilowatts on the target.”

The linear accelerator will be able to create beams of essentially all known stable isotopes, with the isotopes having first been separated from mined substances either by weight or through chemistry before stripping them of electrons. Physicists can then fire trillions of ions of any particular type per second at targets of heavier nuclei. Usually, the ions will miss the nuclei. But if they hit, both the target nucleus and ion will break apart.

FRIB clean room

Amid the chaos of flying particles, two nuclei will very occasionally collide, fusing to form an unstable isotope. Nuclear physicists will isolate the isotope using a “separator” – essentially a sequence of magnets that can be configured so that everything else is deflected out of the way. The pure, unstable isotope can then be studied and characterized through spectroscopy.

The FRIB targets themselves are placed on a carbon wheel that will be spinning at 5000 revolutions a minute to prevent the beam’s intense heat from burning it. The beam will then pass through four sets of the most advanced separators in the world to remove the unwanted reaction products. Indeed, it’s the sum of these different parts – accelerator, separator and instruments – that makes FRIB stand out from the crowd.

Researchers at FRIB are planning to use the facility to carry out three different types of experiments. One will be to study beams that travel at half the speed of light, with the high energy overcoming the inherent repulsion between the ions and target nuclei, enabling (as mentioned above) new isotopes to be created. Another will be to stop the beam in a given gas, allowing scientists to “weigh” isotopes or do laser spectroscopy on them.

“The third area is taking those stopped beams and re-accelerating them, producing energies more appropriate for astrophysics reactions,” says Mantica. “You have a gas target, the beam will then hit that target, and simulate events that happen in novae, in stars.”

Underground, overground

With lots of researchers expected to use the facility, FRIB is designed to operate at more than 90% availability, running 24 hours a day, 365 days a year. The facility is also modular, allowing electromagnets, sensors or other parts to be removed and replaced quickly and remotely. In fact, as Mantica explains, “Our goal is not to access the accelerator unless we have to. As it’s superconductive, we have to keep it particle-free. Any vacuum connections we do are performed in a cleanroom.”

Most users won’t even see the machines responsible for their experiments, given that the 500 m accelerator is located more than 10 m underground. In fact, getting heavy ions to half the speed of light is an arduous process, requiring four different types of accelerating structures. A radio-frequency quadrupole structure will be used to accelerate the beam before it reaches the first section of the linear accelerator, which will use “quarter-wave” resonators. After passing through a “charge stripper”, which increases the charge state of the beam to boost the accelerator’s efficiency and reduce costs, the beam enters the second and third sections with “half-wave” resonators.

To oversee all these accelerating structures, the facility has some 19,000 electrical cables, with around 360 amplifiers controlling the superconducting resonators. Operation requires radio waves at frequencies of 80.5 MHz and 322 MHz. In fact, the facility had to obtain a radio-station licence in case of radio-frequency leaks, although in the end that proved unnecessary as FRIB has been designed so well that no waves will ever get out.

We’ve built FRIB for 100% helium recovery. Any helium used goes into a purifier and gets recycled back into the system

Paul Mantica, FRIB project manager

The superconducting resonators are made of pure niobium, cooled in 46 different cryomodules filled with liquid helium to keep the accelerator a few degrees above absolute zero (which allows it to be superconducting). Given that helium is such a precious resource, the facility takes its stewardship of this vital material seriously, with the gas stored in seven vast 110,000-litre tanks on the roof. “We’ve built FRIB for 100% helium recovery,” Mantica says. “Any helium used goes into a purifier and gets recycled back into the system.”

Indeed, almost everything at FRIB is designed to go unwasted. This includes the beam, the majority of which will travel straight through its target. “About 300 kilowatts go wholly unreacted,” Mantica adds. “So we have a beam dump, a drum of titanium filled with water.” The beam passes through the titanium and interacts with the water, making even more rare isotopes – longer-lived isotopes that the community is interested in and that can be harvested.

The heart of stars

FRIB is certainly an impressive facility, but the real benefit will come from the experimental results it generates. For Spyrou, the opportunities are tantalizing. “When you go just a few steps away from nuclei we’ve measured already, you find surprises all the time,” she says. “We have seen nuclei where, instead of all the neutrons being packed together in the nucleus, they are in a halo: one or more neutrons flying far away from the central nucleus. These are the kinds of things you discover far from stability.”

When you go just a few steps away from nuclei we’ve measured already, you find surprises all the time

Artemis Spyrou, National Superconducting Cyclotron Laboratory

These halo nuclei make the radius of the nucleus far larger than would be predicted by simple nuclear models, although they are relatively short-lived, with half-lives measured in milliseconds. “Lithium-11 [which has four more neutrons than its most common isotope] has the size equivalent of a lead nucleus,” Spyrou says. “There are really basic properties of nuclei that we would never discover if we didn’t have facilities like here, and who knows what we’re going to discover.”

figure 1

Typically, all known isotopes are represented on the chart of nuclides: a plot with the number of neutrons as its x axis, protons as its y axis (figure 1). All the known, stable isotopes form a long, snaking diagonal line: any isotope above the line has too few neutrons to be stable (it’s “proton rich”), while any isotope below the line has too many to be stable (it’s “neutron rich”). Many of the expected proton-rich isotopes have been discovered, but the area of potential neutron-rich nuclides has barely been scratched.

“Physics is just crawling in a limited area [of the chart],” says Witold Nazarewicz, FRIB’s chief scientist. “And that’s the role of FRIB. We’re going to probe reaction mechanisms experimentally. With rare isotope beams, we can probe more neutron-rich nuclei, which can then guide future experiments.”

As an astrophysicist, Spyrou is especially interested in the opportunity to explore the nuclear processes in the heart of stars – something so complex that not even the world’s most advanced supercomputers can model it with any real accuracy. “We’ll finally be able to make the nuclei that are actually there, in a star, right now!” she enthuses. Her particular interest is in neutron-star collisions, and particularly the “r process” – a set of nuclear reactions that are responsible for about half of all nuclei heavier than iron.

The r process involves heavy seed nuclei capturing neutrons so fast that the nuclei don’t have time to radioactively decay. “It’s kind of a big mess, because neutron captures are tricky – even experimentally we can’t measure that, so there are entire areas we don’t know anything about,” Spyrou says. “But here [at FRIB] we can measure all the properties of those nuclei, and then extrapolate and say ‘OK, the process probably looks like this based on what we know today.’ How heavy is each nucleus? How long does it live for? We’ll actually be able to measure this and put it into a model that describes a neutron-star collision.”

Given the thousands of possible isotopes produced – and the multitude of possible reactions that could produce them – this means experimentalists and theorists at the facility have to operate hand in hand. “I work closely with a modeller,” Spyrou says. “And they’ll say that, out of the 10,000 reactions that are happening in a neutron-star merger, these 20 are the most important ones. As an experimentalist, I’ll then go and see what a facility like this can provide: what can my equipment measure?”

supernova

If the new measurements agree with theoretical predictions, it confirms that the researchers’ models work well. But if experiment and theory don’t tally, then both experimentalists and theorists have to figure out where the differences are coming from. “This is how new physics is discovered,” Spyrou says.

Those not involved in nuclear physics might well wonder if these rare isotopes – and the potential uncertainties of models that can only be seen through countless experiments – are really worth building a billion-dollar facility to study, especially when finite resources could be spent on applied technologies. For the FRIB team, however, real-world applications – for example, in medicine, energy, security and materials – are an important part of its research programme.

Indeed, Morrissey argues that big-science facilities are a good investment, citing the use of fluorine-18 in medicine. An isotope of fluorine with one neutron less than the element’s stable form (fluorine-19), it decays by emitting positrons, which annihilate with electrons to produce detectable gamma rays. Fluorine-18 has proved vital in positron emission tomography (PET) scanners, but its development relied on someone figuring out how to make fluorine-18 as a nuclear-physics target that they could then do chemistry on. “You don’t know before you start the experiment if someone will have a need [for what we create],” Morrissey says.

Spyrou agrees that nuclear physics is worth spending money on. “In 1944 scientists discovered americium-241,” she says. “Then someone else came along and realized you could use it to make a smoke detector. And when technetium-99m was discovered in 1938, they didn’t realize it was going to be used for medicine; now every hospital has it.” The same will be true for FRIB, she believes. “I don’t know how FRIB’s discoveries will be important. But I do know they will be.”

Quantum gate teleportation connects atomic qubits in two labs

Researchers in Germany have performed a quantum gate operation between two quantum bits (qubits) in different laboratories. This marks a step towards distributed quantum logic, whereby system designers could build modular quantum computers, spreading qubits between different devices while allowing them to behave as one computer. Distributed systems would avoid crosstalk between qubits, which degrades quantum computations.

Adding qubits to a quantum computer is far trickier than adding bits to a classical one, as each qubit (which may be a trapped ion, a superconducting circuit, a diamond nitrogen–vacancy centre or many other physical manifestations of a quantum state) must be able to undergo the necessary logical interactions while also being protected from noise – which can destroy quantum information.

A significant noise source is interference between multiple qubits. “Let’s say there are three or four qubits in one device and you want to do a gate between just two of them,” explains Severin Daiss of the Max Planck Institute of Quantum Optics in Garching. “As they are all in one device you can still have crosstalk of those two qubits with the other qubits that should not participate in the calculation.” The more qubits are added to a single device, the more severe the crosstalk problem becomes. Other factors that cause problems in specific platforms are the difficulty of addressing specific qubits in large registers, restricted space, and problems with heat removal from large cryogenic samples.

Multiple devices

One possible way to scale up a quantum computer without scaling up the attendant problems would be to spread the qubits between multiple devices. However, this would require integrating the quantum logical operations performed on each device: “If you just calculate one result with one module and send the state to another module, you’re still not increasing the computational space that you have,” explains Daiss. “Quantum gate teleportation” – the construction of quantum gates whose output is conditional on the state of an input gate elsewhere – has therefore become an active field of research. Such gates have been demonstrated between ions in the same trap and superconducting circuits in a single cryostat, and one with photonic qubits, albeit with a tiny success rate.

In the new research, Daiss and colleagues led by Gerhard Rempe unveil a radically different, conceptually simpler gate that is based on the interaction of a single photon with modules in two different laboratories. In each laboratory, they set up an optical cavity containing a single rubidium atom, and they link the two systems using a 60 m optical fibre. To implement the gate, they send a photon as a “flying qubit” along the fibre and reflect it successively from the two cavities, thereby entangling its polarization with the rubidium energy levels. A measurement of the photon is then combined with a conditional feedback on the qubit to realize a CNOT gate – one of the key components of quantum logic.
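As a toy illustration only (a sketch of the CNOT truth table, not a model of the photonic protocol itself), the gate flips the target qubit exactly when the control qubit is in state |1⟩:

```python
import numpy as np

# Two-qubit basis order: |00>, |01>, |10>, |11> (control qubit written first).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def apply_cnot(state):
    """Apply the CNOT gate to a two-qubit state vector."""
    return CNOT @ state

# Control |1>, target |0>: the target flips, so |10> becomes |11>.
flipped = apply_cnot(np.array([0, 0, 1, 0], dtype=complex))
# Control |0>: nothing happens, so |01> stays |01>.
unchanged = apply_cnot(np.array([0, 1, 0, 0], dtype=complex))
```

Acting on superposition states, the same matrix is what entangles control and target, which is why a CNOT between distant modules is such a useful primitive.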

Heralded quantum gate

The protocol produces a “heralded” quantum gate in which the detection of the photon signals a successful gate operation. In future, this could prove crucial to producing a reliable quantum computer, as such confirmation that each successive gate has worked is important if multiple gates are connected in sequence. Other platforms could produce quantum gates using the researchers’ protocol, says Daiss, provided the qubit can be coupled sufficiently strongly to a cavity or resonator; such strong coupling has already been achieved with both trapped ions and superconducting qubits.

In future, says Daiss, a next step would be to connect together modules comprising more than one qubit, and to produce computers with more than one module: “We could go in either direction, and both directions will benefit from the work we’re doing at the moment,” he concludes.

Ronald Hanson of Delft University of Technology in the Netherlands believes the paper marks an important step forward: “They just have this one photon scattering off one side, going to the other side and then you measure it. Conceptually it’s super simple, and they show that it works,” he says. “So it’s the fact that it’s heralded, and its efficiency – I think that’s the real novelty of the work.”

The research is described in Science.

Open-source algorithm predicts heart attack risk from chest CT scan

Chest CT scans

A heart attack occurs when the coronary arteries responsible for supplying the heart with blood and oxygen become blocked. This does not happen overnight and is usually the result of fatty or calcium plaques being slowly deposited. At first, these plaques hinder the efficient supply of blood to the heart muscle (myocardial perfusion). Eventually, the rupture of one plaque can cause a blood clot to form, blocking the coronary arteries and preventing myocardial perfusion. For this reason, coronary artery calcification is an important and independent predictor of adverse cardiovascular events such as heart attacks.

But despite this knowledge, and the fact that it can be assessed from any chest CT scan, quantification of coronary artery calcium (CAC) is not automatically integrated in the patient pathway as it requires radiological expertise, time and specialized equipment. To remedy this, a multidisciplinary team from Brigham and Women’s Hospital’s Artificial Intelligence in Medicine Program led by Hugo Aerts, and the Massachusetts General Hospital’s Cardiovascular Imaging Research Center led by Udo Hoffmann, developed and tested a deep-learning algorithm that can automatically quantify CAC from any chest CT scan. Their findings are reported in Nature Communications and the algorithm is available as free open-source software at the AIM website.

“In theory, the deep-learning system does a lot of what a human would do to quantify calcium,” said first author Roman Zeleznik. “Our paper shows that it may be possible to do this in an automated fashion.”

Developed from 1636 scans, tested on 20,084 patients

To develop the algorithm, the researchers used scans from the Framingham Heart Study – a seminal yet ongoing cardiovascular study that since 1948 has been investigating the health of the residents of Framingham, MA. They used 1636 CT scans from the study (acquired from the third generation onward) to identify and quantify CAC, using manual segmentations performed by expert CT readers as ground truth to train the deep-learning system.

The deep-learning system uses three consecutive convolutional neural networks to predict the heart centre, segment the heart, and segment and identify coronary calcium in less than 2 s. It then computes the CAC scores and stratifies them into clinically relevant categories: very low (CAC=0); low (CAC=1–100); moderate (CAC=101–300); and high (CAC>300).
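The stratification step is simple enough to sketch in a few lines; the function below is a hypothetical re-implementation of the cut-offs quoted above, not the authors’ code:

```python
def stratify_cac(score):
    """Map a coronary artery calcium (CAC) score to the four
    clinically relevant categories used in the study."""
    if score <= 0:
        return "very low"   # CAC = 0
    if score <= 100:
        return "low"        # CAC = 1-100
    if score <= 300:
        return "moderate"   # CAC = 101-300
    return "high"           # CAC > 300
```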

The strength of the study lies in the breadth and scope of the datasets that the deep-learning system was subsequently tested on. The team used four cohorts focusing on different pathologies: 663 Framingham Heart Study participants who underwent cardiac CT (none of whom were in the training group); 14,959 heavy smokers having lung cancer screening CT (NLST trial); 4021 patients with stable chest pain having cardiac CT (PROMISE trial); and 441 patients with acute chest pain having cardiac CT (ROMICAT-II trial).

First, the team investigated whether the system was accurate in 5521 scans where experts’ measurements were available. The algorithm’s CAC scores were highly correlated with the manual scoring, and most differences occurred between adjacent risk categories.

Predicting future cardiovascular events

The team also investigated the predictive potential of their system. As patients enrolled in these studies usually had a follow-up visit, the researchers looked for correlations between CAC scores and cardiovascular events. For example, in the NLST study, median follow-up time was 6.7 years after the first CT scan was acquired. They saw a clear association between CAC scores and death from cardiovascular disease: taking the very low CAC score group as reference, the low, moderate and high score groups had 57%, 179% and 287% more such deaths, respectively. Similar trends were observed in the other studies.

The diversity of the datasets used strengthens the generalizability of these results to clinical settings. “The coronary artery calcium score can help patients and physicians make informed, personalized decisions about whether to take a statin [a medication which reduces cholesterol and the risk of heart attack]. From a clinical perspective, our long-term goal is to implement this deep-learning system in electronic health records, to automatically identify the patients at high risk,” concludes co-senior author Michael Lu.

Complex light waves measure hidden objects

Researchers in Europe have developed a technique for obtaining accurate laser-based measurements of objects hidden behind complex scattering structures. They say that their mathematical formula, which calculates the optimal waveform needed for the scattering environment, can also be applied to other types of waves.

Laser beams are great for making precision measurements of objects – but only if the object can be seen clearly. If the object is behind or within a disordered environment – such as a cloudy pane of glass or complex biological tissue – these measurements become much more challenging. Either medium scatters and alters the light waves as they pass through, so the object can no longer be seen clearly, making it hard to obtain useful measurements.

The difficulty with such situations is that the scattering medium is overwhelmingly complex and unknown. This makes developing a universal approach for imaging through complex scattering systems challenging. To overcome this, Dorian Bouchet and colleagues at Utrecht University and TU Wien developed a mathematical formula that characterizes the scattering behaviour of the system.

Their analysis shows exactly how the disordered medium is affecting the light beam. This allows them to then create a complex wave pattern that gets scattered and altered by the disturbing environment to create the optimal beam of light needed to make the measurements. “You don’t even need to know exactly what the disturbances are,” explains Bouchet, now at Université Grenoble Alpes. “It’s enough to first send a set of trial waves through the system to study how they are changed by the system.”

The researchers also tested this idea experimentally. At Utrecht University, they shone a continuous-wave solid-state laser emitting at 532 nm through a diffuser – the disordered medium. They then successfully measured objects on the other side with nanometre precision, after calculating the optimal light waves needed.

“We characterize the medium by successively sending 2400 plane waves with different propagation directions, and by measuring the light scattered from the medium for each one of them,” Bouchet tells Physics World. “We also measure how this scattered light changes as a function of the observable that we want to precisely measure. Remarkably, even though we first illuminate the medium with plane waves only, we are able to predict what will happen for any other kind of wave that we can use to illuminate the medium.”
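One way to make this concrete – and this is a hedged numerical sketch with random matrices standing in for real measurements, not necessarily the researchers’ actual criterion – is to estimate how the measured transmission matrix changes with the observable θ, then pick the input wavefront whose output is most sensitive to θ (the leading right-singular vector of dT/dθ):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 64  # number of trial input modes (e.g. plane-wave directions)

# Stand-ins for measured transmission matrices: T at the current value of
# the observable theta, and T after a small known shift d_theta.
T = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
d_theta = 1e-3
T_shifted = T + d_theta * (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

# Finite-difference estimate of how the output field responds to theta.
dT_dtheta = (T_shifted - T) / d_theta

# The input wavefront whose output changes most with theta is the leading
# right-singular vector of dT/dtheta; its output sensitivity is the top
# singular value.
_, s, Vh = np.linalg.svd(dT_dtheta)
optimal_input = Vh[0].conj()
sensitivity = np.linalg.norm(dT_dtheta @ optimal_input)
```

In the experiment, the 2400 plane waves play the role of the trial inputs used to build T; because they form a basis, any other illumination can be expanded in them, which is why the plane-wave characterization suffices to predict the response to an arbitrary wave.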

The researchers say that because the results of the scattering analysis are universally applicable, they can be transferred to other types of waves, such as acoustic waves or those in the microwave regime.

The work was linked to a programme for nanometre-scale imaging of semiconductor structures, Bouchet tells Physics World, noting that the production of computer chips could be a good application for this technique as extremely precise measurements are indispensable. But the researchers also believe that it could have applications in biology. “However, we first need to address the question of what will happen for a medium that changes in time, for instance due to the flow of blood in vessels,” Bouchet says.

As well as investigating time-dependent media, the team is also trying to further increase the precision of measurements by using non-classical states of light. These are more difficult to generate, Bouchet says, but can in principle lead to more precise measurements.

The researchers report their findings in Nature Physics.

Laser light induces giant current in topological material

Weyl points in a Dirac semimetal

Researchers in the US have induced a huge and nearly dissipationless current in zirconium pentatelluride – a technologically important material that exhibits strong topological electronic effects – by illuminating it with laser light. The mechanism responsible for the effect, which occurs as the material’s crystal lattice “twists”, might find use in quantum computing applications and high-speed, low-power electronic devices.

Topological insulators are electrically insulating in their bulk, but they can conduct electricity extremely well on their surfaces (or edges) via special electronic states that are protected from fluctuations in their environment. Within these states, electrons can only travel in one direction (or channel) and do not backscatter. Since backscattering is the main dissipating process in electronics, topological insulators can carry electrical current with near-zero dissipation. This means they could be used to make electronic devices that are far more energy-efficient than any that exist today. The dissipationless electric currents in topological materials could also become the basis for quantum bits (qubits) because the materials shield fragile quantum states from impurities and lattice vibrations.

For physicists, zirconium pentatelluride (ZrTe5) is a particularly interesting topological material because it exists in a wide variety of topological phases. Indeed, depending on how it is configured, it can behave as a weak or strong topological insulator, a 3D quantum Hall state or a Dirac semimetal. In the latter phase, ZrTe5 exhibits exotic electron conduction behaviour thanks to the unique state of its crystal lattice and electronic structure, which protect electrons from backscattering, explains Jigang Wang, a physicist at the US Department of Energy’s Ames Laboratory and Iowa State University. “The anomalous electron transport channels protected by the symmetry and topology in ZrTe5 don’t normally occur in conventional metals such as copper,” he says.

Light-induced giant current

Wang and colleagues at Brookhaven National Laboratory and the University of Alabama at Birmingham discovered that they could generate a huge current in ZrTe5 by exposing the material to laser light at terahertz frequencies. This current arises because light at these frequencies triggers vibrations, or phonons, in the material’s crystal lattice that distort its symmetry.

The researchers explain that this effect, which they term phononic symmetry switching, gives rise to points in the twisted lattice where electrons behave like Weyl fermions – massless particles capable of carrying the sought-after, protected, low-dissipation current.

Weyl fermions were first predicted in 1929 by the theoretical physicist Hermann Weyl, who identified them as possible solutions of the Dirac equation. Electrons of the Weyl type behave very differently to electrons in ordinary metals or semiconductors. Indeed, Wang and colleagues found that the electrons in their twisted ZrTe5 travel at very high velocities – approaching a fraction of the speed of light (about c/300) – over distances as long as 10 microns.

Universal topology control principle?

The researchers, who report their work in Nature Materials, note that phononic symmetry switching enables them to control the flow of electrons without using electric or magnetic fields. Such a fast, low-energy, symmetry-based switch was lacking until now, says team member Qiang Li, who heads the Advanced Energy Materials Group at Brookhaven. “Our results add to a broader picture of quantum control of topological systems with symmetry selective infrared and Raman coherent phonon pumping suitable for these applications,” Li tells Physics World.

The team says it now plans to extend its method to explore other topologically-based ways of controlling electron flow in materials relevant for topological effect transistors, quantum computing and spintronics. “This light-induced Weyl semimetal transport and topology control principle appears to be universal and will be very useful in the development of future quantum computing and electronics with high speed and low energy consumption,” says team member Ilias Perakis of Alabama-Birmingham.

US stamp features parity-violation pioneer Chien-Shiung Wu, a quantum of softness

Parity pioneer Chien-Shiung Wu

The US Postal Service (USPS) has issued a commemorative stamp honouring the Chinese-American physicist Chien-Shiung Wu. The 1957 Nobel Prize for Physics was shared by Chen Ning Yang and Tsung-Dao Lee “for their penetrating investigation of the so-called parity laws which has led to important discoveries regarding the elementary particles”. However, some physicists argue that Wu should have shared the prize for providing the experimental evidence for Lee and Yang’s theoretical prediction of parity violation. Wu now follows in the footsteps of Albert Einstein, Enrico Fermi, Richard Feynman and Maria Goeppert Mayer with a commemorative stamp. Just a shame she did not bag a Nobel prize too.

We love all things quantum here at Physics World, and it seems like we are not alone. Quantum Quilts toilet tissue is made by the UK’s Leicester Tissue Company and it has been spotted at the discount retailer Poundland. The photo below was posted on Twitter by ieva with a rather cheeky caption.

Scroll down the replies to ieva’s tweet and you will find the alternative captions “Entanglement-enhanced absorption” and “Spooky wiping at a distance” – and there is also a tweet from someone who uses an image of the related brand Quantum Soft as their background photo.

Nanotube artificial muscles pick up the pace

An electrochemically powered artificial muscle made from twisted carbon nanotubes contracts more when driven faster thanks to a novel conductive polymer coating. The device, which was developed by Ray Baughman of the University of Texas at Dallas in the US and an international team of collaborators, overcomes some limitations of previous artificial muscles, and could have applications in robotics, “smart” textiles and heart pumps.

Carbon nanotubes (CNTs) are rolled-up sheets of carbon with walls as thin as a single atom. When twisted together to form a yarn and placed in an electrolyte bath, these hollow carbon cylinders can be made to expand and contract in response to electrochemical inputs, much as a human or animal muscle does. In a typical setup, a voltage difference, or potential, between the yarn and a counter electrode drives ions from the electrolyte into the yarn, causing the “muscle” to actuate.

While these electrochemically driven CNT muscles are highly energy efficient and extremely strong – they can lift loads up to 100,000 times their own weight – they do have limitations. The main one is that they are bipolar, meaning that the direction of their movement switches whenever the potential drops to zero. This effect reduces the overall stroke of the actuator. Another drawback is that the muscle’s capacitance – that is, its ability to store the charge it needs to expand or contract – decreases when the potential is scanned more quickly, which also causes the stroke to decrease.

Polymer “guest”

In this study, as in their previous work, Baughman and colleagues created their artificial muscle from a “forest” of CNTs all vertically aligned in the same direction. Next, they drew a thin sheet of nanotubes from the forest and twisted it to make a yarn containing helices of intertwined CNTs. In the final step, which was unique to this series of experiments, they coated the interior surfaces of the CNTs with an ionically conducting polymer that contains either positively or negatively charged chemical groups.

The first polymer “guest” material the group studied was poly(sodium 4-styrenesulphonate), PSS. The resulting structure is known as a PSS@CNT yarn and contains around 30 percent PSS by weight. To determine the zero-charge potential of this yarn – that is, the potential at which the stroke switches direction – the researchers used a technique called piezoelectrochemical spectroscopy, which they developed themselves. They then tested their yarns in baths of either aqueous or organic electrolytes.

Bipolar to unipolar

Baughman and colleagues, who report their work in Science, found that the polymer coating converts the normally bipolar actuation of CNT yarns into unipolar actuation. In other words, the coated muscle actuates in only one direction over the entire potential range at which the electrolyte remains stable.

The team’s explanation for this unusual behaviour is that the dipolar field of the polymer shifts the yarn’s zero-charge potential to a value that lies outside the electrolyte’s stability range. This means that ions of only one polarity (positive or negative) are driven into the yarn, explains team member Zhong Wang. Hence, the muscle’s stroke changes in only one direction before the direction of voltage change reverses. Team member Jiuke Mu adds that the number of electrolyte molecules that are electro-osmotically pumped into the muscle also increases the faster the potential is changed, or scanned, across its range.

As for the new unipolar muscles’ performance, the researchers found that the maximum average output mechanical power they generate is 2.9 W per gram of muscle. This is about 10 times the typical capability of human muscle, Mu says, and about 2.2 times the weight-normalized power capability of a turbocharged V-8 diesel engine.

Dual-electrode, all-solid-state yarn muscle

In the final stage of their research, the scientists demonstrated that they could combine two different types of unipolar yarn muscles to make a dual-electrode, all-solid-state yarn muscle, thereby dispensing with the need for a liquid electrolyte bath. Here, Wang explains that a solid-state electrolyte laterally interconnects two coiled CNT yarns containing different polymer guests – one with negatively-charged substituents, and the other with positively charged ones. The injection of positive and negative ions means that both yarns contribute to actuation during charging, Wang says. He suggests that such dual electrode unipolar muscles could, in the future, be woven together to make actuating textiles that “morph” in response to electrical stimuli.

Members of the team, which includes scientists from the University of Illinois at Urbana-Champaign, Changzhou University, Jiangsu University, the Harbin Institute of Technology, Hanyang University, Seoul National University, Deakin University, the University of Wollongong, Opus 12 and MilliporeSigma, now plan to exploit these muscles in robots and artificial limbs as well as textiles.

Converted clinical linac delivers FLASH radiotherapy

Researchers from Dartmouth have developed a method to convert a standard clinical linear accelerator (linac) used for radiation therapy to deliver a FLASH-capable, ultrahigh-dose rate (UHDR) radiotherapy beam. The process, which uses existing accessories, takes only 20 minutes to perform, or to reverse.

UHDR radiotherapy delivers radiation at dose rates hundreds, or even thousands, of times higher than those used in conventional treatments, leading to a phenomenon commonly referred to as the FLASH effect. Adapting a linac to deliver radiation at 300 Gy/s rather than 0.1 Gy/s enables treatment to be completed in 6 ms instead of 20 s. Crucially, preclinical research with laboratory animals has shown that these high dose rates significantly reduce toxicities to surrounding healthy tissue while maintaining anti-tumour activity.
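The time saving follows directly from dividing the dose by the dose rate. A quick sketch, assuming a nominal 2 Gy fraction (a typical conventional dose, our assumption rather than a figure from the study):

```python
# Treatment-time arithmetic behind the quoted dose rates,
# assuming a nominal 2 Gy fraction (our assumption, for illustration).
dose = 2.0                # Gy
conventional_rate = 0.1   # Gy/s
uhdr_rate = 300.0         # Gy/s

t_conventional = dose / conventional_rate   # delivery time at conventional rate
t_uhdr = dose / uhdr_rate                   # delivery time at UHDR rate
print(f"conventional: {t_conventional:.0f} s, UHDR: {t_uhdr*1e3:.1f} ms")
```

This gives 20 s at the conventional rate and about 6.7 ms at the UHDR rate, matching the figures quoted above.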

“We believe this is the first reversible UHDR beam on a clinically used linac where the beam can be used in the conventional geometry where patients are on the treatment couch,” says Brian Pogue, of the Thayer School of Engineering and the Norris Cotton Cancer Center.

Writing in the International Journal of Radiation Oncology, Biology, Physics, Pogue and colleagues describe the procedures and guidelines that they developed to deliver UHDR to a treatment room isocentre.

The team converted a Varian Clinac 2100 C/D to deliver UHDR electron beams using existing accessories including jaws, applicators and cutouts. The conversion was performed by setting the treatment console to “service mode” and manually resetting some key components of the treatment delivery system – the carousel, air valve and target – with the gantry angle set at 90 degrees to access these components.

Modifications (which could be completed within 20 minutes) included retracting the X-ray target and flattening filter from the beam’s path, positioning the carousel on an empty port and selecting 10 MV photon beam energy in the treatment console to deliver electron beams. To convert the linac back for conventional radiotherapy use, this process is simply reversed.

Following the conversion, the researchers used film and an optically stimulated luminescent dosimeter (OSLD) to measure dose-rate, surface and depth–dose profiles in solid water phantoms. They used a fast photomultiplier tube-based Cherenkov detector to measure per pulse beam output at a 2 ns sampling rate.

FLASH accomplished

Lead authors Mahbubur Rahman and Ramish Ashraf of the Thayer School of Engineering and colleagues report that the converted system could achieve dose rates of up to 290±5 Gy/s at the isocentre (100 cm source-to-surface distance), well above the reported 40 Gy/s threshold needed to potentially achieve the FLASH effect. The doses measured from simultaneous irradiation of film and OSLD agreed to within 1%. The radial symmetry of the beams was within 0.2% at 290 Gy/s. The Cherenkov detector showed that the linac required a ramp-up period for the first 4–6 pulses before the output stabilized to within 3%.

Based on these findings, the researchers believe that with further tuning of the beam output and reduced source-to-surface distance, they could achieve dose rates of up to 600 Gy/s. The variability of the radiation dose from the first few pulses may require a dose monitoring and stopping system for future clinical translation studies, and upwards of 10 pulses may be needed when performing preclinical animal investigations. Work is underway to develop a low-cost translatable controller circuit that could be used with the converted linac.
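To see why a shorter source-to-surface distance (SSD) helps, one can sketch a simple inverse-square scaling of the measured rate. This is a point-source approximation of our own, not a description of the team's actual tuning method:

```python
# Sketch of how dose rate rises at shorter source-to-surface distance (SSD),
# assuming simple inverse-square scaling (a point-source approximation; this is
# our illustration, not the team's stated method).
measured_rate = 290.0   # Gy/s, measured at the reference SSD
ssd_ref = 100.0         # cm, reference SSD at isocentre

def rate_at(ssd_cm):
    """Estimated dose rate at a given SSD under inverse-square scaling."""
    return measured_rate * (ssd_ref / ssd_cm) ** 2

print(f"at 70 cm SSD: ~{rate_at(70):.0f} Gy/s")
```

Under this approximation, reducing the SSD to about 70 cm would already bring the rate close to the 600 Gy/s figure, before any retuning of the beam output.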

The researchers are currently using the UHDR beam in preclinical studies on experimental animal tumours, as well as in clinical veterinary treatments. Murine studies are underway to examine the nature of the normal-tissue sparing that the FLASH effect confers. Veterinary treatments of dogs with sarcomas are being used to test the ability to deliver this beam safely.

“Further oxygen consumption from the FLASH beam has been widely postulated to be one of the factors that could lead to the FLASH effect of normal tissue damage reduction, and so, in vivo studies of this effect are being completed,” Pogue tells Physics World.

“Additionally, radiation oncologists and dermatologists have joined the team to design a human safety clinical trial using FLASH radiotherapy to treat patients with advanced skin lesions that cannot be surgically removed,” he adds. “There are a number of advanced lesions where, because of poor perfusion or lesion location, surgical removal would not be ideal. These might be better treated by radiotherapy, especially if there is a slightly enhanced sparing of the normal skin from the FLASH effect. The team is planning for this future trial to evaluate the safety of UHDR delivery in human treatment.”

Earthquake intensities could be reduced by injecting fluids, soggy paper experiment reveals

Seismic kitchen roll

A seismic model based on kitchen roll (paper towels) has been used to show that the intensity of earthquakes can be reduced by at least a factor of ten by injecting fluid into the ground. Designed by researchers in France, the model could lead to proactive seismic interventions that reduce the risk of disasters, and could also help to guide industrial projects involving underground fluid injection.

Earthquakes occur when elastic energy stored in the Earth’s crust is suddenly released by movement along a fault line. This motion can result naturally from a critical build-up of tension, or it can be triggered accidentally by the injection of fluids into the ground near a fault. The latter can occur during oil extraction, wastewater disposal or deep geothermal projects. Understanding the nature of such induced seismicity is key to mitigating risk.

In their study, geophysicist Ioannis Stefanou and colleagues at the Ecole Centrale de Nantes modelled a simple dip–slip fault by clamping kitchen roll – or, as the team calls it, “absorbent porous paper” – between an anchor and a spring. The spring is slowly stretched, increasing the tension in the model system in a manner analogous to how tension builds in a seismic system. When the tension is sufficient, the paper tears and the spring releases the equivalent of the energy of a magnitude-5.9 earthquake, were the model scaled up to represent a real, 6.5 km-long fault.
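To put a magnitude-5.9 event on an energy scale, one can use the standard Gutenberg–Richter energy relation, log10(E) = 1.5M + 4.8 with E in joules. This is our own illustrative estimate, not a figure from the study:

```python
# Rough energy scale of a magnitude-5.9 event, via the standard
# Gutenberg-Richter energy relation: log10(E) = 1.5*M + 4.8, E in joules.
# An illustrative estimate of ours, not a number from the paper.
def seismic_energy(magnitude):
    """Radiated seismic energy in joules for a given magnitude."""
    return 10 ** (1.5 * magnitude + 4.8)

E = seismic_energy(5.9)
print(f"~{E:.1e} J")
```

The result is roughly 4.5 × 10^13 J, on the order of ten kilotons of TNT equivalent.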

Reduced friction

The paper is divided into strips, which the researchers used to represent segments of the fault, each under the influence of a different injection well. On a real fault, the injection of pressurized fluids causes the apparent friction of the system to drop, with the potential to reactivate the fault and initiate slip. In the model, a comparable stress drop can be brought about by wetting the kitchen roll, which reduces the strength of the paper prior to failure.

“By progressively wetting the porous paper, we simulated fluid injections in the Earth’s crust. Each injection was accompanied by tremors, which progressively released energy and modified the energy budget of the system,” explains Stefanou. “Our experiments show that, without precise knowledge of the fault properties, we risk destabilizing the system and provoking a large seismic event,” he continues, noting the relevance to real-world fluid-injection scenarios that could induce seismicity.

“However,” he adds, “provided that the model’s key parameters – fault segmentation, segment-activation rate, and stress state – are well known or controlled, the natural rupture can be mitigated by at least one unit.” Mitigating the fault’s stored energy by provoking small quakes was only possible, the team found, when the stress state of the system was low enough at the start that the whole fault region was not reactivated.
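A one-unit reduction in magnitude is a large effect. On the standard scales, ground-motion amplitude grows as 10^ΔM and radiated energy as 10^(1.5ΔM); the following illustrative calculation (ours, not the paper's) shows how a one-unit drop relates to the "factor of ten" intensity reduction quoted earlier:

```python
# Scale of a one-unit magnitude reduction (illustrative, not from the paper).
# Ground-motion amplitude scales as 10^dM; radiated energy as 10^(1.5*dM).
dM = 1.0
amplitude_factor = 10 ** dM          # ~10x smaller shaking amplitude
energy_factor = 10 ** (1.5 * dM)     # ~32x less radiated energy
print(f"amplitude: {amplitude_factor:.0f}x, energy: {energy_factor:.0f}x")
```

So mitigating a rupture by one magnitude unit cuts shaking amplitude by a factor of ten and radiated energy by a factor of about 32.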

Introducing heterogeneities

The concept could be adapted to mirror different types of fault configuration by changing the geometry of the paper samples, Stefanou says, as long as the scaling law connecting the physical model with the real-world system is updated as well. “Using, let’s say, tape or glue, one could introduce heterogeneities or reinforce and waterproof some parts of the samples,” he explained. “This would change the local stress state, create permeability paths and give more options for new experiments.”

Stefan Nielsen, a seismologist at the University of Durham, comments: “One attractive feature of this model is that it allows us to explain the physics of the process in an intuitive way, showing that the mathematics describing the reality and the laboratory are similar.” However, he cautions, there can be problems in translating laboratory observations to the real Earth.

“In this example, the tectonic loading increases until failure is reached, a process which takes hundreds of years in the real Earth. Fluid injection, on the other hand, would be almost instantaneous with respect to the tectonic loading time. In nature, a fluid injection would suddenly permeate a fault which is under a virtually constant state of load, instead of a gradually increasing load as in this model,” Nielsen explains.

Dramatic failure

“Therefore, on natural faults fluid injection may result in a much-increased risk of dramatic failure, unless the fault is caught in the early stages of the seismic loading cycle,” he concludes.

The paper fault model is part of a larger project, funded by the European Research Council, to determine if it is possible to reliably control earthquake instability to minimize loss of life and economic disruption. “In my group we focus on proving (or disproving) mathematically the possibility of controlling the seismic slip,” Stefanou explained. “This is like designing a cruise control system for faults, but it is much more difficult than cars!”

Alongside laboratory-based tests with surrogate models, the team also work with computer simulations. Stefanou concluded: “Our results up to now are very promising and show us how fluids have to be injected in order to prevent earthquakes, even in the case of high stress level ratios.”

The study is described in Geophysical Research Letters.

Copyright © 2026 by IOP Publishing Ltd and individual contributors