
Fungus turns wood piezoelectric, allowing it to power LEDs

Infecting wood with wood-decay fungus can boost its piezoelectric output by 55 times, researchers in Switzerland have discovered. The material scientists found that after 10 weeks of infection, blocks of decayed wood could power LEDs. They say that floors built from fungus-treated wood could generate renewable electricity from people’s footsteps.

Decades ago, scientists discovered that wood generates an electrical charge under mechanical stress. This piezoelectric effect is caused by the displacement of crystalline cellulose when it is deformed, whereby shear stress in one plane produces an electrical polarization perpendicular to it. But the piezoelectric effect is not very strong – around twenty times smaller than that of a quartz crystal – and wood does not deform easily.

Despite this, some researchers are keen to exploit this property by creating piezoelectric construction materials that could help make buildings more energy efficient. Globally, buildings are responsible for around 40% of our energy consumption and nearly 25% of our greenhouse gas emissions. Current attempts to minimize emissions involve reducing energy consumption or fitting buildings with solar panels so they generate their own electricity. While this can be effective, it is weather dependent and does not work everywhere. Piezoelectric construction materials could offer another source of clean energy.

Dissolving lignin

The piezoelectric performance of wood can be improved by changing its structure. Recently, Ingo Burgert at ETH Zurich and his colleagues found that placing wood in a mixture of hydrogen peroxide and acetic acid increases its piezoelectric output. This process dissolves the lignin in the wood, leaving behind a cellulose framework that is much more flexible and elastic. When squeezed, 1.5 cm cubes of this acid-treated wood generated an output of 0.69 V, which is 85 times higher than untreated wood. This performance was stable for 600 cycles, and 30 connected blocks powered light-emitting diodes (LEDs) and a simple liquid-crystal display.

Keen to create the same effect, but without the harsh chemicals, Burgert and colleagues turned to a natural process that alters the structure of wood: decay by fungi. In their latest work, described in Science Advances, they infected balsa wood with the white rot fungus Ganoderma applanatum for 4–12 weeks. After 10 weeks the wood had lost 45% of its weight and the researchers found that at this point it showed the best compressibility performance, while still returning to its original shape once the stress was released.

A single 1.5 cm cube of this decayed wood produced a maximum voltage of 0.87 V under 45 kPa of stress, while uninfected balsa wood generated 0.015 V. The treated wood maintained its performance for 500 cycles. Electrical output increased with mechanical stress, rising to 1.32 V at 100 kPa. Nine of the decayed-wood blocks connected in parallel were able to power an LED, when pressed strongly.
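As a rough sanity check, the figures above can be combined in a few lines of Python. The linear stress scaling is our own simplifying assumption, not the authors' model:

```python
# Back-of-envelope check of the enhancement figures quoted above.
# All voltages and stresses are taken from the article.

V_untreated = 0.015    # V, uninfected balsa at 45 kPa
V_decayed_45 = 0.87    # V, 10-week decayed balsa at 45 kPa
V_decayed_100 = 1.32   # V, decayed balsa at 100 kPa

enhancement = V_decayed_45 / V_untreated
print(f"Enhancement at 45 kPa: {enhancement:.0f}x")  # same order as the 55x boost quoted

# Crude sensitivity estimate, assuming output scales linearly with stress
sensitivity = (V_decayed_100 - V_decayed_45) / (100 - 45)  # V per kPa
print(f"Incremental sensitivity: {sensitivity * 1000:.1f} mV/kPa")
```
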

Cellulose remains intact

Infrared spectroscopy and X-ray diffraction analysis of decayed and untreated wood showed how the fungus altered the wood. “The selected wood decay fungi secrete enzymes that enable degradation of lignin and hemicelluloses in the wood, whereas cellulose remains intact,” Burgert told Physics World. “This type of wood decay is also known as selective delignification. This process changes the structure and chemistry of the wood cell wall enhancing the natural piezoelectric properties of wood.”

The researchers say that their results indicate that the material could be used to produce large-scale wooden floors, such as those in ballrooms, that could generate electricity from human activity.

“We are currently working at the demonstrator scale with delignified wood that can be used for sensors integrated into wooden floors,” Burgert says. “For instance, these systems could be used as security systems in wooden floors for detection of any kind of applied stress. In terms of power generation, it is on the level of lighting up LED lights, and therefore, at present the application as a sensor is more suitable. However, it is a first step and we are currently optimizing towards wood-based systems better suited for energy harvesting.”

Due to its lignin composition, delignification during fungal infection is much faster in balsa than in other woods such as spruce, pine and fir. “The next step is to use this concept for native wood species and incorporate the generated materials in future smart buildings,” Burgert adds.

Contrast-enhanced OCT visualizes vascular leakage in the retina


Vascular leakage is an important biomarker for assessing vision-threatening retinal diseases such as age-related macular degeneration and diabetic retinopathy. Fluorescence angiography, the imaging exam currently used to identify vascular leakage, lacks depth resolution, which can hamper the identification and precise localization of leaky blood vessels. Optical coherence tomography (OCT) is a newer clinical ophthalmic imaging technology that provides high-resolution images and rapid volumetric scanning. To date, however, OCT has been unable to visualize vascular leakage.

Researchers at the Medical University of Vienna are developing a new OCT method, exogenous contrast-enhanced leakage OCT (ExCEL-OCT), which measures the diffusion of tracer particles around leaky vasculature. Writing in Biomedical Optics Express, they describe the use of an OCT contrast agent to visualize the slow extravasation of tracer particles from leaky blood vessels in laboratory mice. In just a single scan, ExCEL-OCT provided high-resolution structural, angiographic and leakage information that was spatially and temporally co-registered, and separable.

The researchers used a custom-built OCT ophthalmoscope designed for rodent eye imaging. For the contrast agent, they selected Intralipid 20%, an emulsion of lipid particles that can dramatically improve OCT intensity and angiogram signals.


Principal investigators Bernhard Baumann and Conrad Merkle, of the Center for Medical Physics and Biomedical Engineering, and colleagues imaged the eyes of mice with leaky retinal vasculature and control mice. To track the leakage of tracer particles, they performed multiple angiogram scans (using a traditional 3D angiography protocol covering a 1 x 1 mm field-of-view centred on the optic disc) before and after injection of the OCT contrast agent, using the data to generate both angiogram and leakage maps.

The researchers employed post-processing to compensate for motion and flatten the retina. They also developed novel data processing methods to highlight the scattering signal from the extravasated Intralipid particles. To discriminate between the various signals, they used selective decorrelation gates.

“The key idea is that OCT signals in static tissue will decorrelate slowly because the tissue is not moving,” the researchers explain. “Signal within vessels will decorrelate rapidly due to the relatively high speed of the red blood cells and Intralipid particles passing through the voxel. Extravasated Intralipid particles, which are driven by slower diffusion processes rather than blood flow, will have a decorrelation rate that falls somewhere between the two.” As such, they expect a much stronger extravasation and diffusion signal to be observed around leaky vessels.

The researchers used long interscan times to highlight diffusing tracer particles. They created the ExCEL signal by subtracting angiogram signals of different lag times, to specifically highlight leakage of different diffusion rates and remove the intravascular signal. They also created depth profiles, leakage maps and fly-through videos showing leakage over time.
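The selective-decorrelation idea can be illustrated with a toy calculation (our own sketch, not the team's actual processing pipeline): three synthetic voxel time series decorrelate at different rates, and their lag-1 autocorrelation separates them:

```python
import numpy as np

# Toy illustration of the selective decorrelation gates described above.
# Static tissue decorrelates slowly, intravascular flow almost instantly,
# and diffusing tracer somewhere in between.

rng = np.random.default_rng(0)

def ar1_series(rho, n=2000):
    """AR(1) process whose successive samples correlate with coefficient rho."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = rho * x[i - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()
    return x

def decorrelation(x):
    """1 minus the lag-1 autocorrelation: ~0 for static tissue, ~1 for fast flow."""
    x = x - x.mean()
    return 1 - (x[:-1] @ x[1:]) / (x @ x)

static = ar1_series(0.99)   # static tissue
vessel = ar1_series(0.05)   # red blood cells / Intralipid in flow
tracer = ar1_series(0.70)   # extravasated particles diffusing slowly

for name, series in [("static", static), ("vessel", vessel), ("tracer", tracer)]:
    print(f"{name}: decorrelation = {decorrelation(series):.2f}")
```

Gating on this decorrelation value, rather than on raw intensity, is what lets the static, intravascular and leakage signals be separated within a single dataset.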

The contrast agent dramatically increased the visibility of neovascularizations (new blood vessels) growing into a retinal lesion. Vascular leakage could be tracked over time, with results demonstrating a clear increase in ExCEL signal, visible as a circular or spherical bloom of signal, following administration of the contrast. The researchers confirmed this finding by blind grading 83 leakage and control volumes.

In addition to showing leakage in 3D, the researchers also demonstrated that for the most part, the leakage signal was separable from the angiogram signal, which is not the case for traditional fluorescence methods. Colour coding and overlaying the angiogram and leakage data revealed which blood vessels the leakage surrounds.

The researchers note that the current system produces a high number of false-positive leakage signals, and that they could not exclusively distinguish diffusion from slow intravascular flow or bulk motion of static tissue. As OCT systems become faster and motion compensation software improves, higher temporal resolution for ExCEL measurements and shorter interscan times may change this.

The team describe this work as “a starting point for future in vivo 3D volumetric leakage studies”, noting that the new method can be easily implemented in conventional OCT systems across the world.

“We hope to continue to improve our methods to the point where they can be used clinically,” Merkle tells Physics World. “To do this, we need to continue to develop the methods and post-processing to improve signal quality and reduce false-positive signals, investigate alternate sources of contrast, and/or improve the contrast agent to increase sensitivity and reduce dose. We are most interested in the first two options, as nanoparticle fabrication is something that other research groups are already investigating with great results.”

“Beyond this work on vascular leakage, we are also investigating hardware development of polarization-sensitive and visible light OCT systems, longitudinal studies of small-animal disease models, and ex vivo imaging of human brain tissue,” Merkle adds. “My own projects are currently focused on improving clinical OCT through software rather than hardware. Our hope is that by working within the limitations of existing clinical systems, we can develop methods to improve diagnostics at a larger scale and lower cost compared to hardware-based solutions that require new OCT systems.”

Photonic-crystal ‘sunflower’ follows the light

An artificial “sunflower” that autonomously bends, folds and twists itself to optimize the amount of light it receives could be a key ingredient in intelligent solar cells of the future. The device is made from a biopolymer-based photonic crystal and its developers say that future versions might be able to track the Sun across the sky, just as real sunflowers do.

Photonic crystals are nanostructured materials with a refractive index that varies on a length scale similar to the wavelength of visible light. This periodic variation produces a so-called “photonic band gap” that affects how photons propagate through the material, allowing light at some frequencies to be absorbed (thereby heating the material) while other frequencies are reflected. The angle at which the light hits the crystal also affects which frequencies are absorbed.
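The angle dependence can be sketched with the Bragg–Snell relation commonly used for opal-like photonic crystals. The lattice spacing and effective index below are hypothetical values chosen for illustration, not parameters from this work:

```python
import math

# Illustrative Bragg-Snell relation for an opal-like photonic crystal:
# the reflected (band-gap) wavelength shifts as the angle of incidence changes.

def bragg_wavelength(d_nm, n_eff, theta_deg):
    """Reflected wavelength (nm) at incidence angle theta (degrees)."""
    s = math.sin(math.radians(theta_deg))
    return 2 * d_nm * math.sqrt(n_eff**2 - s**2)

d, n_eff = 200.0, 1.4   # lattice spacing (nm) and effective index: assumed values
for theta in (0, 30, 60):
    print(f"{theta:2d} deg -> {bragg_wavelength(d, n_eff, theta):.0f} nm")
```

The reflected wavelength blue-shifts at steeper angles, which is why the crystal's absorption, and hence its heating, depends on where the light comes from.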

Double-layer structure

The photonic material designed by Fiorenzo Omenetto and colleagues at Tufts University and Northwestern University in the US consists of two layers. The top layer is made of an opal-like film of silk fibroin doped with light-absorbing gold nanoparticles (AuNPs). Underneath this is a silicon-based polymer, polydimethylsiloxane (PDMS).

The team, which also includes researchers at the University of Pavia in Italy, chose silk fibroin for its flexibility and promising optical properties as well as its negative coefficient of thermal expansion (CTE). The latter means that silk fibroin contracts when heated and expands when cooled – unlike most materials, which have a positive CTE and do the opposite. PDMS, importantly, has a high positive CTE and expands rapidly when heated. Hence, when the bilayer photonic crystal is exposed to laser light, it bends as the PDMS expands and the silk layer contracts.

Bending with the light

The team made their photonic crystal more reflective by adjusting the size of the unit cells within the crystal. Omenetto explains that they did this by constructing patterns in the silk layer using a lithography technique that involves either exposure to UV light or applying stencils to the material and then exposing them to water vapour. Thanks to these nanostructured patterns, the silk layer can either enhance or weaken the interaction between the gold nanoparticles and the laser light, depending on the angle at which the laser beam strikes it.

Together, these design features allow the material to bend, fold and twist in ways that depend on the geometry of the patterns and the wavelength of the incident laser beam. They also, crucially, enable the material to track the path and angle of a light source. The researchers demonstrated this function by fabricating a photonic “sunflower” with solar cells integrated into the silk fibroin-PDMS bilayer. The resulting device curls towards the light source as the source moves, similar to the way that a real sunflower tracks the Sun as different sides of its stem elongate at different times of the day.

Light-tracking devices

Omenetto explains that the team’s device keeps the angle between the solar cells and the laser beam nearly constant, maximizing the cells’ light-to-energy conversion efficiency as the laser moves. Such wireless, light-responsive, heliotropic (Sun-tracking) systems could be used to make improved solar cells, he says.

The researchers also made a self-folding box and a “butterfly” with wings that opened and closed in response to light. Spurred on by their results, which they detail in Nature Communications, they plan to adapt their optomechanical actuator so that it works in different parts of the electromagnetic spectrum. “We also hope to make sunlight-tracking devices that can be used outside of the laboratory,” Omenetto tells Physics World.

Graphene gives neural interfaces a boost, the amazing physics of hearing and vision

This episode of the Physics World Weekly podcast looks at how new technologies can improve our health and how we perceive our surroundings.

First up is Kostas Kostarelos of the UK’s University of Manchester, who talks about the exciting role that graphene can play in the development of medical devices that connect to the brain with minimal invasiveness. He also chats about his involvement with the Spanish company Inbrain Neuroelectronics, which is developing graphene-based technologies for treating epilepsy, Parkinson’s disease, and other brain-related disorders.

Then Ben de Mayo of the University of West Georgia in the US takes over with a lively discussion about the science of sight and sound. De Mayo has just published the second edition of his book The Everyday Physics of Hearing and Vision and chats about how technologies such as cochlear and retinal implants work. He also talks about the amazing optical and acoustic capabilities of the mantis shrimp and the future of sensory augmentation.

Record-breaking gamma ray is smoking gun for Milky Way cosmic rays

The most energetic gamma ray ever seen could be the strongest evidence yet that high-energy cosmic rays are produced within our Milky Way galaxy, where they spend millions of years accumulating, forming a “cosmic-ray pool”.

The origin of cosmic rays is one of the most enduring mysteries in astrophysics. Cosmic rays are charged particles or atomic nuclei moving at relativistic speeds. While the Sun produces low-energy cosmic rays, the most powerful originate from beyond our solar system, but their source has been a matter of debate.

Cosmic rays are easily deflected by galactic magnetic fields, making it difficult to trace them back to their source. However, when cosmic rays collide with other particles in interstellar space, they produce gamma rays, which are not deflected.

Now, the Tibet ASγ Collaboration, which has hundreds of detectors located on the Tibetan Plateau, has observed 23 extremely high-energy gamma rays, with energies ranging from 400 to 955.7 TeV, the latter being the most energetic gamma ray ever detected. Gamma rays formed in this manner are about an order of magnitude less energetic than their cosmic-ray parents, which means that those cosmic rays reached energies far in excess of one peta-electronvolt (10¹⁵ eV). For this reason, the sources of these cosmic rays are referred to as PeVatrons.
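The order-of-magnitude relation quoted above translates directly into the parent-particle energies, as this short sketch shows (the factor of ten is the rough rule of thumb from the text, not a precise conversion):

```python
# Rough parent-energy estimate: a gamma ray produced in a cosmic-ray
# collision carries roughly a tenth of the parent particle's energy.

E_gamma_max_TeV = 955.7                # most energetic gamma ray detected
E_parent_TeV = 10 * E_gamma_max_TeV    # order-of-magnitude parent estimate

print(f"Parent cosmic ray: ~{E_parent_TeV / 1000:.0f} PeV")  # well above 1 PeV
```
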

This discovery “proves that gamma rays with energies up to a few hundred TeV really exist”, says Jing Huang of the Chinese Academy of Sciences in Beijing, who is a member of the collaboration. It also strongly implies that PeVatrons are located inside our galaxy.

Tibet air shower array

A limit to how far we can see

Astrophysicist David Hanna of McGill University, who was not involved in the work, agrees that “The fact that the arrival directions of most of the gamma rays seem to line up with the Milky Way argues for their production there.”

Hanna does not rule out the existence of extragalactic sources, however, pointing out that there will be an observational bias. The higher a gamma ray’s energy, the more likely it is to collide with lower-energy photons in space, so space becomes opaque to the highest energies over large distances.

Consequently, “we can’t see as far at high energies as at low energies,” explains Hanna. Many of the higher energy gamma rays from extragalactic PeVatrons may just not be reaching us, although Hanna does note that a few of the gamma rays detected by the Tibet ASγ Collaboration don’t align with the Milky Way. These could be spurious background events, he says, or be truly extragalactic.

Footprints of dinosaurs

Adding to the intrigue is the fact that the distribution of the gamma rays across the Milky Way seems random. Supernova remnants, intense star-forming regions and active black holes have all been mooted as possible PeVatrons, but none are found in the locations the gamma rays are coming from.

Instead, the team of scientists behind the Tibet ASγ Collaboration think that what they are seeing is evidence for a cosmic-ray pool in our galaxy. The idea is that cosmic rays become contained in our galaxy by the Milky Way’s powerful magnetic fields, and they circle the galaxy for millions of years before coincidentally colliding with an atom or molecule in interstellar space, releasing a gamma ray. These gamma rays could therefore be coming from the locations where a collision has taken place. And by the time they undergo this collision, the original PeVatrons that released the cosmic rays could be long dead.

“Metaphorically speaking, we found footprints of dinosaurs in the Milky Way – a lot of extinct PeVatrons in the galaxy,” says Masato Takita of the Institute for Cosmic Ray Research at the University of Tokyo.

Not all PeVatrons are extinct, however. For example, earlier this year the Tibet ASγ Collaboration detected gamma rays with energies up to 100 TeV originating from the supernova remnant G106.3+2.7, which is just 2600 light-years away.

The next step is to extend the survey of gamma rays into the southern hemisphere sky, including the direction of the galactic centre, which cannot be seen well from Tibet. The Pierre Auger Observatory in Argentina can detect gamma rays with energies of hundreds of TeV and has hunted for gamma rays even more powerful, while the Large High Altitude Air Shower Observatory in China, which has just begun observing, may be able to detect gamma rays with energies above 1 PeV.

The findings are published in Physical Review Letters.

Connecting the dots to artificially restore vision

A team of researchers from the Ecole Polytechnique Federale de Lausanne has developed a retinal implant that transposes images acquired by camera-equipped smart glasses into a simplified, black and white image made from 10,500 pixels. Although it has not yet been approved for human trials, the team has tested the implant in both a mouse model and a dedicated virtual reality programme, reporting the findings in Communications Materials.

For many patients suffering from retinitis pigmentosa – an inherited disease where progressive loss of retinal photoreceptors eventually leads to blindness – current retinal implants do not provide clear benefits. In fact, three years after surgery, most patients have stopped using them.

Two limiting parameters are often cited as the reason for this abandonment: a narrow visual field (usually limited to 20°) and coarse resolution (fewer than 100 pixels in the most commonly used implant). These require the patient to constantly scan their environment to build up a mental map of their surroundings, which is impractical and cognitively exhausting.

One electrode: one pixel

To tackle these limitations, Diego Ghezzi and his team developed POLYRETINA, a wide-field high-density epiretinal prosthesis that can be implanted at the back of the retina, close to the optic nerve. The implant contains 10,498 photovoltaic pixels (80-µm diameter, 120-µm pitch) distributed in a tiled fashion over a 13 mm-diameter active area, and provides a 43° vision angle.
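The quoted geometry is internally consistent, as a quick check shows. The hexagonal-packing assumption below is ours; the article states only the pixel count, pitch, active-area diameter and viewing angle:

```python
import math

# Sanity check of the POLYRETINA geometry quoted above, assuming the
# pixels are hexagonally close-packed at the stated 120 um pitch.

pitch_mm = 0.120
active_diameter_mm = 13.0
field_of_view_deg = 43.0

area = math.pi * (active_diameter_mm / 2) ** 2    # active area, mm^2
cell = (math.sqrt(3) / 2) * pitch_mm ** 2         # area per hexagonal cell, mm^2
est_pixels = area / cell
print(f"Estimated pixel count: {est_pixels:.0f}")  # close to the quoted 10,498

pixels_across = active_diameter_mm / pitch_mm
print(f"Angular pitch: ~{field_of_view_deg / pixels_across:.2f} deg/pixel")
```
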

A camera embedded in the smart glasses captures images in the wearer’s field-of-vision and sends the data to a microcomputer placed in one of the glasses’ end-pieces. The data are then turned into light signals that are transmitted to the 10,498 electrodes of the retinal implants, creating a star-spangled-sky-like version of the image.

The team conducted a battery of tests to ensure that the implant was fit for purpose. Combining conjugated polymers and less rigid substrates, for example, allowed for a wider coverage of the retinal surface. However, the main question was how many electrodes the prosthesis should contain: a small number would not significantly improve resolution compared with existing implants; a large number increases risks of crosstalk with neighbouring pixels.

By firing combinations of pixels of increasing pattern complexity, the researchers confirmed that even when using 10,498 electrodes, the voltage generated by each pixel is sharply discriminated from its neighbouring pixels and does not show a voltage summation effect. This was observed even in the most extreme cases, where a central pixel is off while the surrounding eighteen pixels are on.

Retinal prosthesis

Virtual reality while waiting for human trials

The researchers performed further experiments ex vivo on a mouse model of retinitis pigmentosa and showed that each electrode could reliably produce a dot of light in the retina.

“We wanted to make sure that two electrodes don’t stimulate the same part of the retina. So we carried out electrophysiological tests that involved recording the activity of retinal ganglion cells [a type of neuron at the inner surface of the retina]. And the results confirmed that each electrode does indeed activate a different part of the retina,” explains Ghezzi.

Currently, the team is awaiting approval to test their prosthesis in humans. Meanwhile, to continue testing the implant, they have developed a virtual reality programme that recreates what the patient would see using their prosthetic. The simulations confirmed the ability of the current setup to generate perceptible images and the implant’s readiness for clinical trials.

Quantum dot array could make ultralow-energy switches

Interactions between matter and light in microcavities made of mirrors are fundamentally important for many modern technologies, including lasers. Researchers at the University of Michigan, Ann Arbor, US, have now gained tighter control of these interactions by exploiting a nonlinear effect that occurs in a new kind of hybrid semiconductor made from bilayers of two-dimensional materials. These semiconducting sheets form an egg-carton-like array in which the “pockets” are quantum dots that can be controlled using light, and they could be used to make ultralow-energy switches.

Led by Hui Deng, the researchers made their hybrid semiconductor from flakes of tungsten disulphide (WS2) and molybdenum diselenide (MoSe2) just a few atoms thick. In their bulk form, these transition-metal dichalcogenides (TMDCs) act as indirect band-gap semiconductors. When scaled down to a monolayer thickness, however, they behave as direct band-gap semiconductors, capable of efficiently absorbing and emitting light.

When laid on top of one another, the electronic structures of TMDCs can form a larger electron lattice (known as a moiré lattice) thanks to the slight mismatch of the materials’ lattice constants. The period of this lattice can be tuned by twisting the monolayers with respect to each other at different angles. In the WS2 and MoSe2 bilayer studied in this work, this angle is about 56.5° and the moiré lattice produced contains “pockets” measuring around 10 atoms across. These pockets, explains study lead author Long Zhang, are the quantum dots – tiny pieces of semiconducting materials that can isolate individual quantum particles such as electrons.
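The pocket size can be estimated from the standard small-angle moiré relation. The lattice constants below are approximate literature values and the formula is a common approximation, both our assumptions; the article quotes only the ~56.5° twist and pockets roughly 10 atoms across:

```python
import math

# Order-of-magnitude moire period for a WS2/MoSe2 bilayer twisted
# close to anti-alignment (60 degrees).

a_WS2, a_MoSe2 = 0.315, 0.329           # lattice constants, nm (approximate)
delta = (a_MoSe2 - a_WS2) / a_WS2       # ~4% lattice mismatch
dtheta = math.radians(60.0 - 56.5)      # deviation from perfect anti-alignment

period_nm = a_WS2 / math.sqrt(delta**2 + dtheta**2)
print(f"Moire period: ~{period_nm:.1f} nm")  # a few nm, i.e. of order 10 atoms across
```
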

Quantum dots confine excitons

In Deng and colleagues’ experiments, the “particles” thus isolated are excitons: particle-like excitations (quasiparticles) created when an electron in a semiconductor’s valence band is excited by a photon to the conduction band. A positively charged “hole” is then left behind in the valence band in the place of the electron. Because the electron remains strongly attracted to the hole, the two “pair up” and behave like a single entity – the exciton.

In conventional, linear devices, excitons can travel freely throughout a device, so they hardly interact with each other. If the exciton is confined to a quantum dot, however, as in this new work, it is impossible to add a second identical exciton to the same quantum dot, explains Deng. Adding one requires a higher-energy photon. “This is known as quantum blockade and causes the nonlinearity we have seen in our experiments,” she adds.

Since quantum dots are only a few atoms across, they are too small for practical applications. Deng and colleagues therefore created an array of quantum dots that they describe as contributing to the nonlinearity “all at once”.

Mirrored microcavity

To control the arrays of dots as a group using light inside the 2D semiconductors, the researchers built a resonator by embedding the 2D hybrid semiconductor between two mirrors that form a microcavity. When they excited the structure with red laser light, they found that it resonated within the cavity and formed another quasiparticle, called a polariton, which is a hybrid of an exciton and light. This observation, they explain, confirms that all the quantum dots are interacting with light in concert.

When the researchers then introduced a few excitons into the material lattice, they observed a measurable change of the polariton’s energy. This implies that the system is showing nonlinear behaviour due to quantum blockade, Deng says.

“Engineers can use that nonlinearity to discern the energy deposited into the system, potentially down to that of a single photon, which makes the system promising as an ultralow-energy switch,” she explains.

The researchers say their work, which they report in Nature, might be extended to achieve polariton blockade similar to the exciton blockade seen in their experiments. “We also plan to increase the nonlinearity we have observed by varying the moiré lattice and reducing the size of the cavity, and look for ways to create quantum states of light from the system,” Deng tells Physics World.

Certifiable quantum random number generation picks up the pace

Researchers at Japan’s Nippon Telegraph and Telephone Corporation (NTT) have built a quantum random number generator (QRNG) that delivers random bits periodically with high speed and is robust against noise that would otherwise compromise the bits’ security. Where previous QRNGs needed to run for a long time before they could generate random bits at high average rates, Yanbao Zhang and colleagues devised a way to do away with this so-called “latency” and fight against imperfections in their QRNG device. These innovations made it possible to certify random bits in less time. Their QRNG could find application in computation and communication networks, where low-latency random number generation is necessary for high-speed encryption.

Randomness is key to many applications, including numerical simulations, statistical sampling, and cryptography. Simulations and sampling require high-speed, high-rate random number generation, while cryptography prizes secure (certifiable) random bits.

Since quantum measurement is inherently probabilistic, quantum mechanics naturally lends itself to random number generation. The distinguishing feature of QRNGs lies in the fact that output random bits are certifiable based only on measurement observations with verifiable physical conditions. “One can certify that the random bits generated by a QRNG are pretty close to the ideal random bits that are completely unknown by an external adversary who may hold additional information about the QRNG device,” Zhang explains.
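A toy model makes the connection between quantum measurement and certifiable bits concrete. This is our own sketch, not NTT's device: a photon at a 50:50 beamsplitter clicks one of two single-photon detectors, and the min-entropy of the click statistics bounds how many near-ideal bits each detection yields:

```python
import math
import random

# Toy which-detector random bit source plus a min-entropy estimate.
# A real certification would bound an adversary's side information;
# here we only quantify the bias of the raw bits.

random.seed(1)
p_detector_1 = 0.5   # ideal balanced beamsplitter
bits = [int(random.random() < p_detector_1) for _ in range(100_000)]

p1 = sum(bits) / len(bits)
p_max = max(p1, 1 - p1)
min_entropy = -math.log2(p_max)   # extractable bits per raw bit
print(f"Estimated min-entropy: {min_entropy:.3f} bits per detection")
```

A biased or noisy detector pushes `p_max` above 0.5 and the min-entropy below 1, which is why a randomness extractor (and, in the NTT work, a certification procedure robust to device imperfections) is needed before the bits can be used.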

Image showing the setup of the QRNG, in which a pulse from a quantum light source passes through a Mach-Zehnder interferometer and is detected using a pair of single-photon detectors.

Low latency despite adversarial attack

To reduce the latency of their device, the NTT team developed an efficient method for certifying quantum randomness against both classical and quantum adversaries. A quantum adversary is defined as someone who has access to quantum resources, including quantum memories that store an arbitrary state entangled with the state prepared in the experiment. A classical adversary, in contrast, can only store a classical description of measurement results. Zhang and colleagues demonstrated that their device could certify a block of 8,192 random bits every 0.1 seconds with high security against all quantum adversaries, or a block of 2 × 8,192 random bits against all classical adversaries.

Besides reducing latency, the new method has a further advantage: it requires neither the source of the random numbers nor the measurement apparatus to be characterized in full. Therefore, practical security with realistic devices is guaranteed. In contrast, previous methods for certifying randomness against quantum adversaries addressed imperfections in either the source or measurement, but not both.

Now that they have realized a high-speed, high-security QRNG, Zhang and colleagues want to reduce the size of their QRNG so that it can be used in mobile phone technology. They also suggest that the QRNG they developed could be used to build high-speed randomness servers (beacons) that periodically produce fixed blocks of certifiable and public random bits, which would be a boon to communication networks.

Scientists refuse to be cowed by the livestock methane problem

Cows and other livestock emit a surprisingly large quantity of methane, a powerful greenhouse gas. Globally, the livestock sector accounts for the equivalent of seven gigatonnes of carbon dioxide emissions every year. To put that in perspective, it is 15% of all emissions linked with human activities, comparable to the amount emitted by cars.

This video looks at how scientists are involved in both quantifying the problem and providing solutions. To find out more, read the article ‘Battling bovine belching: measuring methane emissions from cows’ by science writer Michael Allen, originally published in the April 2021 issue of Physics World.

The muon’s theory-defying magnetism is confirmed by new experiment

A long-standing discrepancy between the predicted and measured values of the muon’s magnetic moment has been confirmed by new measurements from an experiment at Fermilab in the US. The 200-strong Muon g–2 collaboration has published a result consistent with data collected two decades ago by an experiment bearing the same name at the Brookhaven National Laboratory, also in the US. This pushes the disparity between the experimental value and that predicted by the Standard Model of particle physics up to 4.2σ, suggesting that physicists could be close to discovering new fundamental forces or particles.

The muon, like its lighter and longer-lived cousin the electron, has a magnetic moment due to its intrinsic angular momentum or spin. According to basic quantum theory, a quantity known as the “g-factor” that links the magnetic moment with the spin should be equal to 2. But corrections added to more advanced theory owing to the effects of short-lived virtual particles increase g by about 0.1%. It is this small difference – expressed as the “anomalous g-factor”, a = (g – 2)/2 – that is of interest because it is sensitive to virtual particles both known and unknown.
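The arithmetic behind the anomalous g-factor is simple enough to sketch directly. The following snippet (a minimal illustration, using the combined experimental value of a quoted later in the article) inverts a = (g – 2)/2 to recover g and shows the roughly 0.1% excess over the Dirac value of 2:

```python
# Relation between the muon g-factor and the anomalous moment a = (g - 2)/2.
# a_exp is the combined Brookhaven + Fermilab value quoted later in the
# article; Dirac theory alone would give a = 0, i.e. g = 2 exactly.
a_exp = 0.00116592061

g = 2 * (1 + a_exp)                 # invert a = (g - 2)/2
excess_percent = (g / 2 - 1) * 100  # fractional excess over g = 2, in %

print(f"g = {g:.11f}")
print(f"quantum corrections shift g by ~{excess_percent:.3f}%")
```

The ~0.117% shift is the "about 0.1%" correction mentioned above; it is precisely this small number that the experiments measure to sub-ppm precision.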

In 1997–2001, the Brookhaven collaboration measured this quantity using a 15 m-diameter storage ring fitted with superconducting magnets that provide a vertical 1.45 T magnetic field. The researchers injected muons into the ring with their spins polarized so that initially the spin axes aligned with the particles’ forward direction. Detectors positioned around the ring then measured the energy and direction of the positrons generated by the muons’ decay.

Spin precession

Were there no anomalous moment, the magnetic field would cause the muon spins to precess such that their axes remain continuously aligned along the muons’ direction of travel. But the anomaly causes the rate of precession to slightly outstrip the muons’ orbital motion so that for every 29 trips around the ring the spin axes undergo about 30 complete rotations. Because the positrons have more energy on average when the spin aligns in a forward direction, the intensity of the most energetic positrons registered by the detectors varies cyclically – dropping to a minimum after about 14.5 revolutions and then rising back up to a maximum. It is this frequency – the number of such cycles per second – that reveals the precise value of a.

When the Brookhaven collaboration announced its final set of results in 2006, it reported a value of a = 0.00116592080 with an error of 0.54 parts per million (ppm) – putting it at odds with theory by 2.2–2.7σ. That discrepancy then rose as theorists refined their Standard Model predictions, so that it currently stands at about 3.7σ. The latest measurements extend the disparity still further.


The recent measurements were made using the same storage ring as in the earlier work – the 700 tonne apparatus was transported in 2013 over 5000 km (via land, sea and river) from Brookhaven near New York City to Fermilab on the outskirts of Chicago. But while the core of the device remains unchanged, the uniformity of the magnetic field that it produces has been increased by a factor of 2.5 and the muon beam that feeds it is purer and more intense.

Avoiding human bias

The international collaboration at Fermilab has so far analyzed the results from one experimental run, carried out in 2018. It has gone to great lengths to avoid any sources of human bias, even running its experimental clock deliberately out of sync to mask the muons’ true precession rate until the group’s analysis was complete.

Describing its results in Physical Review Letters, alongside more technical details in three other journals, the collaboration reports a new value for a of 0.00116592040 and an uncertainty of 0.46 ppm. On its own, this is 3.3σ above the current value from the Standard Model and slightly lower than the Brookhaven result, but consistent with it. Together, the results from the two labs yield a weighted average of 0.00116592061, an uncertainty of 0.35 ppm and a deviation from theory – thanks to the smaller error bars – of 4.2σ. That is still a little short of the 5σ that physicists normally consider the threshold for discovery.
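The quoted combination can be roughly reproduced with a standard inverse-variance average. The sketch below ignores correlated systematics between the two experiments, which the collaboration's official combination does account for, so it lands close to – but not exactly on – the published central value of 0.00116592061, while recovering the 0.35 ppm combined uncertainty:

```python
import math

# Naive inverse-variance combination of the two results quoted above.
# Central values and fractional (ppm) uncertainties from the article.
a_bnl, err_bnl = 0.00116592080, 0.54e-6 * 0.00116592080    # Brookhaven, 0.54 ppm
a_fnal, err_fnal = 0.00116592040, 0.46e-6 * 0.00116592040  # Fermilab, 0.46 ppm

# Weight each measurement by the inverse of its variance.
w_bnl, w_fnal = 1 / err_bnl**2, 1 / err_fnal**2
a_combined = (w_bnl * a_bnl + w_fnal * a_fnal) / (w_bnl + w_fnal)
err_combined = math.sqrt(1 / (w_bnl + w_fnal))

print(f"a ≈ {a_combined:.11f}")
print(f"uncertainty ≈ {err_combined / a_combined * 1e6:.2f} ppm")
```

The shrinking of the error bar when the two independent measurements are pooled is exactly why the combined deviation from theory (4.2σ) exceeds that of either result alone.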

Tamaki Yoshioka of Kyushu University in Japan praises Fermilab Muon g–2 for its “really exciting result”, which, he says, indicates the possibility of physics beyond the Standard Model. But he argues that it is still too early to completely rule out systematic errors as the cause of the disparity, given that the experiments at both labs have used the same muon storage ring. This, he maintains, raises the importance of a rival g–2 experiment under construction at the Japan Proton Accelerator Research Complex in Tokai. Expected to come online in 2025, this experiment will have quite different sources of systematic error.

Alternative theory

Indeed, if a group of theorists going by the name of the Budapest-Marseille-Wuppertal Collaboration is correct, there may be no disparity between experiment and theory at all. In a new study in Nature, it shows how lattice-QCD simulations can boost the contribution of known virtual hadrons so that the predicted value of the muon’s anomalous moment gets much closer to the experimental ones. Collaboration member Zoltan Fodor of Pennsylvania State University in the US says that the disparity between the group’s calculation and the newly combined experimental result stands at just 1.6σ.

The Fermilab collaboration continues to collect data and plans to release results from at least four more runs. Those, it says, will benefit from a more stable temperature in the experimental hall and a better-centred beam. “These changes, amongst others,” it writes, “will lead to higher precision in future publications.”

Copyright © 2026 by IOP Publishing Ltd and individual contributors