Gravitational effects could shed more light on the Hubble tension

There are today two main ways to measure the Hubble constant, the parameter that describes the rate at which the universe is expanding. However, these two techniques produce conflicting results. This discrepancy is called the Hubble tension, and it suggests that we may be missing something fundamental about how the universe works. Now, two independent groups of astronomers, one in the US and the other in Germany, are developing new methods to measure the Hubble constant: one uses gravitational waves, the other gravitationally lensed supernovae. Their work could help resolve the Hubble tension.

We know that the universe has been expanding ever since the Big Bang nearly 14 billion years ago – in part, thanks to observations made in the 1920s by the American astronomer Edwin Hubble. By measuring the redshift of various galaxies, he discovered that galaxies further away from Earth are moving away faster than galaxies that are closer to us. The linear relationship between this speed and the galaxies’ distances is defined by the Hubble constant, H0.
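
Written as an equation, this is the Hubble–Lemaître law, relating a galaxy’s recession velocity v to its distance d:

```latex
% Hubble–Lemaître law: recession velocity grows linearly with distance
v = H_0 \, d
% with v in km/s and d in megaparsecs (Mpc),
% H_0 carries the units of km/s/Mpc quoted below
```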

While there are many techniques for measuring H0, the problem is that different techniques yield different values. One main approach uses data from the European Space Agency’s Planck space telescope, which measured the cosmic microwave background (CMB) “left over” from the Big Bang. This produces a value of H0 of about 67 km/s/Mpc, where 1 Mpc is about 3.26 million light-years. The other main approach is the “cosmic distance ladder” measurement, such as that made by the SH0ES collaboration involving observations of type Ia supernovae, which says H0 is about 73 km/s/Mpc.

Much brighter than typical supernovae

Now, astronomers at the Technical University of Munich, the Ludwig Maximilians University and the Max Planck Institutes for Astrophysics and Extraterrestrial Physics have observed an extremely rare type of supernova – or stellar explosion – that was gravitationally lensed, which is itself a very rare phenomenon. The supernova, which is called SN 2025wny (or more affectionately “SN Winny”), is superluminous and therefore much brighter than most gravitationally lensed supernovae discovered to date. This means that it can be studied using ground-based telescopes. Indeed, the researchers, led by Stefan Taubenberger, observed it with the Nordic Optical Telescope and the University of Hawaii 88-inch Telescope.

“It was an extraordinary coincidence that the first well-resolved lensed supernova found from the ground turned out to be a superluminous supernova,” says Taubenberger. “Its initial spectrum did not match the types of supernova we expected (that is, Type Ia or Type IIn), so determining its redshift was also difficult without a clear classification. We eventually measured the redshift to be two, so the observed optical light had actually been emitted as energetic UV radiation. The extraordinary UV brightness then allowed us to identify the object as a superluminous supernova.”

The fact that the supernova can be clearly observed from here on Earth makes it useful for a technique called time-delay cosmography. This method, which dates from 1964, exploits the fact that massive galaxies can act as lenses, deflecting the light from objects behind them so that, from our perspective, these objects appear distorted. “This is called gravitational lensing and we actually see multiple copies of the objects,” Taubenberger explains. “The light from each of these will have taken a slightly different pathway to reach us, so we see them at different times. In the case of SN 2025wny, we observed five copies of the supernova whose light had been deflected by two galaxies in the foreground.”

If we measure the difference in the arrival times of these images and combine these data with estimates of the mass distribution of the deflecting lens galaxies, we can calculate the so-called time-delay distance, he explains. “From the time-delay distance and the redshift, we can then infer H0. Unlike the cosmic distance ladder, which involves many calibration steps and can accumulate errors with each step, this is a one-step technique with fewer and completely different sources of systematic uncertainties.”
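
In the standard notation of time-delay cosmography (our summary, not spelled out in the article), the measured delays and the lens mass model yield a distance that scales inversely with the Hubble constant:

```latex
% Time-delay distance for a lens at redshift z_l and source at z_s,
% built from angular-diameter distances to the lens (D_l), to the
% source (D_s) and between the two (D_ls):
D_{\Delta t} \equiv (1 + z_l)\,\frac{D_l\, D_s}{D_{ls}} \;\propto\; \frac{1}{H_0}
% Measured arrival-time differences between the images, together with
% the lens mass model, fix D_dt; combined with the redshifts this
% pins down H_0 in a single step.
```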

Making the observations was not without a number of challenges, he remembers. “Initially, we had secured observing time at southern hemisphere telescopes (in particular, the ESO [European Southern Observatory] in Chile). However, the object we discovered was in the northern sky, making this secured time unusable. This meant we had to quickly find alternative observatories and write new proposals for northern hemisphere follow-up observations.”

Using undetectable black hole collisions

Meanwhile, a team of astrophysicists at The Grainger College of Engineering at the University of Illinois Urbana-Champaign and the University of Chicago has developed a way to determine the Hubble constant using gravitational waves and in particular the gravitational-wave background. Gravitational waves are generated when compact astrophysical objects, such as black holes, collide. These collisions, which are extremely energetic, produce tiny ripples in the fabric of space–time that travel at the speed of light, eventually reaching us here on Earth where they are detected by the LIGO–Virgo–KAGRA (LVK) Collaboration.

Individual black hole collisions have been observed by the LVK, which allows us to determine the rates of those collisions happening across the universe, explains study leader Bryce Cousins, who is at Illinois. “Based on those rates, we expect there to be a lot more events that we can’t observe. This is called the gravitational-wave background.”

Their approach uses a unique, previously unexplored relationship between the gravitational-wave background and H0. This relationship is not found in other astrophysical phenomena, meaning that the method is complementary to existing electromagnetic and gravitational-wave measurements of H0.

An upper limit on the background can provide a lower limit on the Hubble constant

The strength of this gravitational-wave background scales directly with the density of gravitational waves in the universe, he says. “For example, if the universe were expanding more slowly, then it would have a smaller total physical volume and a correspondingly higher density of gravitational waves, leading to a stronger background. Thus, an upper limit on the background can provide a lower limit on the Hubble constant.”
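
To see why the limits work this way, note that the background strength is conventionally quoted as an energy density normalized by the critical density of the universe, which itself depends on H0. This is a standard textbook relation rather than anything specific to the new paper:

```latex
% Dimensionless gravitational-wave background spectrum:
\Omega_{\rm GW}(f) = \frac{1}{\rho_c}\,\frac{\mathrm{d}\rho_{\rm GW}}{\mathrm{d}\ln f},
\qquad \rho_c = \frac{3 c^2 H_0^2}{8\pi G}
% For a fixed gravitational-wave energy density, Omega_GW scales as
% 1/H_0^2, so an upper limit on Omega_GW implies a lower limit on H_0.
```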

The researchers demonstrated their hypothesis by analysing gravitational-wave data from the LVK Collaboration’s third observing run. They have dubbed their method the “stochastic siren” since the gravitational waves (the “sirens”) composing the background arise randomly.

The LVK network is not yet sensitive enough to detect the gravitational-wave background, but researchers expect that it will be within the next six years or so. However, when Cousins and colleagues’ new work is combined with existing “spectral siren” measurements – which infer the Hubble constant from the redshifting of gravitational-wave signals – the result is a more accurate value of H0, even without a detection of the gravitational-wave background. The technique should therefore only improve as gravitational-wave detectors become more sensitive.

Cousins says he is “hopeful” that the findings of gravitational-wave cosmology will be able to shed more light on the Hubble tension as gravitational-wave data collection continues.

The researchers are now extending their method to consider other dark energy models, in light of ongoing findings that the standard “cosmological constant” interpretation of dark energy may be incorrect. Cousins is also applying the existing analysis to the latest gravitational-wave dataset and working with other collaborators to modify the stochastic siren procedure so that it can be applied to the next generation of gravitational-wave detectors.

Two different but complementary techniques

Taubenberger says that Cousins and colleagues’ technique is trying to measure the Hubble constant in a completely different way to his group’s – and also without relying on the cosmic distance ladder. “Since some gravitational waves have no optical counterpart, you cannot take an optical spectrum of them and measure their redshift, so methods like theirs allow us to measure distances in a statistical sense by analysing multiple objects and glean information about the Hubble constant in this way.

“Every independent approach to measure the Hubble constant is welcome, of course.”

Cousins, for his part, says that Taubenberger and colleagues’ work effectively supports an existing method with new data, while his group’s work involves creating a new method that can use existing data. “Taubenberger and his team exclusively use electromagnetic data, which differs from our gravitational wave method, but our approaches are ultimately complementary since they are independent takes on the same underlying question.

“It is interesting and important work since they have found a unique candidate for time-delay cosmography. I am excited to find out what new Hubble constant constraints will come from using this new lensed supernova.”

Biomedical optics play crucial roles across medicine

This episode of the Physics World Weekly podcast features Brian Pogue, who is professor of biomedical engineering at Dartmouth College in the US. He is also the co-founder of several start-up companies that are developing optics-based systems for medicine.

In conversation with Physics World’s Tami Freeman, Pogue explains that optical technologies underlie many of today’s routine medical procedures. The field of optics is also converging with the world of medical physics, and Pogue talks about exciting new techniques for guidance, dosimetry and in vivo verification of radiation therapy cancer treatments.

This podcast is supported by One Physics, your trusted, local partner in medical physics and radiation safety.

NASA launches crewed Artemis II mission to the Moon

NASA has successfully launched four astronauts on a 10-day mission to the Moon. The crew – Reid Wiseman, Victor Glover, Christina Koch and Jeremy Hansen – were aboard the Orion spacecraft that was launched yesterday by a Space Launch System rocket from NASA’s Kennedy Space Center in Florida.

The mission is the first crewed lunar flyby in more than 50 years, but it also represents a number of significant firsts, with Koch, Glover and Hansen set to be the first woman, first Black person and first Canadian, respectively, to travel to the Moon.

Following launch, the Orion capsule was put into Earth orbit and, five hours into the flight, deployed four CubeSats – from Argentina’s Comisión Nacional de Actividades Espaciales; the German Aerospace Center; the Korea AeroSpace Administration; and the Saudi Space Agency – that will conduct scientific investigations and technology demonstrations.

The craft is now set to carry out a six-minute rocket firing that will send it towards the Moon.

During a lunar flyby on 6 April, the astronauts will take photographs and provide observations of the Moon’s surface, becoming the first people to see some areas of the far side.

Some four days later, the craft will then return to Earth and splash down in the Pacific Ocean.

This mission follows Artemis I, which carried a simulated crew of three mannequins wired with sensors and completed a flyby of the Moon in 2022.

Artemis III, meanwhile, is currently earmarked for launch in 2027 and is planned to be the first crewed lunar landing since the Apollo missions of the 1960s and 70s.

Will the Artemis programme instil the same sense of awe as the Apollo missions?

In the summer of 1969 I was four years old and I have a very distinct memory of my mother calling me and my brother in from the garden to watch something on television. That something had to do with NASA’s Apollo 11 mission to the Moon.

For years, I thought that I had watched Neil Armstrong take his first steps on the Moon on live TV. I now realize that the timing was all wrong. I was in Montreal and it was daytime – whereas the walk occurred at about 11 p.m. EDT, well after my bedtime. So I was (probably) not one of the estimated 500 million people worldwide (including Pope Paul VI) who witnessed this momentous event as it happened.

Regardless of whether I watched it live or not, the first human steps on the Moon made a great impression on me – and who knows, maybe that early exposure to the cutting edge of science and technology encouraged me to pursue a career in physics.

I could be wrong, but I don’t think that the Artemis missions will instil the same awe in people as did the Apollo missions. I didn’t watch the Artemis II launch and I had a distinctly “been there, done that” feeling when I heard about its success.

Indeed, I have been left wondering exactly why the US has decided to return to the Moon now. Is it for reasons of science and exploration (possibly setting the scene for a human mission to Mars), or is this more about nationalism and colonization? I hope it is the former, because for me sending humans to the Moon and beyond is akin to blue-sky research in physics – probing the universe to expand knowledge, with the confidence that this will result in a better world.

Hamish Johnston is an online editor of Physics World

Trapped ion quantum technology gets smaller

A new integrated photonics platform can perform precision quantum experiments that were previously only possible with multiple table-top lasers and other bulky apparatus. According to its US-based developers, the new chip-scale device could find applications in quantum computing and portable optical clocks based on trapped ions.

Today’s quantum computers and optical clocks depend on a range of equipment that typically includes some combination of lasers, cryogenic coolers, vacuum chambers and optical reference cavities. The last of these can take up more than half of a device’s total volume, and it is crucial for stabilizing laser frequencies to the high precision required for controlling the quantum states of trapped ions. Such ions can serve as quantum bits (qubits) in quantum computing and can also be used for precision timekeeping in optical clocks. In the latter case, each clock “tick” is defined by the frequency of the light the ions absorb and emit as they undergo a specific, sub-Hz transition (the so-called “clock transition”) between atomic energy levels.

Miniaturizing large laser systems

Researchers led by Daniel Blumenthal of the University of California Santa Barbara (UCSB) and Robert Niffenegger at the University of Massachusetts Amherst have now shown for the first time that these large, stabilized laser systems can be replaced with small photonic chips. They used these chips to prepare and control the quantum state of strontium ions at room temperature, as well as to drive the clock transition. Though the fidelity of the system is not yet high enough to compete with the best traditionally constructed devices, Niffenegger describes it as a critical first step towards producing next-generation clocks and future quantum computers with millions of qubits. “Reaching such a goal will only be possible with such integrated quantum systems on a chip,” he explains.

Blumenthal, Niffenegger and colleagues used two components to create their chip-based stabilized laser: an integrated Brillouin laser with a wavelength of 674 nm, connected to an integrated 3 m-long coil resonator cavity designed for the same wavelength. The team characterized the stability of this laser and coil by measuring the 0.4 Hz quadrupole optical clock transition in strontium-88 (⁸⁸Sr⁺) ions trapped on a single surface electrode trap (SET) chip. This transition is one of the most precise used by quantum researchers today, and its narrow linewidth makes it relatively easy to measure using high-resolution trapped-ion spectroscopy.

“The fact that these results were achieved with the SET at room temperature is remarkable given the precision of the transition, and is a major step forward in realizing portable versions of this quantum technology,” Blumenthal says.

Making optical clocks more portable and robust

As well as being smaller than traditional lasers, the chip’s Brillouin laser emits directly at 674 nm, which removes the need for bulky frequency-conversion equipment. A further advantage is its reduced high-frequency noise, which is important for clock acquisition and qubit state-preparation fidelity, and which cannot be achieved using standard electronic feedback loops. The coil, for its part, reduces mid- and low-frequency noise, stabilizing the laser’s carrier frequency even further so that it can be locked to the precision sub-Hz trapped-ion clock transition.

According to Niffenegger, this combination of improvements enabled the team to achieve a low frequency-noise profile and a so-called Allan deviation (a measure of stability) of just 5.3 × 10⁻¹³ – an unprecedented figure for a room-temperature chip. “We can therefore prepare qubit states with high fidelity and interrogate the clock transition, which is essential for quantum computing applications,” he says.
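
For readers unfamiliar with the Allan deviation, the sketch below shows how a simple non-overlapping version of the statistic is computed from fractional-frequency data. This is an illustration of the measure itself, not the team’s actual analysis:

```python
import numpy as np

def allan_deviation(y, tau_samples):
    """Non-overlapping Allan deviation of fractional-frequency data y,
    averaged over contiguous windows of tau_samples points."""
    n = len(y) // tau_samples
    # Average the data into n contiguous blocks of length tau_samples
    y_avg = y[: n * tau_samples].reshape(n, tau_samples).mean(axis=1)
    # Allan variance: half the mean squared difference of adjacent block averages
    avar = 0.5 * np.mean(np.diff(y_avg) ** 2)
    return np.sqrt(avar)

# Example: white frequency noise at the 1e-13 fractional level
rng = np.random.default_rng(0)
y = 1e-13 * rng.standard_normal(100_000)
print(allan_deviation(y, tau_samples=100))
```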

As optical clocks become more portable and robust, they become more feasible for a greater variety of applications. The ultimate goal, says Blumenthal, is to reach a stability range of 10⁻¹⁴ to 10⁻¹⁶, which would allow optical clocks to replace GPS-based navigation on missions to the Moon and Mars. “Such clocks could also help advance fundamental science – for example, by mapping gravity and measuring orbit time around Earth for climate science, detecting gravitational waves and dark matter/energy and for general relativity measurements, to name just a few,” he explains.

Niffenegger says it is now feasible to scale the team’s integrated platform to a grid of 100 or more ions, to further improve performance. He and his colleagues are now working to integrate other experimental components (including the ion trap chip, the optical cavity chip and other photonics) onto a single, full-architecture chip that builds on their current designs. “Preliminary results already show improved performance, with further exciting developments anticipated soon,” they tell Physics World.

The present work is detailed in Nature Communications.

Counting photons could redefine the future of CT imaging

Photon-counting computed tomography (PCCT) is an advanced medical imaging technique that differs from conventional X-ray CT in that it can discriminate between the energies of individual detected photons. Offering higher spatial, spectral and contrast resolution than conventional CT, PCCT could deliver significant benefits for disease characterization and enable new diagnostic approaches.

Conventional CT measures the attenuation of X-rays after they pass through the body, enabling clinicians to monitor normal and abnormal anatomy and providing valuable information for diagnosis and treatment of disease. The advantages promised by PCCT primarily arise from the differing characteristics of the detectors: conventional CT scanners use energy-integrating detectors (EIDs), whilst PCCT systems employ photon-counting semiconductor detectors.

The effective dose from diagnostic CT procedures is estimated to be in the range of 1–10 mSv, although this can vary by a factor of 10 or more depending on patient size, the type of CT scan performed, the CT system and the operating technique. PCCT systems offer better dose efficiency than conventional CT and use energy thresholding to eliminate background electrical noise. As a result, PCCT requires lower radiation dose than standard CT – reducing the risk to the person being scanned.

Detector characteristics: limitations and advantages

Conventional CT systems use an EID to collect the total energy deposited by all incident X-ray photons. EIDs are typically composed of gadolinium oxysulfide (Gd2O2S) or cadmium tungstate (CdWO4) and comprise two layers: a solid-state scintillator placed on top of a photodiode array. The detection mechanism is a two-step, indirect process. Incoming photons hit the scintillation layer, which produces a flash of visible light. When the photodiode absorbs this light, it converts it into an electrical signal.

The photodiode array consists of individual detector elements separated by opaque, reflective walls called septa. This design prevents optical cross talk (signals transferring between adjacent channels and reducing image quality) produced by light scattering. The need for septa, however, creates “dead space” on the detector surface, which wastes X-ray dose and limits the spatial resolution since it physically restricts detector size.

As EIDs collect the total energy from all incoming photons, signals from photons of different energies are mixed together. High-energy photons will generate a higher light intensity than low-energy photons and will consequently produce a higher intensity electrical signal. This means that the final output signal will be dominated by the high-energy photons and under-weight the valuable contrast information that the low-energy photons provide. It also prevents the distinction between electrical noise and genuine low-energy photons, which further affects the achievable contrast.

PCCT scanners, on the other hand, employ photon-counting detectors that directly convert the photon energy to electric signals. These detectors consist of a semiconductor layer placed between a cathode on the upper side and an anode underneath. The anodes are pixellated to increase spatial resolution, with each pixel placed on top of an application-specific integrated circuit (ASIC).

This detector uses a direct conversion process in which a high bias voltage is applied across the semiconductor to generate electron–hole pairs when struck by an incoming photon. The strong electric field draws the clouds of charge toward the anode electrodes, creating a current. The ASIC instantly processes this current and converts it into a voltage pulse, with the height of the pulse directly proportional to the incident photon’s energy. Comparators and counters sort the photons into energy bins based on threshold values, a process that can also filter out electronic noise and enable spectral imaging.
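
To make the two read-out schemes concrete, here is a toy Python sketch (all numbers illustrative, not from any real scanner) contrasting them: the EID output is an energy-weighted sum that noise contaminates and high-energy photons dominate, while the photon-counting detector rejects sub-threshold events and counts each photon with equal weight in energy bins:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy X-ray spectrum: photon energies in keV after passing through the patient
energies = rng.uniform(20, 120, size=10_000)
# Electronic noise events, well below the diagnostic energy range
noise = rng.uniform(0, 15, size=2_000)
all_events = np.concatenate([energies, noise])

# Energy-integrating detector: sums ALL deposited energy, noise included,
# so high-energy photons dominate and noise cannot be rejected
eid_signal = all_events.sum()

# Photon-counting detector: discard events below a ~20 keV threshold,
# then count photons per energy bin with equal weight
threshold = 20.0
detected = all_events[all_events >= threshold]
bins = [20, 45, 70, 95, 120]               # comparator threshold levels
counts, _ = np.histogram(detected, bins=bins)

print(f"EID signal (arbitrary units): {eid_signal:.0f}")
print(f"PCD counts per energy bin:    {counts}")
```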

The semiconducting materials used in photon-counting detectors are typically either cadmium telluride (CdTe), cadmium zinc telluride (CZT) or silicon. The cadmium-based detectors have high stopping powers due to their high atomic number, leading to efficient absorption of X-rays via the photoelectric effect and resulting in a high spatial resolution. Another advantage of CZT and CdTe detectors is that the semiconductor can be relatively thin (roughly 2 mm), allowing the detector to be placed perpendicular to the direction of the incident X-rays.

Advanced spectral capabilities

Conventional CT relies on post-processing software to enhance image resolution and reduce the electronic noise that’s inherent to its physical hardware. But the algorithms traditionally used for image reconstruction – which include back projection, filtered back projection and iterative reconstruction algorithms – can reduce spatial resolution and cause blurring.

Deep learning-based reconstruction, meanwhile, can induce artefacts (such as generating objects that don’t exist or removing true small anatomical structures), particularly in low-dose scenarios where training data are limited. To achieve high resolution in conventional CT, a low-energy filter in the X-ray beam is needed, which increases the required radiation dose.

The PCCT detector design, with small pixel sizes and no reflective septa, makes it an inherently high-resolution technique. Image quality can be further improved using algorithms such as quantum iterative reconstruction, which has been shown to reduce image noise by up to 34.5%. Sharp convolution kernels (used to optimize the balance between noise and sharpness) are needed to ensure that the image produced maintains the high resolution provided by the detector.

K-edge subtraction imaging

The ability of PCCT to distinguish photon energy also allows for material decomposition, which enables the generation of a range of advanced images. These include virtual monoenergetic images, reconstructed at a single energy level to amplify contrast agents without increasing dose, and virtual non-contrast images, which allow digital subtraction of particular materials without needing another scan. PCCT can also be used for K-edge imaging, in which contrast agents are isolated based on their characteristic K-edge energies.
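
A toy numpy sketch illustrates the idea of two-material decomposition (our own example, with made-up attenuation values): if each energy bin’s measured attenuation is a linear combination of two basis materials with known energy dependence, the material amounts follow from a small linear solve:

```python
import numpy as np

# Rows: energy bins; columns: mass-attenuation of (water, iodine)
# at each bin's effective energy (illustrative values, cm^2/g)
A = np.array([[0.25, 12.0],    # low-energy bin: iodine K-edge boosts attenuation
              [0.18,  4.0]])   # high-energy bin

# Simulated measured line integrals (attenuation x thickness) per bin
b = np.array([1.45, 0.58])

# Solve for the areal densities of water and iodine along the ray
water, iodine = np.linalg.solve(A, b)
print(f"water: {water:.3f} g/cm^2, iodine: {iodine:.4f} g/cm^2")
```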

Clinical applications

The technical advantages of PCCT have significantly improved the diagnostic applications of CT across a plethora of medical disciplines.

For instance, a prospective study of 200 adults with lung cancer who underwent both PCCT and EID CT showed that PCCT outperformed conventional CT in lung cancer management. The key findings were that PCCT delivered a lower effective radiation dose (1.36 mSv, compared with 4.04 mSv for EID CT); a lower load of iodine, a dye used to increase image contrast (20.6 g versus 28.1 g); and higher detection rates and diagnostic confidence for enhancement-related malignant features.

Similarly, in a study of CT pulmonary angiography, PCCT reduced the total iodine load by 26.7% and the CT dose index volume by 24.4% compared with EID CT. This potentially lowers patient risk, as well as providing environmental and financial benefits.

Within coronary imaging, PCCT enables characterization of coronary artery disease and plaque and shows promise in coronary artery calcium quantification by reducing blooming artefacts (where small, high-density structures like calcium appear larger than their true size). PCCT can also provide high-resolution imaging of the lumen for evaluation of coronary stents and assessment of myocardial tissue and perfusion.

The higher dose efficiency of PCCT makes it particularly effective in paediatric applications, as children are more radiosensitive than adults. Children also have smaller organs, making the ultrahigh resolution provided by PCCT especially helpful, for example, in the detection of tiny, complex heart defects in neonates and infants.

As of early 2025, there were two US Food and Drug Administration (FDA)-cleared PCCT systems in clinical use: the NAEOTOM Alpha from Siemens Healthineers and Samsung Healthcare’s OmniTom Elite. And just last month, the Extremity Scanner System from MARS Bioimaging and GE HealthCare’s Photonova Spectra photon-counting CT both received FDA clearance. Other clinical prototypes include systems from Canon Medical Systems and Philips Healthcare.

Ongoing challenges

As with any emerging technology, challenges remain to be solved. With photon-counting detectors, these include effects such as pulse pile-up, charge sharing, K-escape and Compton scattering.

Pulse pile-up occurs when two or more photons arrive at the detector almost simultaneously, which may result in them being recorded as a single photon. This leads to errors both in the energies calculated at the detector and in the photon counts. Separately, if a single photon strikes near the boundary between two pixels, it may be detected as having lower energy than it actually has. This effect, known as charge sharing, degrades the spectral and spatial resolution of the CT image.
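
The scale of the pile-up problem can be estimated with Poisson statistics: for photons arriving at rate r on a detector with resolving time τ, roughly a fraction 1 − exp(−rτ) of events overlap with a neighbour. This is a back-of-the-envelope sketch with illustrative numbers, not a full detector model:

```python
import math

def pileup_fraction(rate_per_s: float, resolving_time_s: float) -> float:
    """Fraction of counts arriving within one resolving time of a
    previous photon, assuming Poisson arrivals."""
    return 1.0 - math.exp(-rate_per_s * resolving_time_s)

# Illustrative numbers: 10^7 photons/s on one pixel, 20 ns pulse width
print(f"{pileup_fraction(1e7, 20e-9):.1%} of events piled up")
```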

Due to their high atomic number, cadmium detectors are also susceptible to an effect known as “K-escape”, in which incident X-rays produce fluorescence that’s detected as a separate event. Compton scattering occurs when a secondary photon produced in the semiconductor material is registered as a separate event, underestimating the real energy value.

Finally, manufacturing the semiconductor materials used in PCCT is expensive – PCCT scanners can cost in excess of £2 million. And the large data sets generated by multi-energy scanning require a large amount of computing power and time to process and reconstruct.

Future impact

PCCT is a highly promising technology that replaces traditional indirect detection mechanisms with direct detection using semiconducting materials. PCCT offers superior image quality due to higher spatial and spectral resolution, higher dose efficiency and the ability to perform quantitative imaging. The multi-energy capabilities of PCCT shift the image from providing purely structural information to also include functional information.

Current clinical use is limited mainly due to cost rather than diagnostic capability, with a lack of clinical studies making the high cost difficult to justify. However, the potential impacts for optimizing healthcare could be vast. Perhaps it is inevitable that, as costs decrease with evolving technology, the clinical use of PCCT will overtake conventional CT in the future and become the standard CT technique.

Invisible force of nature: what the wind does for us

In recent years the news has been dominated by devastating hurricanes, cyclones, tornadoes, wildfires and floods, and data show that these hazardous events are increasing in frequency and strength. It is clear that our weather is becoming more extreme, with a warming world adding more energy to the atmosphere and increasing the power of these wind-fuelled events.

With this in mind, Simon Winchester’s opening question in The Breath of the Gods: the History and Future of the Wind might surprise readers: are Earth’s winds slowing down? There was, indeed, a decrease in wind speeds over land between the 1980s and 2010, which was ominously dubbed the Great Stilling. In fact, observations show a decrease in average wind speeds over land of between 5 and 15% over the last 50 years. So what is going on?

Winchester – a writer and journalist with a background in geology – starts his quest to discover more atop the windiest place in the world, the summit of Mount Washington. With delicious irony, he finds the anemometers are still and a very rare calm hangs in the air.

He goes on to build the case for exceptional weather becoming the norm. He covers recent examples of extreme wind events, such as the exceedingly hot and dry Santa Ana winds of January 2025, which fed the dramatic and devastating wildfires that ripped through suburbs of Los Angeles; the record-breaking storms that pounded Europe during 2024 and 2025; and the freak tornado in March 2023 that killed 17 people and razed the town of Rolling Fork, Mississippi, to the ground.

Ever-present element

This book isn’t simply a tour of wind-related disasters, however. Winchester takes us back through thousands of years of human history, to explore how wind influenced some of the earliest civilizations. The first recorded mention of the wind comes from 5000 years ago, in the ancient kingdom of Sumer (now south-eastern Iraq). People there identified four different prevailing winds and attributed their characteristics to four different gods. This classification system persists to this day, with our familiar north, east, south and west winds originating from these four mythological Mesopotamian winds.

For much of history humans have made use of the wind: from propelling pioneering populations in tiny boats across the Pacific Ocean some 5000 years ago, to enabling human flight; from milling grain and pumping water with windmills, to using them to generate energy. But it is only in more recent times that we have started to map and understand the major winds on our planet and the role they play in making it habitable.

Winchester romps through the science. We learn how the wind has pummelled, shaped and moulded the Earth since time immemorial, and how the winds work in tandem with the oceans, constantly transporting energy from equator to poles and preventing the planet from overheating. He also introduces key characters along the way, such as Brigadier Ralph Bagnold, a British army engineer. Bagnold used wind tunnel experiments and his extensive desert experience to understand the physics of windblown grains and the circumstances that create everything from tiny ripples in sand, to mighty marching barchan dunes.

Not quite blown away

But it is when the wind works against us that its might is truly revealed, and Winchester devotes an entire chapter to inclement winds. He starts by transporting us into the wretched five years of the American Great Depression in the 1930s, when terrible dust storms tore the topsoil from the prairie states of Oklahoma, Texas, Kansas, Colorado and Nebraska, resulting in starvation and mass migration. We hear how the arrival of the settlers and farming technology triggered this tragedy, with steel-bladed ploughs ripping through the soil and tearing up the grasses that had previously glued the soil to the land.

However, this is a tale that ends well, with President Roosevelt taking sound advice and devising an audacious plan to fix it. As a result, some 220 million trees were planted in a series of windbreaks stretching from the Canadian border down to central Texas. These restored prosperous and stable farmland to the American Midwest, and survive to this day.

Writing a book about this invisible force of nature could be stuffy, but Winchester brings his trademark curiosity and storytelling to the fore. He whisks readers through history and around the world, inserting himself into the story and pulling out the human impacts that bring the topic alive.

But while it’s a thoroughly enjoyable read, The Breath of the Gods lacks a thread to hold the book together. And most frustratingly, it fails to really return to answer the opening question about what’s behind the slowing winds. I would have liked a bit more science – particularly in understanding the impact that climate change is having on the wind – but for those looking for an accessible read with lots of fascinating weather anecdotes to regale friends with, this book won’t disappoint.

  • 2025 William Collins 416pp £25hb £11.99ebook

The mathematics of quantum entanglement

Most headline-grabbing advances in quantum mechanics today are experimental in nature: more qubits, more entangled particles, fewer errors.

Often overlooked are the advances in the mathematics that underpins the behaviour of these quantum systems.

The walled Brauer algebra is an abstract but increasingly important mathematical structure that appears in quantum information theory whenever physicists study particles, symmetries and transformations involving permutations and partial transposition.

Work in this area inevitably leads to the question of how a system transforms when particles are permuted or when one part of a composite object is flipped (transposed) while the rest is left untouched. Collect all such operations together and you get the walled Brauer algebra. It plays an important role in the mathematical description of problems ranging from entanglement detection to advanced teleportation schemes.
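
As a concrete example of partial transposition at work, here is a short numpy sketch (our own illustration, not from the paper) applying the Peres–Horodecki entanglement test to a two-qubit Bell state: transposing only the second qubit of an entangled state produces a negative eigenvalue:

```python
import numpy as np

# Bell state |phi+> = (|00> + |11>)/sqrt(2) as a density matrix
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi)

# Partial transpose on the second qubit: reshape to (2,2,2,2) and
# swap the two indices belonging to subsystem B
rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

# A negative eigenvalue certifies entanglement (PPT criterion)
print(np.linalg.eigvalsh(rho_pt))   # one eigenvalue is -0.5
```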

The problem is that this algebra is famously intricate. Until now, physicists have only been able to describe its structure using methods that do not fully align with the natural symmetries of the system, making calculations heavy and sometimes opaque.

The new work changes that. The authors have developed an iterative construction that builds the algebra piece by piece, revealing its architecture in a symmetry-compatible way. Instead of a tangled hierarchy, the algebra unfolds into independent components, each shaped by the action of two symmetric groups.

The result is not just a more elegant mathematical picture; it is also a new framework that can make symmetry-based analysis of complex quantum-information problems more systematic and transparent.

This matters now more than ever. Quantum technologies increasingly involve many-particle configurations where symmetry is both a feature and a challenge. Teleportation schemes that move quantum information without moving particles, algorithms that manipulate unknown quantum operations, and proposals for higher-order quantum processes all rely on understanding how transformations behave under symmetry.

By clarifying this structure, the new framework could help researchers analyse these settings more effectively and support the development of better-controlled entanglement- and teleportation-based protocols.

Read the full article

Iterative construction of group-adapted irreducible matrix units for the walled Brauer algebra

M. Horodecki et al 2026 Rep. Prog. Phys. 89 027601

Revealing the magic in hybrid quantum systems

“Magic” is the property that determines whether a quantum system can outperform even the fastest classical supercomputer. Until now, scientists could quantify magic in systems of qubits, but not in systems of bosons such as photons, or in hybrid devices of coupled bosons and spins like those used in real quantum hardware.

In this new work, a team of researchers from Taiwan and Japan proposed the first unified way to measure magic in systems that combine both spins and bosons. These hybrid platforms appear everywhere from superconducting circuits to trapped-ion quantum processors. However, the quantum resources inside them have remained difficult to identify.

The team’s new framework uses the shape of a quantum state in phase space to define a family of magic entropies that apply cleanly to qubits, bosons and crucially, the interactions between them.

To test the idea, the researchers examined the Dicke model, a paradigmatic system in which many spins couple to a single light field. As the system approaches a superradiant phase transition (a dramatic collective reorganisation), the shared non-classical behaviour across both spins and photons – the hybrid magic – peaks, providing another way to identify the critical point alongside familiar tools such as entanglement. Another interesting result is that, in the finite systems studied here, the quantum magic in the spin sector increases sharply, while the bosonic magic saturates to a finite value. This contrast suggests that these measures capture different aspects of the quantum state.

The team also analysed how magic evolves dynamically in the Jaynes–Cummings model, where a single spin and a single photon exchange energy. As the two systems swap excitations, magic flows back and forth and behaves differently in the bosonic and spin parts, providing a picture of how computational power migrates through a quantum device in real time.

As quantum computers grow more complex, scientists and engineers need reliable ways to diagnose which parts of their machines produce genuine quantum advantage. This new framework gives them a powerful tool to do just that, and it’s one that works not just for qubits, but for the hybrid architectures likely to define the next generation of quantum technologies.

Read the full article

Magic entropy in hybrid spin-boson systems

S. Crew et al 2026 Rep. Prog. Phys. 89 027602

 

Copyright © 2026 by IOP Publishing Ltd and individual contributors