A cosmic void may help resolve the Hubble tension

A large, low-density region of space surrounding the Milky Way may explain one of the most puzzling discrepancies in modern cosmology. Known as the Hubble tension, the issue arises from conflicting measurements of how fast the universe is expanding. Now, a new study suggests that the presence of a local cosmic void could explain this mismatch and significantly improve agreement with observations compared with the Standard Model of cosmology.

“Numerically, the local measurements of the expansion rate are 8% higher than expected from the early universe, which amounts to over six times the measurement uncertainty,” says Indranil Banik, a cosmologist at the University of Portsmouth and a collaborator on the study. “It is by far the most serious issue facing cosmology.”

The Hubble constant describes how fast the universe is expanding, and it can be estimated in two main ways. One method involves looking far into the past by observing the cosmic microwave background (CMB). This is radiation that was created shortly after the Big Bang and permeates the universe to this day. The other method relies on the observation of relatively nearby objects, such as supernovae and galaxies, to measure how fast space is expanding in our own cosmic neighbourhood.

If the Standard Model of cosmology is correct, these two approaches should yield the same result. But they do not. Instead, local measurements suggest the universe is expanding faster than early-universe data imply, and the disagreement is too large to dismiss as experimental error.
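
As a rough sense of scale, plugging in representative published values (roughly 67.4 km/s/Mpc from CMB fits and roughly 73 km/s/Mpc from local distance-ladder measurements – illustrative numbers, not figures from the new study) reproduces the 8% gap Banik describes:

```python
# Representative (approximate) Hubble-constant values in km/s/Mpc.
# Illustrative numbers only, not figures taken from the new study.
H0_early = 67.4   # inferred from the cosmic microwave background
H0_local = 73.0   # inferred from nearby supernovae and galaxies

discrepancy = (H0_local - H0_early) / H0_early
print(f"Local value exceeds the early-universe value by {discrepancy:.1%}")
# -> roughly 8%, the size of the tension Banik quotes
```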

Local skewing

One possible explanation is that something about our local environment is skewing the results. “The idea is that we are in a region of the universe that is about 20% less dense than average out to a distance of about one billion light years,” Banik explains. “There is actually a lot of evidence for a local void from number counts of various kinds of sources across nearly the whole electromagnetic spectrum, from radio to X-rays.”

Such a void would subtly affect how we interpret the redshifts of galaxies. This is the stretching of the wavelength of galactic light that reveals how quickly a galaxy is receding from us. In an underdense (relatively low-density) region, galaxies are effectively pulled outward by the gravity of surrounding denser areas. This motion adds to the redshift caused by the universe’s overall expansion, making the local expansion rate appear faster than it actually is.
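
A minimal sketch of why an outflow mimics faster expansion: a galaxy’s measured recession velocity is the cosmological term H0 × distance plus the extra outflow velocity, so dividing the total by the distance overestimates H0. All numbers below are made-up assumptions for illustration, not values from the study.

```python
# Illustrative only: an outflow adds to the cosmological recession velocity,
# so the Hubble constant inferred from nearby galaxies comes out too high.
H0_true = 67.0      # assumed "true" expansion rate, km/s/Mpc
d = 150.0           # assumed distance to a nearby galaxy, Mpc
v_outflow = 600.0   # assumed extra velocity from the pull of denser surroundings, km/s

v_observed = H0_true * d + v_outflow   # total recession velocity we would measure
H0_apparent = v_observed / d           # naive local estimate of the Hubble constant

print(f"Apparent H0 = {H0_apparent:.1f} km/s/Mpc versus a true value of {H0_true}")
# -> 71.0 versus 67.0: the outflow masquerades as faster local expansion
```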

“The origin of such a [void] would trace back to a modest underdensity in the early universe, believed to have arisen from quantum fluctuations in density when the universe was extremely young and dense,” says Banik. However, he adds, “A void as large and deep as observed is not consistent with the standard cosmological model. You would need structure to grow faster than it predicts on scales larger than about one hundred million light-years”.

Testing the theory

To evaluate whether the void model holds up against data, Banik and his collaborator Vasileios Kalaitzidis at the UK’s University of St Andrews compared it with one of cosmology’s most precise measurement tools: baryon acoustic oscillations (BAOs). These are subtle ripples in the distribution of galaxies that were created by sound waves in the early universe and then frozen into the large-scale structure of space as it cooled.

Because these ripples provide a characteristic distance scale, they can be used as a “standard ruler” to track how the universe has expanded over time. By comparing the apparent size of this ruler observed at different distances, cosmologists can map the universe’s expansion history. Crucially, if our galaxy lies inside a void, that would alter how the ruler appears locally, in a way that can be tested.
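
The ruler logic can be sketched in a couple of lines: if the physical size of the BAO scale is known, its apparent angular size on the sky fixes the distance to that redshift. The sound-horizon value and the angle below are illustrative assumptions, not numbers from the study.

```python
import math

# Illustrative standard-ruler geometry; numbers are assumptions, not study values.
r_ruler_mpc = 150.0   # approximate comoving BAO scale (sound horizon), Mpc
theta_deg = 4.0       # hypothetical apparent angular size of the ruler on the sky

distance_mpc = r_ruler_mpc / math.radians(theta_deg)   # small-angle estimate
print(f"Inferred distance ~ {distance_mpc:.0f} Mpc")
# A local void alters the nearby redshift-distance relation, so the ruler's
# apparent size at low redshift is where the models differ most.
```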

The researchers compared the predictions of their model with twenty years of BAO observations, and the results are striking. “BAO observations over the last twenty years show the void model is about one hundred million times more likely than the Standard Model of cosmology without any local void,” says Banik. “Importantly, the parameters of all these models were fixed without considering BAO data, so we were really just testing the predictions of each model.”
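
A likelihood ratio of that size is often quoted as a difference in log-evidence; the conversion is one line (the factor of one hundred million is the only number taken from the quote):

```python
import math

bayes_factor = 1e8   # "about one hundred million times more likely"
print(f"Equivalent log-evidence difference ~ {math.log(bayes_factor):.1f}")
# -> about 18.4, far beyond the conventional threshold for "decisive" evidence
```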

What lies ahead

While the void model appears promising, Banik says that more data are needed. “Additional BAO observations at relatively short distances would help a lot because that is where a local void would have the greatest impact.” Other promising avenues include measuring galaxy velocities and refining galaxy number counts. “I would suggest that it can be essentially confirmed in the next five to ten years, since we are talking about the nearby universe after all.”

Banik is also analysing supernova data to explore whether the Hubble tension disappears at greater distances. “We are testing if the Hubble tension vanishes in the high-redshift or more distant universe, since a local void would not have much effect that far out,” he says.

Despite the challenges, Banik remains optimistic. With improved surveys and more refined models, cosmologists may be closing in on a solution to the Hubble tension.

The research is described in Monthly Notices of the Royal Astronomical Society.

Lee Packer: ‘There’s no fundamental physical reason why fusion energy won’t work’

The Cockcroft-Walton lecture series is a bilateral exchange between the Institute of Physics (IOP) and the Indian Physics Association (IPA). Running since 1998, it aims to promote dialogue on global challenges through physics.

Lee Packer, who has over 25 years of experience in nuclear science and technology and is an IOP Fellow, delivered the 2025 Cockcroft-Walton Lecture Series in April. Packer gave a series of lectures at the Bhabha Atomic Research Centre (BARC) in Mumbai, the Institute for Plasma Research (IPR) in Ahmedabad and the Inter-University Accelerator Centre (IUAC) in Delhi.

Packer is a Fellow of the UK Atomic Energy Authority (UKAEA), where he works on nuclear aspects of fusion technology. He also works as a consultant to the International Atomic Energy Agency (IAEA) in Vienna, where he is based in the physics section of the department of nuclear sciences and applications.

Packer also holds an honorary professorship at the University of Birmingham, where he lectures on nuclear fusion as part of their long-running MSc course in the physics and technology of nuclear reactors.

Below, Packer talks to Physics World about the trip, his career in fusion and what advice he has for early-career researchers.

When did you first become interested in physics?

I was fortunate to have some inspiring teachers at school who made physics feel both exciting and full of possibility. It really brought home how important teachers are in shaping future careers and they deserve far more recognition than they often receive. I went on to study physics at Salford University and during that time spent a year on industrial placement at the ISIS Neutron and Muon Source based at the Rutherford Appleton Laboratory (RAL). That year deepened my interest in applied nuclear science and highlighted the immense value of neutrons across real-world applications – from materials research and medicine to nuclear energy.

Can you tell me about your career to date?

I’ve specialised in applied nuclear science throughout my career, with a particular focus on neutronics – the analysis of neutron transport – and radiation detection applied to nuclear technologies. Over the past 25 years, I’ve worked across the nuclear sector – in spallation, fission and fusion – beginning in analytical and research roles before progressing to lead technical teams supporting a broad range of nuclear programmes.

When did you start working in fusion?

While I began my career in spallation and fission, the expertise I developed in neutronics made it a natural transition into fusion in 2008. It’s important to recognise that deuterium-tritium fuelled fusion power is a neutron-rich energy source – in fact, 80% of the energy released comes from neutrons. That means every aspect of fusion technology must be developed with the nuclear environment firmly in mind.
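
The 80% figure follows from the kinematics of the deuterium-tritium reaction, in which the 17.6 MeV released is shared between a helium-4 nucleus and a neutron. The reaction energies below are standard textbook values rather than numbers quoted in the interview:

```python
# Standard D-T reaction: D + T -> He-4 (3.5 MeV) + n (14.1 MeV)
E_alpha_MeV = 3.5      # energy carried by the helium-4 nucleus
E_neutron_MeV = 14.1   # energy carried by the neutron

neutron_fraction = E_neutron_MeV / (E_alpha_MeV + E_neutron_MeV)
print(f"Fraction of fusion energy carried by neutrons: {neutron_fraction:.0%}")
# -> about 80%, which is why every part of a fusion plant must be designed
#    with the neutron environment in mind
```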

What do you like about working in fusion energy?

Fusion is an inherently interdisciplinary challenge and there are many interesting and difficult problems to solve, which can make it both stimulating and rewarding. There’s also a strong and somewhat refreshing international spirit in fusion – the hard challenges mean collaboration is essential. I also like working with early-career scientists and engineers to share knowledge and experience. Mentoring and teaching is rewarding, and it’s crucial that we continue building the pipelines of talent needed for fusion to succeed.

Tell me about your trip to India to deliver the Cockcroft-Walton lecture series?

I was honoured to be selected to deliver the Cockroft-Walton lecture series. Titled “Perspectives and challenges within the development of nuclear fusion energy”, the lectures explored the current global landscape of fusion R&D, technical challenges in areas such as neutronics and tritium breeding, and the importance of international collaboration. I shared some insights from activities within the UK and gave a global perspective. The reception was very positive – there’s strong enthusiasm within the Indian fusion community and they are making excellent contributions to global progress in fusion. The hosts were extremely welcoming, and I’d like to thank them for their hospitality and the fascinating technical tours at each of the institutes. It was an experience I won’t forget.

What are India’s strengths in fusion?

India has several strengths including a well-established technical community, major national laboratories such as IPR, IUAC and BARC, and significant experience in fusion through its domestic programme and direct involvement in ITER as one of the seven member states. There is strong expertise in areas such as nuclear physics, neutronics, materials, diagnostics, and plasma physics.

Lee Packer meeting officials at BARC

What could India improve?

Where India might improve could be in building further on its amazing potential – particularly its broader industrial capacity and developing its roadmap towards power plants. Common to all countries pursuing fusion, sustained investment in training and developing talented people will be key to long-term success.

When do you think we will see the first fusion reactor supplying energy to the grid?

I can’t give a definitive answer for when fusion will supply electricity to the grid as it depends on resolving some tough, complex technical challenges alongside sustained political commitment and long-term investment. There’s a broad range of views and industrial strategies being developed within the field. For example, the UK Government’s recently published clean energy industrial strategy mentions the Spherical Tokamak for Energy Production programme, which aims to deliver a prototype fusion power plant by 2040 at West Burton, Nottinghamshire, at the site of a former coal power station. The Fusion Industry Association’s survey of private fusion companies reports that many are aiming for fusion-generated electricity by the late 2030s, though time projections vary.

There are others who say it may never happen?

Yes. On the other hand, some point to several critical hurdles, offer more cautious perspectives and call for greater realism. One such problem, close to my own interest in neutronics, is the need to demonstrate tritium-breeding blanket technology and to develop lithium-6 supplies at the scale the industry will require.

What are the benefits of doing so?

The potential benefits for society are too significant to disregard on the grounds of difficulty alone. There’s no fundamental physical reason why fusion energy won’t work, and the journey itself brings substantial value. The technologies developed along the way have potential for broader applications, and the effort builds a highly skilled and adaptable workforce.

What advice do you have for early-career physicists thinking about working in the field?

Fusion needs strong collaboration between people from across the board – physicists, engineers, materials scientists, modellers, and more. It’s an incredibly exciting time to get involved. My advice would be to keep an open mind and seek out opportunities to work across these disciplines. Look for placements, internships, graduate or early-career positions and mentorship – and don’t be afraid to ask questions. There’s a brilliant international community in fusion, and a willingness to support those kick-starting their careers in this field. Join the effort to develop this technology and you’ll be part of something that’s not only intellectually stimulating and technically challenging but is also important for the future of the planet.

UK ‘well positioned’ to exploit space manufacturing opportunities, says report

The UK should focus on being a “responsible, intelligent and independent leader” in space sustainability and can make a “major contribution” to the area. That’s the verdict of a new report from the Institute of Physics (IOP), which warns, however, that such a move is possible only with significant investment and government backing.

The report, published together with the Frazer-Nash Consultancy, examines the physics that underpins the space science and technology sector. It also looks at several companies that work on services such as position, navigation and timing (PNT), Earth observation and satellite communications.

In 2021/22 PNT services contributed over 12%, or about £280bn, to the UK’s gross domestic product – and without them many critical national infrastructures, such as the financial and emergency systems, would collapse. The report says, however, that while the UK depends more than ever on global navigation satellite systems (GNSS), that reliance also exposes the country to the systems’ weaknesses.

“The scale and sophistication of current and potential PNT attacks has grown (such as increased GPS signal jamming on aeroplanes) and GNSS outages could become commonplace,” the report notes. “Countries and industries that address the issue of resilience in PNT will win the time advantage.”

Telecommunication satellite services contributed £116bn to the UK in 2021/22, while Earth observation and meteorological satellite services supported industries contributing an estimated £304bn. The report calls the future of Earth observation “bold and ambitious”, with satellite data resolving “the disparities with the quality and availability of on-the-ground data, exacerbated by irregular dataset updates by governments or international agencies”.

Future growth

As for future opportunities, the report highlights “in-space manufacturing”, with companies seeing “huge advantages” in making drugs, harvesting stem cells and growing crystals through in-orbit production lines. The report says that In-Orbit Servicing and Manufacturing could be worth £2.7bn per year to the UK economy but central to that vision is the need for “space sustainability”.

The report adds that the UK is “well positioned” to lead in sustainable space practices due to its strengths in science, safety and sustainability, which could lead to the creation of many “high-value” jobs. Yet this move, the report warns, demands an investment of time, money and expertise.

“This report captures the quiet impact of the space sector, underscoring the importance of the physics and the physicists whose endeavours underpin it, and recognising the work of IOP’s growing network of members who are both directly and indirectly involved in space tech and its applications,” says Alex Davies from the Rutherford Appleton Laboratory, who founded the IOP Space Group and is currently its co-chair.

Particle physicist Tara Shears from the University of Liverpool, who is IOP vice-president for science and innovation, told Physics World that future space tech applications are “exciting and important”. “With the right investment, and continued collaboration between scientists, engineers, industry and government, the potential of space can be unlocked for everyone’s benefit,” she says. “The report shows how physics hides in plain sight; driving advances in space science and technology and shaping our lives in ways we’re often unaware of but completely rely on.”

Accounting for skin colour increases the accuracy of Cherenkov dosimetry

Cherenkov dosimetry is an emerging technique used to verify the dose delivered during radiotherapy, by capturing Cherenkov light generated when X-ray photons in the treatment beam interact with tissue in the patient. The initial intensity of this light is proportional to the deposited radiation dose – providing a means of non-contact in vivo dosimetry. The intensity emitted at the skin surface, however, is highly dependent on the patient’s skin colour, with increasing melanin absorbing more Cherenkov photons.

To increase the accuracy of dose measurements, researchers are investigating ways to calibrate the Cherenkov emission according to skin pigmentation. A collaboration headed up at Dartmouth College and Moffitt Cancer Center has now studied Cherenkov dosimetry in patients with a wide spectrum of skin tones. Reporting their findings in Physics in Medicine & Biology, they show how such a calibration can mitigate the effect of skin pigmentation.

“Cherenkov dosimetry is an interesting prospect because it gives us a completely passive, fly-on-the-wall approach to radiation dose verification. It does not require taping of detectors or wires to the patient, and allows for a broader sampling of the treatment area,” explains corresponding author Jacqueline Andreozzi. “The hope is that this would allow for safer, verifiable radiation dose delivery consistent with the treatment plan generated for each patient, and provide a means of assessing the clinical impact when treatment does not go as planned.”

Illustration of Cherenkov dosimetry

A diverse patient population

Andreozzi, first author Savannah Decker and their colleagues examined 24 patients undergoing breast radiotherapy using 6 or 15 MV photon beams, or a combination of both energies.

During routine radiotherapy at Moffitt Cancer Center the researchers measured the Cherenkov emission from the tissue surface (roughly 5 mm deep) using a time-gated, intensified CMOS camera installed in the bunker ceiling. To minimize effects from skin reactions, they analysed the earliest fraction of each patient’s treatment.

Medical physicist Savannah Decker

Patients with darker skin exhibited up to five times lower Cherenkov emission than those with lighter skin for the same delivered dose – highlighting the significant impact of skin pigmentation on Cherenkov-based dose estimates.

To assess each patient’s skin tone, the team used standard colour photography to calculate the relative skin luminance as a metric for pigmentation. A colour camera module co-mounted with the Cherenkov imaging system simultaneously recorded an image of each patient during their radiation treatments. The room lighting was standardized across all patient sessions and the researchers only imaged skin regions directly facing the camera.

In addition to skin pigmentation, subsurface tissue properties can also affect the transmission of Cherenkov light. Different tissue types – such as dense fibroglandular or less dense adipose tissue – have differing optical densities. To compensate for this, the team used routine CT scans to establish an institution-specific CT calibration factor (independent of skin pigmentation) for the diverse patient dataset, using a process based on previous research by co-author Rachael Hachadorian.

Following CT calibration, the Cherenkov intensity per unit dose showed a linear relationship with relative skin luminance, for both 6 and 15 MV beams. Encouraged by this observed linearity, the researchers generated linear calibration factors based on each patient’s skin pigmentation, for application to the Cherenkov image data. They note that the calibration can be incorporated into existing clinical workflows without impacting patient care.
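
A minimal sketch of how such a two-step correction might look in code. The functional form (a multiplicative CT-derived factor and a factor linear in relative skin luminance) follows the description above, but the function name, coefficients and variable names are hypothetical placeholders rather than the study’s actual calibration:

```python
def estimate_surface_dose(cherenkov_intensity, ct_factor, skin_luminance,
                          a=1.0, b=0.0):
    """Convert a raw Cherenkov signal into an estimated surface dose.

    cherenkov_intensity : mean Cherenkov signal in the region of interest
    ct_factor           : institution-specific correction for subsurface tissue
                          optical properties, derived from the planning CT
    skin_luminance      : relative skin luminance from the colour camera
    a, b                : hypothetical coefficients of the linear pigmentation
                          calibration, fitted per institution
    """
    pigmentation_factor = a * skin_luminance + b   # linear in luminance
    return cherenkov_intensity / (ct_factor * pigmentation_factor)
```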

Improving the accuracy

To test the impact of their calibration factors, the researchers first plotted the mean uncalibrated Cherenkov intensity as a function of mean surface dose (based on the projected dose from the treatment planning software for the first 5 mm of tissue) for all patients. For 6 MV beams, this gave an R2 value (a measure of how well the data follow the linear fit) of 0.81. For 15 MV treatments, R2 was 0.17, indicating lower Cherenkov-to-dose linearity.

Applying the CT calibration to the diverse patient data did not improve the linearity. However, applying the pigmentation-based calibration had a significant impact, improving the R2 values to 0.91 and 0.64, for 6 and 15 MV beams, respectively. The highest Cherenkov-to-dose linearity was achieved after applying both calibration factors, which resulted in R2 values of 0.96 and 0.91 for 6 and 15 MV beams, respectively.

Using only the CT calibration, the average dose errors (the mean difference between the estimated and reference dose) were 38% and 62% for 6 and 15 MV treatments, respectively. The pigmentation-based calibration reduced these errors to 21% and 6.6%.
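
For reference, the two figures of merit quoted here can be computed as below. This is a generic sketch of the standard coefficient of determination and a mean relative dose difference, offered as context; the study’s exact error definition is not spelled out in the article.

```python
import numpy as np

def r_squared(reference_dose, estimated_dose):
    """Coefficient of determination: fraction of variance captured by the fit."""
    residual = np.sum((reference_dose - estimated_dose) ** 2)
    total = np.sum((reference_dose - np.mean(reference_dose)) ** 2)
    return 1.0 - residual / total

def mean_dose_error_percent(reference_dose, estimated_dose):
    """Mean relative difference between estimated and reference dose, in percent."""
    return 100.0 * np.mean(np.abs(estimated_dose - reference_dose) / reference_dose)
```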

“Integrating colour imaging to assess patients’ skin luminance can provide individualized calibration factors that significantly improve Cherenkov-to-dose estimations,” the researchers conclude. They emphasize that this calibration is institution-specific – different sites will need to derive a calibration algorithm corresponding to their specific cameras, room lighting and beam energies.

Bringing quantitative in vivo Cherenkov dosimetry into routine clinical use will require further research effort, says Andreozzi. “In Cherenkov dosimetry, the patient becomes their own dosimeter, read out by a specialized camera. In that respect, it comes with many challenges – we usually have standardized, calibrated detectors, and patients are in no way standardized or calibrated,” Andreozzi tells Physics World. “We have to characterize the superficial optical properties of each individual patient in order to translate what the cameras see into something close to radiation dose.”

Physicists take ‘snapshots’ of quantum gases in continuous space

Three teams of researchers in the US and France have independently developed a new technique to visualize the positions of atoms in real, continuous space, rather than at discrete sites on a lattice. By applying this method, the teams captured “snapshots” of weakly interacting bosons, non-interacting fermions and strongly interacting fermions and made in-situ measurements of the correlation functions that characterize these different quantum gases. Their work constitutes the first experimental measurements of these correlation functions in continuous space – a benchmark in the development of techniques for understanding fermionic and bosonic systems, as well as for studying strongly interacting systems.

Quantum many-body systems exhibit a rich and complex range of phenomena that cannot be described by the single-particle picture. Simulating such systems theoretically is thus rather difficult, as their degrees of freedom (and the corresponding size of their quantum Hilbert spaces) increase exponentially with the number of particles. Highly controllable quantum platforms like ultracold atoms in optical lattices are therefore useful tools for capturing and visualizing the physics of many-body phenomena.

The three research groups followed similar “recipes” in producing their atomic snapshots. First, they prepared a dilute quantum gas in an optical trap created by a lattice of laser beams. This lattice was configured such that the atoms experienced strong confinement in the vertical direction but moved freely in the xy-plane of the trap. Next, the researchers suddenly increased the strength of the lattice in the plane to “freeze” the atoms’ motion and project their positions onto a two-dimensional square lattice. Finally, they took snapshots of the atoms by detecting the fluorescence they produced when cooled with lasers. Importantly, the density of the gases was low enough that the separation between two atoms was larger than the spacing between the sites of the lattice, facilitating the measurement of correlations between atoms.

What does a Fermi gas look like in real space?

One of the three groups, led by Tarik Yefsah in Paris’ Kastler Brossel Laboratory (KBL), studied a non-interacting two-dimensional gas of fermionic lithium-6 (6Li) atoms. After confining a low-density cloud of these atoms in a two-dimensional optical lattice, Yefsah and colleagues registered their positions by applying a technique called Raman sideband laser cooling.

The KBL team’s experiment showed, for the first time, the shape of a parameter called the two-point correlator (g2) in continuous space. These measurements clearly demonstrated the existence of a “Fermi hole”: at small interatomic distances, the value of this two-point correlator tends to zero, but as the distance increases, it tends to one. This behaviour was expected, since the Pauli exclusion principle makes it impossible for two fermions with the same quantum numbers to occupy the same position. However, the paper’s first author Tim de Jongh, who is now a postdoctoral researcher at the University of Colorado Boulder in the US, explains that being able to measure “the exact shape of the correlation function at the percent precision level” is new, and a distinguishing feature of their work.
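
The behaviour described here matches the textbook pair-correlation function of an ideal, spin-polarized two-dimensional Fermi gas, g2(r) = 1 − [2J1(kFr)/(kFr)]². The short check below uses that standard ideal-gas expression as context; it is not the fitted form reported in the paper.

```python
from scipy.special import j1  # Bessel function of the first kind, order 1

def g2_ideal_2d_fermi(r, k_fermi):
    """Pair correlation of same-spin fermions in an ideal 2D Fermi gas."""
    x = k_fermi * r
    return 1.0 - (2.0 * j1(x) / x) ** 2

k_F = 1.0  # Fermi wavevector, arbitrary units
print(g2_ideal_2d_fermi(0.01, k_F))   # ~0: the "Fermi hole" at short distance
print(g2_ideal_2d_fermi(50.0, k_F))   # ~1: correlations wash out at large separation
```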

The KBL team’s measurement also provides both two-body and three-body correlation functions for the atoms, making it possible to compare them directly. In principle, the technique could even be extended to correlations of arbitrarily high order.

What about a Bose gas?

Meanwhile, researchers directed by Wolfgang Ketterle of the Massachusetts Institute of Technology (MIT) developed and applied quantum gas microscopy to study how bosons bunch together. Unlike fermions, bosons do not obey the Pauli exclusion principle. In fact, if the temperature is low enough, they can enter a phase known as a Bose-Einstein condensate (BEC) in which their de Broglie wavelengths overlap and they occupy the same quantum state.

By confining a dilute bosonic gas of approximately 100 rubidium atoms in a sheet trap and cooling them to just above the critical temperature (Tc) for the onset of BEC, Ketterle and colleagues were able to make the first in situ measurement of the correlation length in a two-dimensional ultracold bosonic gas.  In contrast to Yefsah’s group, Ketterle and colleagues employed polarization cooling to detect the atoms’ positions. They also focused on a different correlation function; specifically, the second-order correlation function of bosonic bunching at T>Tc.

When the system’s temperature is high enough (54 nK above absolute zero, in this experiment), the correlation function is nearly 1, meaning that the atoms’ thermal de Broglie waves are too short to “notice” each other. But when the sample is cooled to a lower temperature of 6.4 nK, the thermal de Broglie wavelength becomes commensurate with the interparticle spacing r, and the correlation function exhibits the bunching behaviour expected for bosons in this regime, decreasing from its maximum value at r = 0 down to 1 as the interparticle spacing increases.
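
The length scale at work is the thermal de Broglie wavelength, λ = h/√(2πmkBT) for an ideal gas; a quick evaluation for rubidium-87 at the two temperatures quoted shows why cooling matters (the formula and constants are textbook values, not taken from the paper):

```python
import math

h = 6.626e-34              # Planck constant, J s
k_B = 1.381e-23            # Boltzmann constant, J/K
m_Rb87 = 87 * 1.6605e-27   # approximate mass of a rubidium-87 atom, kg

def thermal_de_broglie_wavelength(T):
    """Thermal de Broglie wavelength of an ideal gas at temperature T (kelvin)."""
    return h / math.sqrt(2 * math.pi * m_Rb87 * k_B * T)

for T in (54e-9, 6.4e-9):  # the two temperatures quoted in the article
    print(f"T = {T * 1e9:.1f} nK: lambda ~ {thermal_de_broglie_wavelength(T) * 1e6:.1f} micron")
# The matter waves grow as the gas cools, becoming comparable to the spacing
# between atoms at 6.4 nK, which is when the bunching signal appears.
```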

In an ideal system, the maximum value of the correlation function would be 2. However, in this experiment, the spatial resolution of the grid and the quasi-two-dimensional nature of the trapped gas reduce the maximum to 1.3. Enid Cruz Colón, a PhD student in Ketterle’s group, explains that the experiment is sensitive to parity projection, meaning that only the parity (even or odd) of the number of atoms on each site is detected. This implies that doubly occupied sites are registered as empty sites, which directly shrinks the measured value of g2.

What does an interacting quantum gas look like in real space?

With Yefsah and colleagues focusing on fermionic correlations, and Ketterle’s group focusing on bosons, a third team led by MIT’s Martin Zwierlein found its niche by studying mixtures of bosons and fermions. Specifically, the team measured the pair correlation function for a mixture of a thermal Bose gas composed of sodium-23 (23Na) atoms and a degenerate Fermi gas of 6Li. As expected, they found that the probability of finding two particles together is enhanced for bosons and diminished for fermions.

In a further experiment, Zwierlein and colleagues studied a strongly interacting Fermi gas and measured its density-density correlation function. By increasing the strength of the interactions, they caused the atoms in this gas to pair up, triggering a transition into the BCS (Bardeen-Cooper-Schrieffer) regime associated with paired electrons in superconductors. For atoms in a BEC, the density-density correlation function shows a strong bunching tendency at short distances; in the BCS regime, in contrast, the correlation depicts a long-range pairing where atoms form so-called Cooper pairs as the strength of their interactions increases.

By applying the new quantum gas microscopy technique to the study of strongly interacting Fermi gases, Ruixiao Yao, a PhD student in Zwierlein’s group and the paper’s first author, notes that they have opened the door to applications in quantum simulation. Such strongly correlated systems, Yao highlights, are especially difficult to simulate on classical computers.

The three teams describe their work in separate papers in Physical Review Letters.

Diversity in UK astronomy and geophysics declining, finds survey

Women and ethnic-minority groups are still significantly underrepresented in UK astronomy and geophysics, with the fields becoming more white. That is according to the latest demographic survey conducted by the Royal Astronomical Society (RAS), which concludes that decades of initiatives to improve representation have “failed”.

Based on data collected in 2023, the survey reveals more people working in astronomy and solar-system science than ever before, although the geophysics community has shrunk since 2016. According to university admissions data acquired by the RAS, about 80% of students who started undergraduate astronomy and geophysics courses in 2022 were white, slightly less than the 83% overall proportion of white people in the UK.

However, among permanent astronomy and geophysics staff, 97% of British respondents to the RAS survey are white, up from 95% in 2016. The makeup of postgraduate students was similar, with 92% of British students – who accounted for 70% of postgraduate respondents – stating they are white, up from 87% in 2016.

The survey also finds that the proportion of women in professor, senior lecturer or reader roles increased from 2010 to 2023 in astronomy and solar-system science, but has stagnated at lecturer level in astronomy since 2016 and dropped in “solid Earth” geophysics to 19%. The picture is better at more junior levels, with women making up 28% of postdocs in astronomy and solar-system science and 34% in solid Earth geophysics.

Redoubling efforts

“I very much want to see far more women and people from minority ethnic groups working as astronomers and geophysicists, and we have to redouble our efforts to make that happen,” says Robert Massey, deputy executive director of the RAS, who co-authored the survey and presented its results at the National Astronomy Meeting 2025 in Durham last week.

RAS president Mike Lockwood agrees, stating that effective policies and strategies are now needed. “One only has to look at the history of science and mathematics to understand that talent can, has, and does come from absolutely anywhere in society, and our concern is that astronomy and geophysics in the UK is missing out on some of the best natural talent available to us,” Lockwood adds.

CP violation in baryons is seen for the first time at CERN

The first experimental evidence of the breaking of charge–parity (CP) symmetry in baryons has been obtained by CERN’s LHCb Collaboration. The result is consistent with the Standard Model of particle physics and could lead to constraints on theoretical attempts to extend the Standard Model to explain the excess of matter over antimatter in the universe.

Current models of cosmology say that the Big Bang produced a giant burst of matter and antimatter, the vast majority of which recombined and annihilated shortly afterwards. Today, however, the universe appears to be made almost exclusively of matter, with very little antimatter in evidence. This excess of matter is not explained by the Standard Model, and its existence is an important mystery in physics.

In 1964, James Cronin, Valentine Fitch and colleagues at Princeton University in the US conducted an experiment on the decay of neutral K mesons. This showed that the weak interaction violated CP symmetry, indicating that matter and antimatter could behave differently. Fitch and Cronin bagged the 1980 Nobel Prize for Physics and the Soviet physicist Andrei Sakharov subsequently suggested that, if amplified at very high mass scales in the early universe, CP violation could have induced the matter–antimatter asymmetry shortly after the Big Bang.

Numerous observations of CP violation have subsequently been made in other mesonic systems. The phenomenon is now an accepted part of the Standard Model and is parametrized by the Cabibbo–Kobayashi–Maskawa (CKM) matrix. This describes the various probabilities of quarks of different generations changing into each other through the weak interaction – a process called mixing.

Tiny effect

However, the CP violation produced through the CKM mechanism is a much smaller effect than would have been required to create the matter left over by the Big Bang, as Xueting Yang of China’s Peking University explains.

“The number of baryons remaining divided by the number of photons produced when the baryons and antibaryons met and produced two photons is required to be about 10⁻¹⁰ in Big Bang theory…whereas this kind of quantity is only 10⁻¹⁸ in the Standard Model prediction.”

What is more, CP violation had never been observed in baryons. “Theoretically the prediction for baryon decay is very imprecise,” says Yang, who is a member of the LHCb collaboration. “It’s much more difficult to calculate it than the meson decays because there’s some interaction with the strong force.” Baryons (mostly protons and neutrons) make up almost all the hadronic matter in the universe, so this left open the slight possibility that the explanation might lie in some inconsistency between baryonic CP violation and the Standard Model prediction.

In the new work, Yang and colleagues at LHCb looked at the decays of beauty (or bottom) baryons and antibaryons. These heavy cousins of neutrons contain an up quark, a down quark and a beauty quark and were produced in proton–proton collisions at the Large Hadron Collider in 2011–2018. These baryons and antibaryons can decay via multiple channels. In one, a baryon decays to a proton, a positive K-meson and a pair of pions – or, conversely, an antibaryon decays to an antiproton, a negative K-meson and a pair of pions. CP violation should create an asymmetry between these processes, and the researchers looked for evidence of this asymmetry in the numbers of particles detected at different energies from all the collisions.
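
The quantity being compared is conventionally expressed as a decay-rate asymmetry, ACP = (N − N̄)/(N + N̄), where N counts baryon decays and N̄ antibaryon decays after efficiency corrections. The definition is standard; the counts in the sketch below are placeholders, not LHCb’s measured yields.

```python
def cp_asymmetry(n_baryon_decays, n_antibaryon_decays):
    """Standard CP asymmetry between a decay and its CP-conjugate decay."""
    return (n_baryon_decays - n_antibaryon_decays) / (n_baryon_decays + n_antibaryon_decays)

# Placeholder yields for illustration only, not LHCb's measured values.
print(cp_asymmetry(51_000, 49_000))   # -> 0.02, i.e. a 2% asymmetry
```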

Standard Model prevails

The team found that the CP violation seen was consistent with the Standard Model and inconsistent with zero by 5.2σ. “The experimental result is more precise than what we can get from theory,” says Yang. Other LHCb researchers scrutinized alternative decay channels of the beauty baryon: “Their measurement results are still consistent with CP symmetry…There should be CP violation also in their decay channels, but we don’t have enough statistics to claim that the deviation is more than 5σ.”

The current data do not rule out any extensions to the Standard Model, says Yang, simply because none of those extensions make precise predictions about the overall degree of CP violation expected in baryons. However, the LHC is now in its third run, and the researchers hope to acquire information on, for example, the intermediate particles involved in the decay: “We may be able to provide some measurements that are more comparable for theories and which can provide some constraints on the Standard Model predictions for CP violation,” says Yang.

The research is described in a paper in Nature.

“It’s an important paper – an old type of CP violation in a new system,” says Tom Browder of the University of Hawaii. “Theorists will try to interpret this within the context of the Standard Model, and there have already been some attempts, but there are some uncertainties due to the strong interaction that preclude making a precise test.” He says the results could nevertheless potentially help to constrain extensions of the Standard Model, such as CP violating decays involving dark matter proposed by the late Ann Nelson at the University of Washington in Seattle and her colleagues.

Oak Ridge’s Quantum Science Center takes a multidisciplinary approach to developing quantum materials and technologies

This episode of the Physics World Weekly podcast features Travis Humble, who is director of the Quantum Science Center at Oak Ridge National Laboratory.

Located in the US state of Tennessee, Oak Ridge is run by the US Department of Energy (DOE). The Quantum Science Center links Oak Ridge with other US national labs, universities and companies.

Humble explains how these collaborations ensure that Oak Ridge’s powerful facilities and instruments are used to create new quantum technologies. He also explains how the lab’s expertise in quantum and conventional computing is benefiting the academic and industrial communities.

This podcast is supported by American Elements, the world’s leading manufacturer of engineered and advanced materials. The company’s ability to scale laboratory breakthroughs to industrial production has contributed to many of the most significant technological advancements since 1990 – including LED lighting, smartphones, and electric vehicles.

Leprechauns on tombstones: your favourite physics metaphors revealed

Physics metaphors don’t work, or so I recently claimed. Metaphors always fail; they cut corners in reshaping our perception. But are certain physics metaphors defective simply because they cannot be experimentally confirmed? To illustrate this idea, I mentioned the famous metaphor for how the Higgs field gives particles mass, which is likened to fans mobbing – and slowing – celebrities as they walk across a room.

I know from actual experience that this is false. Having been within metres of filmmaker Spike Lee, composer Stephen Sondheim, and actors Mia Farrow and Denzel Washington, I’ve seen fans have many different reactions to the presence of nearby celebrities in motion. If the image were strictly true, I’d have to check which celebrities were about each morning to know what the hadronic mass would be that day.

I therefore invited Physics World readers to propose other potentially empirically defective physics metaphors, and received dozens of candidates. Technically, many are similes rather than metaphors, but most readers, like me, use the two terms interchangeably. Some of these metaphors/similes were empirically confirmable and others not.

Shoes and socks

Michael Elliott, a retired physics lecturer from Oxford Polytechnic, mentioned a metaphor from Jakob Schwichtenberg’s book No-Nonsense Quantum Mechanics that used shoes and socks to explain the meaning of “commutation”. It makes no difference, Schwichtenberg wrote, if you put your left sock on first and then your right sock; in technical language the two operations are said to commute. However, it does make a difference which order you put your sock and shoe on.

“The ordering of the operations ‘putting shoes on’ and ‘putting socks on’ therefore matters,” Schwichtenberg had written, meaning that “the two operations do not commute.” Empirically verifiable, Elliott concluded triumphantly.

A metaphor that was used back in 1981 by CERN physicist John Bell in a paper addressed to colleagues requires more footgear and imagination. Bell’s friend and colleague Reinhold Bertlmann from the University of Vienna was a physicist who always wore mismatched socks, and in the essay “Bertlmann’s socks and the nature of reality” Bell explained the Einstein–Podolsky–Rosen (EPR) paradox and Bell’s theorem in terms of those socks.

If Bertlmann stepped into a room and an observer noticed that the sock on his first foot was pink, one could be sure the other was not-pink, illustrating the point of the EPR paper. Bell then suggested that, when put in the wash, pairs of socks and washing temperatures could behave analogously to particle pairs and magnet angles in a way that conveyed the significance of his theorem. Bell bolstered this conclusion with a scenario involving correlations between spikes of heart attacks in Lille and Lyon. I am fairly sure, however, that Bell never empirically tested this metaphor, and I wonder what the result would be.

Out in space, the favourite cosmology metaphor of astronomer and astrophysicist Michael Rowan-Robinson is the “standard candle” that’s used to describe astronomical objects of roughly fixed luminosity. Standard candles can be used to determine astronomical distances and are thus part of the “cosmological distance ladder” – Rowan-Robinson’s own metaphor – towards measuring the Hubble constant.

Retired computer programmer Ian Wadham, meanwhile, likes Einstein’s metaphor of being in a windowless spacecraft towed by an invisible being who gives the ship a constant acceleration. “It is impossible for you to tell whether you are standing in a gravitational field or being accelerated,” Wadham writes. Einstein used the metaphor effectively – even though, as an atheist, he was convinced that he would be unable to test it.

I was also intrigued by a comment from Dilwyn Jones, a consultant in materials science and engineering, who cited a metaphor from the 1939 book The Third Policeman by Irish novelist Flann O’Brien. Jones first came across O’Brien’s metaphor in Walter J Moore’s 1962 textbook Physical Chemistry. Atoms, says a character in O’Brien’s novel, are “never standing still or resting but spinning away and darting hither and thither and back again, all the time on the go”, adding that “they are as lively as twenty leprechauns doing a jig on top of a tombstone”.

But as Jones pointed out, that particular metaphor “can only be tested on the Emerald Isle”.

Often metaphors entertain as much as inform. Clare Byrne, who teaches at a high school in St Albans in the UK, tells her students that delocalized electrons are like stray dogs – “hanging around the atoms, but never belonging to any one in particular”. They could, however, she concedes “be easily persuaded to move fast in the direction of a nice steak”.

Giving metaphors legs

I ended my earlier column on metaphors by referring to poet Matthew Arnold’s fastidious correction of a description in his 1849 poem “The Forsaken Merman”. After it was published, a friend pointed out to Arnold his mistaken use of the word “shuttle” rather than “spindle” when describing “the king of the sea’s forlorn wife at her spinning-wheel” as she lets the thing slip in her grief.

The next time the poem was published, Arnold went out of his way to correct this. Poets, evidently, find it imperative to be factual in metaphors, and I wondered, why shouldn’t scientists? The poet Kevin Pennington was outraged by my remark.

“Metaphors in poetry are not the same as metaphors used in science,” he insisted. “Science has one possible meaning for a metaphor. Poetry does not.” Poetic metaphors, he added, are “modal”, having many possible interpretations at the same time – “kinda like particles can be in a superposition”.

I was dubious. “Superposition” suggests that poetic meanings are probabilistic, even arbitrary. But Arnold, I thought, was aiming at something specific when the king’s wife drops the spindle in “The Forsaken Merman”. After all, wouldn’t I be misreading the poem to imagine his wife thinking, “I’m having fun and in my excitement the thing slipped out of my hand!”

My Stony Brook colleague Elyse Graham, who is a professor of English, adapted a metaphor used by her former Yale professor Paul Fry. “A scientific image has four legs”, she said, “a poetic image three”. A scientific metaphor, in other words, is as stable as a four-legged table, structured to evoke a specific, shared understanding between author and reader.

A poetic metaphor, by contrast, is unstable, seeking to evoke a meaning that connects with the reader’s experiences and imagination, which can be different from the author’s within a certain domain of meaning. Graham pointed out, too, that the true metaphor in Arnold’s poem is not really the spinning wheel, the wife and the dropped spindle but the entirety of the poem itself, which is what Arnold used to evoke meaning in the reader.

That’s also the case with O’Brien’s atom-leprechaun metaphor. It shows up in the novel not to educate the reader about atomic theory but to invite a certain impression of the worldview of the science-happy character who speaks it.

The critical point

In his 2024 book Waves in an Impossible Sea: How Everyday Life Emerges from the Cosmic Ocean, physicist Matt Strassler coined the term “physics fib” or “phib”. It refers to an attempted “short, memorable tale” that a physicist tells an interested non-physicist that amounts to “a compromise between giving no answer at all and giving a correct but incomprehensible one”.

The criterion for whether a metaphor succeeds or fails does not depend on whether it can pass empirical test, but on the interaction between speaker or author and audience; how much the former has to compromise depends on the audience’s interest in and understanding of the subject. Metaphors are interactions. Byrne was addressing high-school students; Schwichtenberg was aiming at interested non-physicists; Bell was speaking to physics experts. Their effectiveness, to use one final metaphor, does not depend on empirical grounding but on impedance matching; that is, they step down the “load” so that the “signal” will not be lost.

How to keep the second law of thermodynamics from limiting clock precision

The second law of thermodynamics demands that if we want to make a clock more precise – thereby reducing the disorder, or entropy, in the system – we must add energy to it. Any increase in energy, however, necessarily increases the amount of waste heat the clock dissipates to its surroundings. Hence, the more precise the clock, the more the entropy of the universe increases – and the tighter the ultimate limits on the clock’s precision become.

This constraint might sound unavoidable – but is it? According to physicists at TU Wien in Austria, Chalmers University of Technology, Sweden, and the University of Malta, it is in fact possible to turn this seemingly inevitable consequence on its head for certain carefully designed quantum systems. The result: an exponential increase in clock accuracy without a corresponding increase in energy.

Solving a timekeeping conundrum

Accurate timekeeping is of great practical importance in areas ranging from navigation to communication and computation. Recent technological advancements have brought clocks to astonishing levels of precision. However, theorist Florian Meier of TU Wien notes that these gains have come at a cost.

“It turns out that the more precisely one wants to keep time, the more energy the clock requires to run to suppress thermal noise and other fluctuations that negatively affect the clock,” says Meier, who co-led the new study with his TU Wien colleague Marcus Huber and a Chalmers experimentalist, Simone Gasparinetti. “In many classical examples, the clock’s precision is linearly related to the energy the clock dissipates, meaning a clock twice as accurate would produce twice the (entropy) dissipation.”

Clock’s precision can grow exponentially faster than the entropy

The key to circumventing this constraint, Meier continues, lies in one of the knottiest aspects of quantum theory: the role of observation. For a clock to tell the time, he explains, its ticks must be continually observed. It is this observation process that causes the increase in entropy. Logically, therefore, making fewer observations ought to reduce the degree of increase – and that’s exactly what the team showed.

“In our new work, we found that with quantum systems, if designed in the right way, this dissipation can be circumvented, ultimately allowing exponentially higher clock precision with the same dissipation,” Meier says. “We developed a model that, instead of using a classical clock hand to show the time, makes use of a quantum particle coherently travelling around a ring structure without being observed. Only once it completes a full revolution around the ring is the particle measured, creating an observable ‘tick’ of the clock.”

The clock’s precision can thus be improved by letting the particle travel through a longer ring, Meier adds. “This would not create more entropy because the particle is still only measured once every cycle,” he tells Physics World. “The mathematics here is of course much more involved, but what emerges is that, in the quantum case, the clock’s precision can grow exponentially faster than the entropy. In the classical analogue, in contrast, this relationship is linear.”
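
A purely schematic comparison of the two scalings described here; the functional forms and constants are illustrative assumptions, not expressions derived in the paper.

```python
import math

def precision_classical(entropy_per_tick, k=1.0):
    """Schematic: classical clock precision grows linearly with dissipated entropy."""
    return k * entropy_per_tick

def precision_quantum_ring(entropy_per_tick, c=1.0):
    """Schematic: the ring clock's precision grows exponentially with the same entropy."""
    return math.exp(c * entropy_per_tick)

for S in (1.0, 5.0, 10.0):
    print(S, precision_classical(S), precision_quantum_ring(S))
# The gap widens rapidly: the same entropy budget buys far more precision
# in the quantum case, which is the advantage the authors describe.
```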

“Within reach of our technology”

Although such a clock has not yet been realized in the laboratory, Gasparinetti says it could be made by arranging many superconducting quantum bits in a line.

“My group is an experimental group that studies superconducting circuits, and we have been working towards implementing autonomous quantum clocks in our platform,” he says. “We have expertise in all the building blocks that are needed to build the type of clock proposed in this work: generating quasithermal fields in microwave waveguides and coupling them to superconducting qubits; detecting single microwave photons (the clock ‘ticks’); and building arrays of superconducting resonators that could be used to form the ‘ring’ that gives the proposed clock its exponential boost.”

While Gasparinetti acknowledges that demonstrating this advantage experimentally will be a challenge, he isn’t daunted. “We believe it is within reach of our technology,” he says.

Solving a future problem

At present, dissipation is not the main limiting factor when it comes to the performance of state-of-the-art clocks. As clock technology continues to advance, however, Meier says we are approaching a point where dissipation could become more significant. “A useful analogy here is in classical computing,” he explains. “For many years, heat dissipation was considered negligible, but in today’s data centres that process vast amounts of information, dissipation has become a major practical concern.

“In a similar way, we anticipate that for certain applications of high-precision clocks, dissipation will eventually impose limits,” he adds. “Our clock highlights some fundamental physical principles that can help minimize such dissipation when that time comes.”

The clock design is detailed in Nature Physics.
