
A new generation takes on the cosmological constant

The cosmological constant has been a thorn in the side of physicists for decades. Even though its purpose in modern cosmology differs from its original role, the constant – commonly represented by Λ – still presents a challenge for models designed to explain the expansion of the universe.

Simply put, Λ describes the energy density of empty space. One of the main issues stems from the fact that Λ’s theoretical value, obtained through quantum field theory (QFT), is nowhere near the value obtained from the study of type Ia supernovae and the cosmic microwave background radiation (CMB) – in fact, the two differ by a factor of as much as 10¹²¹. It is therefore little wonder that cosmologists are eager to tackle this disparity.
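
For readers who want to see where that headline number comes from, here is a minimal back-of-the-envelope sketch in Python. It assumes H0 = 70 km/s/Mpc and a dark-energy fraction of 0.7, and takes the Planck scale as the QFT cutoff; the exact power of ten quoted in the literature varies with these conventions.

```python
# Rough check of the mismatch: the naive QFT vacuum energy density (with a
# Planck-scale cutoff) versus the dark-energy density inferred from
# observations. H0 and the 0.7 dark-energy fraction are illustrative.
from scipy.constants import c, G, hbar, pi

# Naive QFT estimate: Planck energy per Planck volume, in J/m^3
rho_planck = c**7 / (hbar * G**2)

# Observed estimate: ~70% of the critical energy density of the universe
H0 = 70e3 / 3.086e22                         # Hubble constant in 1/s
rho_crit = 3 * H0**2 * c**2 / (8 * pi * G)   # critical energy density, J/m^3
rho_lambda = 0.7 * rho_crit

print(f"QFT estimate:      {rho_planck:.2e} J/m^3")
print(f"Observed estimate: {rho_lambda:.2e} J/m^3")
print(f"Mismatch:          {rho_planck / rho_lambda:.1e}")
# The mismatch comes out near 10^122-10^123; quoted values of ~10^120-10^121
# differ only in the choice of cutoff and numerical conventions.
```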


“The cosmological constant problem, in one form or another, is a century-old puzzle. It is one of the biggest problems in modern physics,” says theoretical physicist Lucas Lombriser from the University of Geneva (UNIGE), Switzerland. “Moreover, the cosmological constant is the most dominant component in our universe. It makes up 70% of the current energy budget. How could one not want to figure out what it really is?”

Indeed, with a new generation of cosmologists now on the scene, there are some rather radical ideas and revisions of older theories. But can the field accept these revolutionary ideas, or has Λ become a comfortably familiar burden?

Still crazy after all these years

The cosmological constant was first introduced to models of the universe by Albert Einstein in 1917. To the physicist’s own surprise, his general theory of relativity (GR) seemed to suggest that the universe is contracting, thanks to the effects of gravity. The consensus at the time was that the universe is static and, despite having already revolutionized several long-held ideas, Einstein was unwilling to challenge this particular paradigm. This desire to preserve the stability of the universe led Einstein to make an addition to GR’s equations. Later, he would infamously describe this as his “biggest blunder”.

“When Einstein was applying GR to cosmology, he realized he could add a constant to his equations and they would still be valid,” explains Peter Garnavich, a cosmologist at the University of Notre Dame in Indiana, US. “This ‘cosmological constant’ could be viewed in two equivalent ways: as a curvature of space–time that was just a natural aspect of the universe; or as a fixed energy density throughout the universe.”

Thus, the initial role of Λ was to counterbalance the effects of gravity and help ensure a steady-state universe that is neither expanding nor contracting. This role, however, became obsolete following Edwin Hubble’s discovery in 1929 that the universe is expanding. When Einstein was finally convinced of this, Λ was consigned to the cosmic dustbin. Yet, like the proverbial bad penny, it would resurface in a different form decades later.

Whereas once the cosmological constant was used to balance the universe against expansion, in modern cosmology Λ represents vacuum energy – the inherent energy density of empty space – that no longer just balances gravity, but overwhelms it. That doesn’t mean Λ has become any less problematic, though. “In 1998 the High-Z Supernova Search team discovered that the expansion rate was accelerating instead of decelerating,” says Garnavich, who took part in the research using type Ia supernovae to study the expansion of the universe. This acceleration requires some form of additional energy throughout the universe, or some more exotic explanation. The driving force is referred to as “dark energy”, and the term has become a placeholder for the various theoretical entities that could account for the accelerating expansion. Suspects range from vacuum energy – currently the most favoured model – to quantum fields, and even fields of time-travelling tachyons: hypothetical particles that travel faster than light.

The cosmological constant serves as the simplest possible explanation for the dark energy that drives this accelerating expansion, and its theoretical value should therefore match observations. Unfortunately, as mentioned, the former is greater than the latter by some 120 orders of magnitude. Clearly, Λ’s reputation as “the worst prediction in the history of physics” isn’t mere hyperbole.

Getting a head start on the problem

The role of dark energy in the early universe has been on the mind of Luz Ángela García, a physicist and astronomer at Universidad ECCI, Bogotá, Colombia. Together with her collaborators Leonardo Castañeda and Juan Manuel Tejeiro from Observatorio Astronómico Nacional, Universidad Nacional de Colombia, García is putting forward an “early dark energy” (EDE) model as a potential solution to the cosmological constant problem (New Astronomy 84 101503).

The radical element of the team’s proposal is the idea that cosmological models might not need the cosmological constant at all. Of course, there is still that accelerating expansion to consider, so to account for this, García looks to other sources. “When I first approached this field, I came across the inconsistency with the values predicted from both cosmology and high-energy physics, and tried to formulate an alternative model to Λ by studying possible candidates to explain the accelerated expansion of the universe,” she says.

Λ, as it is currently considered, only accounts for the universe’s expansion once matter began to form structure – an era that lasted from 47,000 years to 9.8 billion years after the Big Bang. García wanted to consider a form of dark energy that began to play a role in the earlier, “radiation-dominated” epoch, from the earliest moments of “cosmic inflation”. Inflation – the sudden and very rapid expansion of the early universe – is thought to have taken place some 10⁻³⁶ seconds after the Big Bang, but that burst of expansion is believed to have been driven by quantum fluctuations, not dark energy. Eventually, the attractive force of gravity slowed this expansion, until about 9.8 billion years into the universe’s history, when dark energy began accelerating its expansion once again (figure 1). García and colleagues, however, describe this dark energy as an entity that could have been present in both the radiation-dominated and matter-dominated epochs as a “non-interacting perfect fluid” that evolved with the universe’s other components.

figure 1

“The model’s strengths are the following: first, it provides a compelling description of the universe’s accelerated expansion during its current epoch, beginning around four billion years ago,” García explains. “Second, our formulation allows for evolution with redshift instead of the cosmological constant, in which energy density does not change over time.” This could explain why the theoretical value suggested by QFT is larger than the value given by the redshifts of distant supernovae – the value has evolved over time.
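
To illustrate what “evolution with redshift” means in practice, the sketch below uses the generic CPL parametrization w(z) = w0 + wa·z/(1+z) with illustrative parameter values – not the team’s actual EDE equations, which are in their paper. Any equation of state that departs from w = −1 makes the dark-energy density a function of cosmic time, whereas a true cosmological constant keeps it fixed.

```python
# Evolving dark energy versus a true cosmological constant, using the
# generic CPL parametrization w(z) = w0 + wa*z/(1+z). Parameter values
# are illustrative only.
import numpy as np

def rho_de(z, w0=-0.9, wa=0.3, rho0=1.0):
    """Dark-energy density (in units of today's value) for CPL w(z)."""
    return rho0 * (1 + z)**(3 * (1 + w0 + wa)) * np.exp(-3 * wa * z / (1 + z))

for z in [0, 0.5, 1, 3, 10]:
    print(f"z = {z:4}:  evolving rho = {rho_de(z):6.3f},  constant Lambda = 1.000")
# A cosmological constant (w = -1 exactly) keeps rho fixed at all redshifts;
# any departure from w = -1 makes the density evolve with cosmic time.
```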

García identifies a further strength of her EDE model, which is that it offers several predictions that match up well with practical measurements and high-resolution data concerning various stages of the universe’s evolution. The result is a theoretical picture that matches the matter/energy ratio we observe in the current dark-energy-dominated epoch of our universe, in which the accelerating component dominates. “Of course, we could use both the cosmological constant and our EDE, but it makes the description unnecessarily complicated, and there is not a physical justification for that,” says García. “We only need one component to describe the accelerated expansion of the universe today.”

If the decision by García and her collaborators to eliminate the cosmological constant – or set it to zero – seems somewhat arbitrary, she points out that there is almost an “arbitrariness” inherent in the introduction of the constant in the first place. “There is no fundamental reason to take for granted that dark energy has to manifest as the cosmological constant,” she remarks. “We have not detected any form of dark energy nor the cosmological constant; therefore, any form of dark energy is valid until the data confirm or refute its existence.”

The EDE that García suggests isn’t perfect. Indeed, it comes with elements that the wider scientific community may be reluctant to adopt. But she doesn’t shy away from pointing out the potential flaws in her own ideas. “There are two issues that the community could find troubling,” García admits. “On the one hand, more complex models imply a broader set of free parameters. It is not something we desire for our formulations, because those parameters might not have a direct physical interpretation. In that sense, the cosmological constant is an advantageous model, because it has a minimal number of free parameters, all of them constrained with current observations.”


The second thing that García admits may give the community pause is that the model has yet to be tested against many observational probes. “We have been revising and looking for more sets of observational data to validate our models. Hence, we are creating a bridge between theory and observational cosmology.”

The “well-tempered” cosmological constant

Forcing the cosmological constant to take a value of zero may lead the curious cosmologist to consider what happens if we do the opposite. In other words, what would happen if we allowed it to take an arbitrarily large value, similar to the value predicted by QFT?

Stephen Appleby, a cosmologist at the Asia Pacific Center for Theoretical Physics in Pohang, Republic of Korea, takes this approach to tackle the problem. He starts by assuming that the prediction given by QFT is correct, allowing Λ to take on the immensely large value it predicts (Journal of Cosmology and Astroparticle Physics 2018 034). “Using modern cosmological observations from type Ia supernovae and the CMB, we can measure the total energy density of the universe, including the vacuum energy,” Appleby explains. “The value obtained from these measurements is tiny compared to particle-physics scales.”

This is because, according to QFT, every particle in the universe should contribute to vacuum energy, thus exerting a negative pressure that is driving the expansion of the universe. The problem is that given the estimated number of particles in the universe, as well as the virtual particle pairs that pop in and out of existence in empty space, vacuum energy should be accelerating the expansion much faster than astronomers see in the redshifts of supernovae (figure 2).

figure 2

QFT says the value of this contribution is set by the masses of the particles, which are well known, so this aspect of the theory is not in doubt. As an example of the radical difference between the contribution that QFT says particles should make to the vacuum energy and the value that we actually observe, Appleby cites the electron and the Higgs boson. Based on their masses, the contributions made solely by these particles to the vacuum energy of the universe should be roughly 40–60 orders of magnitude greater than our astronomical measurements suggest.
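
A hedged sketch of that estimate: dimensional analysis assigns each particle species a vacuum-energy contribution of roughly (mc²)⁴/(ħc)³, ignoring loop factors and signs. The naive numbers below land in the same ballpark as the orders of magnitude quoted above; the precise figures depend on conventions, and the observed dark-energy density used for comparison is approximate.

```python
# Naive vacuum-energy contribution of a single particle species,
# rho ~ (m c^2)^4 / (hbar c)^3, compared with the approximate observed
# dark-energy density. Loop factors and signs are deliberately ignored.
from scipy.constants import hbar, c, eV
import math

RHO_OBSERVED = 6e-10  # J/m^3, approximate observed dark-energy density

def vacuum_contribution(mass_energy_eV):
    """Naive vacuum energy density (J/m^3) for a particle of given mc^2."""
    E = mass_energy_eV * eV
    return E**4 / (hbar * c)**3

for name, mc2 in [("electron", 0.511e6), ("Higgs boson", 125e9)]:
    rho = vacuum_contribution(mc2)
    orders = math.log10(rho / RHO_OBSERVED)
    print(f"{name:12s}: {rho:.2e} J/m^3, ~{orders:.0f} orders above observed")
```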

Assuming that the value provided by QFT is correct, Appleby and his collaborator Eric Linder from the University of California, Berkeley, have to explain why the observed value is so diminutive. They do this by refining the idea of gravity itself. “We asked the question: can we construct a theory of gravity that possesses low energy vacuum states, via lower particle contributions, despite the large cosmological constant?” explains Appleby. “Our analysis shows that such a theory can be constructed, but only by introducing additional gravitational fields to models of the universe.”

Appleby and Linder have constructed a general class of gravitational models, which suggests that vacuum energy is present, but doesn’t affect the curvature of space–time. This results in a space–time that looks like our low-energy universe, not one with the huge vacuum energy of QFT. “We pick out particular gravity models with the behaviour that we are searching for,” he continues. “Vacuum energy is present in our approach, but it does not affect the curvature of space–time. It does gravitate, but its effect is purely felt by the new gravitational field that we have introduced. In this approach, the cosmological constant problem becomes moot because it can take any value, but its effect is not felt directly.”

The strength of the model – which the duo label “the well-tempered cosmological constant” – is that no energy scales have to be fine-tuned within it. As the vacuum energy in their models doesn’t impact the curvature of space–time, the individual contributions of particles would not influence the redshift of supernovae, thereby doing away with the observational disparity. The vacuum energy in their model can therefore take whatever value QFT and particle physics predict, without conflicting with observed values from astronomy. This energy can even change due to a phase transition.

Despite this utility, Appleby, like García, accepts that the model he and Linder proposed isn’t perfect and needs to be refined. “The main issue with our work is that we have to introduce new gravitational fields, which have not yet been observed, and the kinetic energy and potential of these additional fields must take a very particular form,” he says. “It is an open question whether such a field can be embedded in some more fundamental quantum gravity model.”

Appleby also points out that his model requires a revision of GR, which is a hugely successful theory of gravity. Indeed, GR is supported by a wealth of experimental evidence both here on Earth and beyond the limits of the Milky Way. “When you modify gravity in some way, you have to show that this new theory can also pass the same stringent observational tests that GR has,” Appleby concedes. “This is a difficult hurdle for any gravity model to overcome, and we must perform these checks in the future.”

Tuning in to the cosmological constant problem

Seeking to adjust theories of gravity to account for the cosmological constant problem is also an approach that has been considered by Lombriser over in Geneva. “My research in this area started out with investigating modifications to Einstein’s theory of GR as an alternative driver of the late-time accelerated expansion of our cosmos to the cosmological constant,” explains Lombriser. “In 2015 I realized that for modifications of the theory of gravity to be the direct cause of cosmic acceleration, and not violate cosmological observations, the speed of gravitational waves would have to differ from the speed of light. That did not sound right, and I started to focus on different explanations.”

Lombriser has begun to explore the idea that while modifications to GR or scalar energy fields may not be responsible for directly causing the late-time acceleration, they could instead “tune” the cosmological constant to do so. “I was surprised that I did not even have to modify Einstein’s equations to solve the problem,” says Lombriser. “I simply had to perform an additional variation with respect to a quantity that already appears in the equations – the Planck mass, which represents the strength of the gravitational coupling.”


The variation results in an additional equation, one which constrains Λ to the volume of space–time in the observable universe (Phys. Lett. B 797 134804). It also explains why vacuum energy can’t freely gravitate. Lombriser adds that by evaluating this constraint equation with some minimal assumptions about our place in cosmic history, he and his colleagues can estimate the value that Λ occupies in our current cosmic energy budget. They find this to be about 70%, in agreement with the dark-energy contribution suggested by observations.

“The model solves both the old and new aspects of the cosmological constant problem,” Lombriser explains. “The old problem of the gravitating vacuum energy, and the new problem of the cosmic acceleration with a small cosmological constant that results in this strange coincidence of us happening to live at a time where the energy density is comparable to that of the cosmological constant. A clear strength of the model is its simplicity.”

Lombriser also accepts there are elements to the solution that he puts forward that are flawed or need refinement. In particular, he points to the fact that, due to its similarity to standard theory, the model he suggests may be impossible to falsify. “I think the way forward here is to see whether this new approach can be extended to naturally explain other poorly understood phenomena, such as producing a natural inflationary phase in the early universe,” he says. “Or we can investigate how the self-tuning mechanism appears from fundamental theory interactions. These could give rise to yet unknown phenomena that may be testable in the laboratory.”

The “vanilla” appeal of the cosmological constant

Of course, the three ideas discussed here could all prove to be theoretical dead-ends – a leap too far for researchers who have become accustomed to the mystery of the cosmological constant.

Indeed, Λ could remain a problem for descriptions of the universe and its expansion for decades to come. “This cosmological constant is like vanilla ice cream, it is very good, but kind of boring,” Garnavich concludes. “Removing it will make the house fall down unless there is a better theory to replace it.”

This will likely result in more exciting “flavours” of ideas, theories and models until a satisfactory explanation for the cosmological constant problem is found. When it comes to cosmology, and science in general, there is definitely a benefit to the approach of “nothing ventured, nothing gained”. Einstein himself perfectly captured this ethos: “A person who never made a mistake never tried anything new.”

Squeezed light boosts the search for dark matter axions

The search for axions, a hypothetical type of dark matter, could become far more efficient thanks to the emerging technique of light squeezing, according to researchers in the US. Kelly Backes and colleagues incorporated the technique into Yale University’s HAYSTAC axion experiment, halving the time needed to analyse their data. Their results highlight the promising potential for the widespread adoption of light squeezing by axion experiments worldwide.

Axions are predicted to be chargeless, much lighter than electrons, and produced in abundance after the Big Bang. This makes them a popular candidate for dark matter, a mysterious substance that appears to permeate the universe and affect the gravitational properties of large objects such as galaxies.

Several experiments have recently tried to detect axions using strong magnetic fields, produced in the lab at cryogenic temperatures. The idea is that axions will scatter from quantum fluctuations in these fields, producing photons whose frequencies are proportional to the axion masses. However, the signals produced by these photons are expected to be very weak and require extremely low levels of noise to detect. This has presented researchers with a fundamental limit to the accuracy of their measurements.
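
The mass–frequency relation at the heart of these searches is just E = hν with E = m_a c². A short sketch, using illustrative axion masses in the micro-electronvolt range typical of haloscope searches:

```python
# Photon frequency produced by an axion of a given mass, via E = h*nu with
# E = m_a * c^2. The masses below are illustrative values only.
from scipy.constants import h, eV

def axion_photon_frequency_GHz(mass_ueV):
    """Photon frequency (GHz) for an axion of mass m_a given in micro-eV."""
    return mass_ueV * 1e-6 * eV / h / 1e9

for m in [10, 20, 40]:  # micro-eV
    print(f"m_a = {m} ueV  ->  nu = {axion_photon_frequency_GHz(m):.2f} GHz")
# A few-GHz microwave photon per tens-of-micro-eV axion is why these
# experiments use cryogenic microwave cavities.
```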


The problem stems from Heisenberg’s uncertainty principle, which describes an inescapable trade-off in accuracy when measuring the positions and momenta of quantum particles such as photons. This “quantum noise” presents a significant barrier to experiments aiming to verify existing dark matter theories, which encompass a wide range of potential axion masses. Ultimately, identifying individual photons, whose frequencies must deviate from quantum fluctuations to highly specific degrees, is like finding a single quantum needle in an enormous haystack.

In their study, the team based at Yale, the University of Colorado, NIST and the University of California Berkeley pushed the HAYSTAC experiment to reach beyond this limit using the latest advances in light squeezing technology. This technique works by reducing the uncertainty of one component (position or momentum) to beyond the usual quantum limit at the expense of increasing the uncertainty in the other component. With the right approach, further knowledge can be gained about the component of interest without losing too much information about the other component.
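
A minimal numerical illustration of that trade-off, using the textbook quantum-optics result that a squeezed vacuum state with squeezing parameter r has quadrature variances e^(∓2r)/2 – this is generic physics, not the HAYSTAC team’s specific implementation:

```python
# How squeezing trades uncertainty between the two quadratures of light.
# For a squeezed vacuum state with squeezing parameter r, the variances are
# Var(X) = e^(-2r)/2 and Var(P) = e^(+2r)/2 (hbar = 1 convention), so their
# product still saturates the Heisenberg bound of 1/4.
import math

for r in [0.0, 0.5, 1.0]:
    var_x = math.exp(-2 * r) / 2   # squeezed quadrature: below vacuum noise
    var_p = math.exp(+2 * r) / 2   # anti-squeezed quadrature: above it
    db = -10 * math.log10(var_x / 0.5)  # noise reduction in decibels
    print(f"r = {r}: Var(X) = {var_x:.3f}, Var(P) = {var_p:.3f}, "
          f"product = {var_x * var_p:.2f}, squeezing = {db:.1f} dB")
```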

The researchers tested this principle by using HAYSTAC to search for axions with masses predicted by two particular theories. Previously, analysis of the resulting data would have taken around 200 days to complete. In this case, however, light squeezing enabled Backes and colleagues to handle the entire dataset in just 100 days – though unfortunately they found no clear evidence for axions.

This improvement was relatively modest, but the researchers note that light-squeezing technology is still in its early stages. Through further improvements, experimental tests of axion theories could become far more efficient still. Backes’ team now hope that their manipulation of the uncertainty principle will soon be extended to a wide variety of experiments like HAYSTAC, potentially bringing a long-awaited explanation of dark matter a step closer to reality.

The research is described in Nature.

Archaeomagnetism, DNA materials and cosmic questions: the March 2021 issue of Physics World is now out


The idea that a record of the Earth’s magnetic past might be stored in objects made from fired clay dates back to the 16th century when William Gilbert, physician to Queen Elizabeth I, wondered if the Earth is a giant bar magnet and whether clay bricks possess a magnetic memory.

He was right and Gilbert’s far-sighted notion now forms the basis of a well-established method for dating archaeological sites that contain kilns, hearths, ovens or furnaces.

Known as “archaeomagnetism”, this field of research is helping geophysicists gain insights into local changes in the Earth’s magnetic field over the past 3000 years, and – as Rachel Brazil explains in the March 2021 issue of Physics World – how it might change in future.

If you’re a member of the Institute of Physics, you can read the whole of Physics World magazine every month via our digital apps for iOS, Android and Web browsers. Let us know what you think about the issue on Twitter, Facebook or by e-mailing us at pwld@ioppublishing.org.

For the record, here’s a run-down of what else is in the issue.

• China detector hints at new physics – The PandaX-II dark-matter experiment has confirmed previous signs of exotic particles but further evidence will be needed, as Edwin Cartlidge reports

• UAE Hope probe reaches Mars orbit – The United Arab Emirates has become the first Arab country to reach another planet, a feat that it hopes will turbocharge its science base, as James Dacey reports

• Concerns raised as Oxford renames physics chair – The University of Oxford has announced that its Wykeham chair of physics will be renamed after the giant Chinese technology corporation Tencent, which denies claims it has links with the Chinese security services. Michael Allen reports

• Widening career aspirations – As children narrow down their career interests from an early age, Carol Davenport says it is important that they are brought up with a positive attitude towards science

• Supporting science in difficult times – With COVID-19 fostering anti-science conspiracies, Caitlin Duffy says that scientists have a duty to speak up and challenge misinformation

• Grounds for optimism – The solution to climate change could be lying beneath our feet. James McKenzie examines the potential of pumps that warm our homes and offices by extracting heat from the ground below

• Beneath the rotunda – Robert P Crease reflects on the US Capitol’s invasion from a unique perspective

• Digging up magnetic clues – Analysing magnetic information stored in ancient artefacts is revealing the recent history of the Earth’s magnetic field and providing clues to the changes we might expect in the future. Rachel Brazil explains

• A new generation takes on the cosmological constant – The long-standing problem of the cosmological constant, described both as “the worst prediction in the history of physics” and by Einstein as his “biggest blunder”, is being tackled with renewed vigour by today’s cosmologists. Rob Lea investigates

• Make or break: building soft materials with DNA – DNA molecules are not fixed objects, they are constantly getting broken up and glued back together to adopt new shapes. Davide Michieletto explains how this process can be harnessed to create a new generation of “topologically active” material

• Hunt for the superheavies – Hamish Johnston reviews Superheavy: Making and Breaking the Periodic Table by Kit Chapman

• Strolling in the deep – Ian Randall reviews The Brilliant Abyss: True Tales of Exploring the Deep Sea, Discovering Hidden Life and Selling the Seabed by Helen Scales

• Rethinking nuclear for a greener planet – Troels Schönfeldt, co-founder and chief executive of Danish start-up Seaborg Technologies, talks to Julianna Photopoulos about his career in nuclear and particle physics – and how he unintentionally became an “impact entrepreneur”

• Fine structure and black holes – Sidney Perkowitz pays tribute to new research on these astronomical marvels

Fine structure and black holes

Black holes remain a fascinating idea in popular physics while inspiring high-level research. Indeed, the 2020 Nobel Prize for Physics honoured theoretical work on black holes carried out by Roger Penrose and observational results obtained by Andrea Ghez and Reinhard Genzel. In the 1990s, both Ghez and Genzel independently analysed the motion of stars near our galactic centre, some 27,000 light-years away from Earth. They concluded that a supermassive black hole (SMBH) resides there and holds 4 million times the mass of our Sun. Apart from finding unambiguous evidence of its existence, the discovery carries a bonus – the black hole’s extreme gravitational effects provide a new way for physicists to explore α, the fine structure constant.

Physical theories rely on essential constants such as c, e, ħ and the gravitational constant G, but some physicists argue that unitless constants are more fundamental because they are invariant in any system of measurement. The fine structure constant α – defined as e²/(4πε₀ħc), where ε₀ is the permittivity of free space – is one such pure number, equal to 0.0072973525693. It appeared in 1916 when Arnold Sommerfeld added relativity to quantum mechanics and calculated a better agreement with the observed fine features of the hydrogen spectrum. α showed up again in the 1920s in Paul Dirac’s relativistic quantum ideas, and in astronomer Arthur Eddington’s theory of the universe. Eddington predicted that 1/α would be an integer but failed to make a convincing case (the latest value, 137.036, is only nearly a whole number).
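
Anyone with Python installed can verify this number directly from the definition above, using the CODATA values of the constants:

```python
# The fine structure constant from its definition, alpha = e^2/(4*pi*eps0*hbar*c),
# using CODATA constants as packaged in scipy.
from scipy.constants import e, epsilon_0, hbar, c, pi

alpha = e**2 / (4 * pi * epsilon_0 * hbar * c)
print(f"alpha   = {alpha:.13f}")   # 0.0072973525...
print(f"1/alpha = {1/alpha:.3f}")  # 137.036 -- close to, but not, an integer
```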

Instead, α acquired deeper meaning within quantum electrodynamics (QED), the theory that won Richard Feynman and two others a Nobel prize in 1965. Now α is understood as determining how strongly electrons and photons couple. It is a key to the electromagnetic force, which – along with gravity, and the strong and weak nuclear forces – controls the universe. Within multiverse models proposing that our particular universe is especially tuned to support life, α may be additionally significant because a small change in its value would affect the conditions for life to form.


In 1937 Dirac asked if the “constants” are really constant when he speculated that α and G have changed as the universe has aged. Such changes in the constants of nature could alter the Standard Model of particle physics, and general relativity, as well as modify our understanding of the history of the universe. Following Dirac’s suggestion, various researchers have searched for changes in the constants, especially c and α.

Since 1999 astrophysicist John Webb at the University of New South Wales, Australia, has sought changes in α over cosmic time. He examined light from astronomically distant sources after it traversed interstellar dust clouds, which imprint spectral absorption lines from the atoms in the clouds onto the light. Analysing these wavelengths gives the value of α at the remote location and therefore in a younger universe, as determined by the time lag due to the finite speed of light. Webb’s early data showed an extremely small increase over the last 6 billion years. But in 2020 he interpreted new results from 13 billion years ago, when the universe was only 0.8 billion years old, as “consistent with no temporal change”. Webb, however, obtained a bigger change, 4 × 10⁻⁵ relative to the value on Earth, in measurements made in the strong gravity around a white dwarf star.

There are theoretical reasons why α should depend on gravity. Also in 2020, general relativity theorist Aurélien Hees of the Paris Observatory, along with 13 international co-authors including Ghez, used her data to measure the effect of the black hole’s gravity on α (Phys. Rev. Lett. 124 081101).

This is the first measurement of α near an SMBH, and the work shows that this approach can more fully examine the connection between α and gravity. Ghez established the presence of the SMBH by plotting the observed paths of stars that orbit the galactic centre. These paths occurred within the gravitational field from the presumed black hole but were distant enough to form ellipses according to Newtonian mechanics; general relativity was not required. Then a comparatively straightforward analysis yielded the value of 4 million solar masses at an elliptical focus that held the stars in orbit.

To measure α at high gravity, the researchers chose five stars that come near the SMBH and also have stellar atmospheres with strong spectral absorption lines. Wavelength analysis then gave the value of α at those locales, with small measured deviations of 1 × 10⁻⁵ or less from the Earthly value. Still, the data already yield new insight by supporting the prediction that the change in α is proportional to the gravitational potential. However, the measurement uncertainties are too large to yield a definitive value for the proportionality constant, which according to Hees et al. would help distinguish among different theories that incorporate dark matter and dark energy.
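
A sketch of the logic of such a test: if Δα/α = k·ΔΦ/c², then a measured (or bounded) fractional shift at a known gravitational potential constrains k. The star–black hole distance below is an assumed, illustrative value, not a figure from the Hees et al. paper.

```python
# If delta(alpha)/alpha = k * dPhi/c^2, a measured shift at a known
# gravitational potential constrains the coupling k. The stellar distance
# here is illustrative only.
from scipy.constants import G, c

M_SMBH = 4e6 * 1.989e30   # 4 million solar masses, in kg
r = 1.8e13                # assumed star-black hole distance (~120 au), in m

potential = G * M_SMBH / (r * c**2)   # dimensionless potential dPhi/c^2
measured_shift = 1e-5                 # upper bound on |delta alpha / alpha|

k = measured_shift / potential
print(f"dPhi/c^2 ~ {potential:.1e}")
print(f"=> coupling constant k <~ {k:.2f}")
# A tighter bound on k is what would discriminate between the dark-sector
# theories mentioned in the text.
```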

Hees now wants to observe stars that are closer to the black hole, as they experience a stronger gravitational potential. The spectral analysis will be harder, but Hees reckons he can reduce the measurement errors 10-fold and has requested new telescope time to do so. We should be optimistic that further improvement will bring new knowledge about α and the universe.

Hunt for the superheavies

Atomic nuclei have been intensely researched for more than a century, but they remain things of mystery and wonder – especially to the nuclear physicists who study them. We know that nuclei are made of protons and neutrons bound together by the residual strong force. But the extreme difficulty of calculating nuclear properties using the Standard Model of particle physics leaves much to be learned about their internal workings. In a sense, nuclei are like the world’s oceans: despite their ubiquity, we are still on the shoreline trying to understand what lies in their depths.

Nuclei are made of just two components, but their properties can be very different indeed. Most of the nuclei in your body, for example, have been around for billions of years, yet some rare nuclei made in the lab can last just tiny fractions of a second before decaying. It is the heaviest of these rare nuclei, and the people who devoted their careers to discovering and characterizing them before they decay, that are the subject of Superheavy: Making and Breaking the Periodic Table by the pharmacist turned science writer Kit Chapman.

The book takes the reader on a romp that begins in 1930s Paris, when Irène and Frédéric Joliot-Curie discovered that heavier elements could be made by bombarding lighter elements with alpha particles (helium nuclei). This was followed shortly thereafter in Rome by Enrico Fermi and the “Via Panisperna Boys” who found that bombardment with neutrons had a similar effect.

The race was on to find new heavy elements and the result was a transformation of the periodic table – which is conveniently included in the frontmatter of Chapman’s book. Like many physicists, the last time that I had a serious look at a periodic table was when I took my last chemistry course – which was 35 years ago – and I rather sheepishly admit that studying the table once more was a revelation. Indeed, I wondered aloud “Where did all those new superheavy elements come from?” Even though as a physics journalist I have covered the twists and turns in the discovery and naming of new elements over the past two decades, I had always looked at each element in isolation and did not fully appreciate how the periodic table – full of holes when I was young – appears much more complete, at least for now.

What I mean by complete is that in the current incarnation of the familiar version of the table, the seventh and final row is full of elements named after people and places – there are no gaps and no systematic names such as unnilseptium that were placeholders as scientists argued over element names. The seventh row begins on the left with francium and radium; is punctuated by the 14 actinoids (from actinium to lawrencium); and then makes the sprint across the transition metals and on towards the noble gases.

The superheavy elements are the final 15 in this row, from rutherfordium with 104 protons, on to dubnium and eventually to oganesson with 118. Those last two names, by the way, reflect the importance of the Soviet/Russian Joint Institute for Nuclear Research (JINR) in the search for superheavy elements. The lab is in Dubna, near Moscow, and since 1989 it has been run by Yuri Oganessian. Strictly speaking, new elements should not be named after living people, but two exceptions have been made – oganesson (118) and seaborgium (106), the latter honouring the nuclear chemist Glenn Seaborg, who created and ran the rare elements programme at the University of California, Berkeley.

GSI in Darmstadt, Germany, and RIKEN’s Radioactive Isotope Physics Laboratory in Japan were also major contributors to the discovery of the superheavy elements. They have been honoured with the names darmstadtium (110), hassium (108) and nihonium (113) – the last two inspired by the Latin name for the German state of Hesse and an alternative name for Japan.

Like many things in modern physics, the drive to create superheavy elements began in earnest during the Second World War with the race to build the atomic bomb – and specifically the development of a way to produce significant amounts of plutonium. That element was discovered in 1941 at the University of California Berkeley by a team that included three future Nobel laureates: Seaborg, Emilio Segrè and Edwin McMillan.

Chapman reveals that Seaborg chose the symbol Pu for plutonium because of the stench of his Berkeley chemistry lab. Although physicists had played an important role in the early discovery of new elements – the work of Seaborg and colleagues was made possible by the cyclotron, which was invented at Berkeley by the physicist and Nobel laureate Ernest Lawrence – it was chemists who isolated the new elements from bombarded targets. This was no mean feat; not only did they have to predict the chemistry of an element that had never been seen before, they also had to work very quickly because the elements have short half-lives. Indeed, Chapman tells us that Berkeley nuclear scientist Albert Ghiorso famously used a souped-up Volkswagen Beetle to transport samples in the shortest time possible across the campus, from where they were made to where they were analysed.

Because much of the early effort to create new elements occurred during the Second World War and the Cold War, there was a certain amount of censorship involved in publication of the work. Before the US entered the war in 1941, Chapman points out that the British were concerned that American scientists were providing the Germans with information that could be used to create nuclear weapons. In 1945 US officials prevented the publication of a Superman comic strip because the superhero was irradiated in a cyclotron – which was described with too much accurate detail for wartime censors.   

While Seaborg is the scientist most associated with the discovery of new elements, it is Ghiorso who holds the record for being involved in the most discoveries. In 1993 he helped discover element 106, putting his tally at 11 and beating the 185-year record held by Humphry Davy. Because the discovery was made at Berkeley, the lab gained the right to name the element. This was at a time when Berkeley, JINR and Darmstadt were in competition to find and name new elements.

The days of isolating new elements and studying their chemistry were waning. By the 1990s researchers often caught only fleeting glimpses of new elements and had to try to determine their decay chains – often seeing only part of the picture. Science is usually done incrementally, with different labs contributing evidence that eventually adds up to a discovery – so deciding who made a discovery, and who therefore had naming rights, was a tricky business.

While this competition between labs resulted in a flurry of new elements, the labs were at loggerheads when it came to naming the new elements. This ruckus was dubbed the “Transfermium Wars”, with transfermium referring to elements beyond fermium (100). The wars ran for about 30 years, starting in the 1960s, and during this period three different elements had been named rutherfordium by different research groups and three different names had been proposed for element 102. What is more, two different names had been proposed to honour the Danish physicist Niels Bohr – bohrium and nielsbohrium – the latter favoured by the Germans who were concerned that bohrium could be confused with boron.

In 1986 the Transfermium Working Group was set up by the governing bodies of chemistry and physics (IUPAC and IUPAP respectively) to sort out the mess and after a decade-long slog it finally came up with a definitive list of names in 1997 – and bohrium (107) won out over nielsbohrium.


As for the future of the superheavy element hunters, Chapman writes that the best guess of physicists is that there could be as many as 172 elements – which means more than 50 could still be up for discovery. But Chapman also points out that discovering more and more heavy elements could be the undoing of the periodic table, bringing about the “end of chemistry”. While that might sound ominous, I’m afraid it doesn’t mean that chemistry students of the future can avoid learning how to balance redox reactions. What Chapman means is that the atoms formed by superheavy elements have properties that are not predicted by their position in the periodic table – a cornerstone of chemistry.

An early hint of this is that research at Dubna using tiny numbers of copernicium (112) and flerovium (114) atoms suggests that these elements’ chemical properties are not as expected given their places in the periodic table. Flerovium, for example, should behave like lead, the element above it in the periodic table, and copernicium should behave like mercury – but that is not what the study found. The likely reason is that these elements have huge charges on their nuclei and large numbers of electrons, so the conventional way of understanding how elements react breaks down.

So rather than heralding the end of chemistry, the superheavy elements look set to open an exciting new chapter.

  • 2021 Bloomsbury Sigma £10.99pb 304pp

A new approach to Hall measurement

This video highlights the MeasureReady M91 FastHall, a revolutionary, all-in-one Hall analysis instrument that delivers significantly higher levels of precision, speed, and convenience to researchers involved in the study of electronic materials.

The M91 FastHall measurement controller combines all of the necessary Hall measurement system (HMS) functions into a single instrument, automating and optimizing the measurement process, and directly reporting the calculated parameters. With Lake Shore’s patented new FastHall measurement technique, the M91 fundamentally changes the way the Hall effect is measured by eliminating the need to switch the polarity of the applied magnetic field during the measurement. This breakthrough results in faster and more accurate measurements, especially when using high field superconducting magnets or when measuring very low mobility materials.

Tracking traffic patterns in the mouse brain


The field of neuroscience has come a long way: beginning with single-electrode electrophysiological recording of one neuron at a time and progressing to simultaneous recording of multiple neurons’ activity using tetrodes implanted in the brains of (typically) mice and monkeys. Today, advances in fluorescence microscopy and fluorescent protein engineering, combined with multielectrode recording, enable acquisition of the spatiotemporal details of electrical activity of neurons in vivo in real time.

In a recent study published in Nature, a team headed up at the Allen Institute went even further, recording the activity of hundreds of neurons at once to create the largest dataset of neurons’ electrical activity in the world.

Towards comprehensive mapping of brain activity

Joshua Siegle, Xiaoxuan Jia and colleagues took advantage of a previously developed technology called Neuropixels to achieve multichannel, large-volume, high-resolution spatiotemporal coverage of neuronal activity. Neuropixels is a silicon probe containing 384 recording channels; using only two probes, more than 700 well-isolated single neurons can be recorded simultaneously across different regions in the (mouse) brain.

The Allen Institute team, led by Shawn Olsen and Christof Koch, used Neuropixels to record activity from hundreds of neurons in up to eight different visual regions of the brain in awake, head-fixed mice viewing diverse visual stimuli. In contrast to sparse multichannel recording, localized large-volume coverage of several brain regions at once can reveal the information flow through the brain. In addition, capturing information from different areas in the brain simultaneously helps reveal how the brain operates through the interaction of these different areas.

Finding hierarchy during information processing

The team implanted Neuropixels probes up to 3.5 mm into the animals’ brains to measure responses from visual cortical (primary visual cortex and five higher cortical areas) and thalamic areas. During recording sessions, mice passively viewed a range of natural and artificial visual stimuli including drifting gratings and full field flashes. By measuring time delays in neuronal activity between different brain regions, as well as the size of visual field that each neuron responds to, the team observed that the information flow follows a hierarchical organization.
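
A toy version of that delay analysis: the lag between two regions can be read off the peak of the cross-correlation of their firing-rate signals. The data here are synthetic stand-ins; real Neuropixels data would be spike counts binned over time.

```python
# Estimate the time lag between two brain regions by cross-correlating
# their firing-rate signals. Synthetic data only: one signal is a delayed,
# noisy copy of the other.
import numpy as np
from scipy.signal import correlate, correlation_lags

rng = np.random.default_rng(0)
rate_v1 = rng.poisson(5, size=1000).astype(float)             # "lower" area
rate_high = np.roll(rate_v1, 7) + rng.normal(0, 1.0, 1000)    # delayed + noise

# Mean-subtract, cross-correlate and locate the peak lag
a = rate_v1 - rate_v1.mean()
b = rate_high - rate_high.mean()
xcorr = correlate(b, a, mode="full")
lags = correlation_lags(len(b), len(a), mode="full")
print(f"Estimated delay: {lags[np.argmax(xcorr)]} time bins")  # recovers ~7
```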


The researchers also carried out Neuropixels recordings in another set of mice that were trained to respond to a visual change. They found a similar hierarchical structure in activity during this behavioural task: neurons in visual areas higher in the hierarchy responded more strongly when the stimulus changed. The recordings enabled the researchers to infer the animal’s success in detecting a change in visual stimuli by just looking at the neuronal electrical activity. Interestingly, observing activity in the higher-order areas allowed the researchers to predict these successes with greater accuracy, suggesting that these areas are more likely to be involved in guiding behaviour.

In the absence of background light (removing the visual input from the mice), the same neurons still fired; however, the order of information flow was lost. This could mean that some sort of hierarchy is needed to process information and understand aspects of the world around us. Although mouse vision is not the same as human vision, neuroscientists can still learn many working principles of sensory processing that generalize, at some level, to how humans perceive and process information.

“At a very high level, we want to understand why we need to have multiple visual areas in our brain in the first place,” says Siegle. “How are each of these areas specialized, and then how do they communicate with each other and synchronize their activity to effectively guide your interactions with the world?”

Qubit needle detected in a haystack of nuclear spins

Researchers at Cambridge University in the UK have found a new way to detect a single quantum bit (qubit) hidden in a dense cloud of 100 000 qubits made from the nuclear spins of quantum dots. The feat, which involves laser light and a single electron that acts like a spin-herding “sheepdog”, might aid the development of a quantum Internet.

Quantum computers can already outperform powerful supercomputers within a narrow range of highly specialized tasks. However, for quantum devices to reach their full potential, researchers need to find some way of networking them into a quantum Internet. One route being explored would involve storing quantum information in an ensemble of coherently interacting spins. Nuclear spins in semiconductor quantum dots – tiny pieces of semiconducting crystals that act like artificial atoms – are one promising possibility.

Hiding a qubit

The problem is that the quantum information stored in the nuclear spins of these dots – or indeed the spins of any other suitable material – is fragile. One way to overcome this is to protect the information-containing spins by “hiding” them in the cloud of spins from the 100 000 atomic nuclei within each quantum dot. An information-containing spin can then be thought of as a “needle” and the cloud of other spins as a “haystack”, explains team leader Mete Atatüre, a physicist in Cambridge’s Cavendish Laboratory.

Semiconducting quantum dots lend themselves to this type of subterfuge because researchers can inject a single nuclear spin that has been excited with laser light into an ensemble of nuclear spins in the dots. In the new work, the Cambridge team injected such a single nuclear excitation, known as a nuclear magnon, into dots made of indium gallium arsenide.

So far, so good. The real hurdle, Atatüre explains, came when he and his colleagues tried to sense the presence of the stored quantum information in the ensemble of spins. This proved difficult because the spins tend to “flip” randomly when addressed, creating a noisy system. The way they got around this challenge was to use a (proxy) electron in the material that acts “like a dog that herds sheep”, as Atatüre puts it. The sheep, in this case, are the ensemble of nuclear spins.

Collective nuclear spin excitation

In their experiments, the researchers measured the spin resonance frequency of their sheepdog electron with high precision using a laser technique known as Ramsey side-of-fringe. They then used this resonance frequency to detect the excitation state of an individual nuclear spin. The detection technique only works, however, if the chaotic ensemble of nuclear spins is first cooled down to ultralow temperatures (using another beam of laser light) so that the spins begin to act as a collective nuclear spin excitation — or spin wave – with a defined state.

Atatüre explains that this is because a single nuclear spin injected into a spin wave is far easier to pick out than a spin injected into a chaotic (non-cooled) ensemble of spins. “If we imagine our cloud of spins as a herd of 100 000 sheep moving randomly, one sheep suddenly changing direction is hard to see,” he says. “But if the entire herd is moving as a well-defined wave, then a single sheep changing direction becomes highly noticeable.”

By controlling the collective state of the 100 000 spins, the researchers were able to detect the existence of the stored quantum information as a flipped qubit with a precision as high as 1.9 ppm. For their apparatus, this precision represents the fundamental limit set by quantum mechanics.

Having harnessed control and sensing in such a large ensemble of nuclei, the Cambridge team says it now plans to demonstrate storage and retrieval of a full-blown arbitrary quantum bit using its technique. “Being able to do this will allow us to overcome a major building block for the quantum Internet: a deterministic quantum memory connected to light,” Atatüre tells Physics World.

The present work is detailed in Nature Physics.

Deep learning enables safer heart scans with lower radiotracer dose


Positron emission tomography (PET) with the radiotracer 18F-FDG provides an important tool for assessing the health of the heart muscle in patients with ischemic heart disease, in which narrowed coronary arteries reduce the heart’s blood supply. Such PET scans help identify the level of damage to the heart muscle and play an important role in clinical decision making.

Current guidelines recommend injecting a 200–350 MBq dose of 18F-FDG. But lowering this tracer dose will decrease the patient’s radiation exposure – an essential goal of any diagnostic procedure – as well as reducing imaging costs and potentially opening up new applications. The downside, however, is that a lower tracer dose may lead to poorer quality images, thereby reducing diagnostic accuracy.

One approach proposed to address this problem is to employ artificial intelligence algorithms to restore image quality. Researchers from Rigshospitalet in Denmark have now investigated the use of deep learning to reduce noise in low-dose PET images. They validated the diagnostic accuracy of this approach using 18F-FDG images of patients with ischemic heart disease, detailing their findings in Physics in Medicine & Biology.

First author Claes Nøhr Ladefoged and colleagues retrospectively examined 168 patients referred for cardiac viability testing using 18F-FDG-PET/CT. Patients received approximately 300 MBq of 18F-FDG and one hour later underwent a low-dose CT scan followed by a thoracic PET scan.

The researchers reconstructed both static and ECG-gated (with eight gates) PET images. They also simulated dose-reduced images with 1% and 10% of the total counts, corresponding to tracer doses of 3 and 30 MBq, respectively. They then trained U-Net, a 3D convolutional neural network developed for biomedical image segmentation, to de-noise the four sets of dose-reduced PET images (static and gated data with the two dose reduction thresholds).
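
As a rough sketch of that training set-up – with a tiny 3D convolutional network standing in for the authors’ U-Net, synthetic volumes in place of real PET data, and assuming PyTorch is available:

```python
# Toy de-noising set-up: train a small 3D CNN to map noisy "low-dose"
# volumes to clean "full-dose" ones. This is far simpler than the authors'
# U-Net, and the data are synthetic stand-ins for reconstructed PET images.
import torch
import torch.nn as nn

denoiser = nn.Sequential(              # tiny 3D CNN, not a real U-Net
    nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, kernel_size=3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic "full-dose" volumes and their noisy "low-dose" counterparts
full_dose = torch.rand(8, 1, 16, 16, 16)
low_dose = full_dose + 0.3 * torch.randn_like(full_dose)

for _ in range(50):
    opt.zero_grad()
    loss = loss_fn(denoiser(low_dose), full_dose)
    loss.backward()
    opt.step()
print(f"final training loss: {loss.item():.4f}")
```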

Clinical metrics

Diagnosis of patients with ischemic heart disease is based on several factors, including estimates of end-diastolic volume (EDV), end-systolic volume (ESV), left ventricular ejection fraction (LVEF) and FDG defect extent (deviation from inter-subject normal perfusion). Patients with normal myocardial perfusion usually have low extent scores and high LVEF, although there’s no specific threshold.
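
For reference, LVEF follows directly from the two gated volumes via the standard definition LVEF = (EDV − ESV)/EDV; the values below are illustrative.

```python
# Left ventricular ejection fraction from end-diastolic and end-systolic
# volumes. Input values are illustrative, not patient data.
def lvef(edv_ml: float, esv_ml: float) -> float:
    """Left ventricular ejection fraction (%) from EDV and ESV in ml."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

print(f"LVEF = {lvef(140, 60):.0f}%")   # 57% -- above the 50% 'normal' cut-off
print(f"LVEF = {lvef(180, 120):.0f}%")  # 33% -- impaired
```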

The researchers compared full-dose, dose-reduced and de-noised dose-reduced PET images from 105 patients. Using Corridor4DM software, which automatically segments the left ventricle, they extracted values of EDV, ESV and LVEF from the gated images, and FDG defect extent from the static images.

For EDV and ESV measurements, the full-dose and 1% dose-reduced PET images matched well, with a correlation coefficient of above 0.93, which increased to above 0.98 with de-noising. Significantly, for LVEF, de-noising increased this correlation from 0.73 to 0.89. In the 10% dose-reduced images, the team saw excellent correlation across all metrics with only minor improvements after de-noising. They note that none of the de-noised images were significantly different from the full-dose images.

The accuracy of diagnosis, based on European Society of Cardiology guidelines that define normal LVEF as 50% or above, improved after de-noising the dose-reduced images. When using the 1% dose-reduced images, 13 patients had a different diagnosis to that suggested by the full-dose measurement. De-noising improved this to just two patients. For the 10% dose-reduced images, five patients had discordant diagnosis before de-noising and all diagnoses agreed after de-noising.

The researchers note that the FDG defect extent score was, on average, only moderately affected by the dose reduction, with even the 1% dose-reduced images providing similar scores to the full-dose images. This is likely because this metric is measured from static PET images, in which all true coincidence events are used. In contrast, ESV and EDV measurements are taken from gated PET images, which only include one eighth of the counts in each gate, resulting in greater noise.

The reduced-dose images also exhibited a marked improvement in image quality after de-noising. Comparing standardized uptake value (SUV) measurements for the 1% dose-reduced and the full-dose static images showed considerable bias in the dose-reduced images. After de-noising, however, they exhibited near-identical SUV. SUV in the 10% dose-reduced images largely resembled those of the full-dose images, but were further improved using the de-noising model.

The researchers conclude that their deep-learning noise-reduction model enables significant 18F-FDG dose reduction in cardiac PET imaging without losing diagnostic accuracy. “A reduction to one hundredth of the dose is possible with quantitative clinical metrics comparable to that obtained with a full dose,” they write. “This dose reduction is important for patients, staff, general radiation protection and healthcare economy.”

Plasmonic metasurface gives high-speed optical WiFi a boost

Physicists and engineers at Duke University in the US have developed a new metamaterial that could substantially increase the speed of wireless optical communications. The material, which consists of an array of “nanoantennas” made from cubes of silver just 60 nm wide, can capture light within a 120-degree field of view and relay it into a narrow angle with a record-high efficiency of around 30%.

Although light in the visible and infrared part of the electromagnetic spectrum carries more information per unit time than the radio waves used in wireless technologies such as Bluetooth and WiFi, data transmission at visible and infrared wavelengths is currently restricted to fibre-optic cables. One reason for this is that wireless receivers must be able to capture light from different directions simultaneously. The simplest way to do this is to make the receivers physically bigger, but that reduces the speed at which they can transmit information onwards, thus lessening any advantage.

In 2016, researchers at the Connectivity Lab (a subsidiary of Facebook) developed a new type of receiver that could, in principle, be used for wireless communications at optical frequencies. Their device consisted of a spherical bundle of fluorescent fibres that captured blue light and re-emitted green light that could then be funnelled into a small receiver. However, it could only transmit two gigabits of information per second, which compares poorly with standard fibre-optic providers (which typically offer around 10 Gb/s) as well as high-end systems offering thousands of Gb/s.

Speeding up

A team led by Maiken Mikkelsen has now used the physics of surface plasmons to speed up the Connectivity Lab’s design. Plasmons are quasiparticles that arise when light interacts strongly with electrons in a nanostructured metal, causing the electrons to oscillate collectively. By adjusting the shape, size and arrangement of the nanoscale structures, the metallic material can be tailored to capture light at specific frequencies, increasing the device’s light-absorbing speed and light-emitting efficiency by a factor of more than 1000.

Mikkelsen and her colleagues made their plasmonic metasurface by depositing an array of silver nanocubes, spaced 200 nm apart, atop a thin (75 nm) silver substrate coated with a polymer containing four layers of fluorescent dye. The researchers report that the interaction of the nanocubes with the electrons in the substrate 7 nm below enhances the overall fluorescence of the dye by 910 times and its light emission rate by 133 times. Such values were previously only possible for isolated, highly optimized single nanostructures, not for whole arrays. “While we haven’t yet integrated a fast photodetector like the Connectivity Lab did in their original work, we have solved the major bottleneck in the design,” Mikkelsen says.

Centimetre-sized sample

The researchers also observe that the metasurface can collect fast-modulated light with a 3 dB bandwidth exceeding 14 GHz from a 120-degree field of view and relay it into a narrow angle with an overall efficiency of around 30%. This value, they say, is a record high “to the best of our knowledge”.
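
One way to see why the 133-fold faster emission rate matters for bandwidth: a fluorophore behaves roughly as a low-pass filter with a 3 dB bandwidth of 1/(2πτ). The intrinsic dye lifetime below is an assumed, typical value for organic dyes, not a figure from the paper.

```python
# Modulation bandwidth of a fluorophore, f_3dB ~ 1/(2*pi*tau), before and
# after the reported 133-fold emission-rate enhancement. The intrinsic
# lifetime is an assumed, typical value.
import math

tau_intrinsic = 1.6e-9            # assumed dye lifetime, seconds
enhancement = 133                 # emission-rate speed-up reported by the team
tau_enhanced = tau_intrinsic / enhancement

for label, tau in [("bare dye", tau_intrinsic), ("on metasurface", tau_enhanced)]:
    f3db = 1 / (2 * math.pi * tau)
    print(f"{label:15s}: tau = {tau:.2e} s, f_3dB = {f3db / 1e9:.2f} GHz")
# The enhanced case lands in the >10 GHz regime, consistent with the
# 14 GHz bandwidth the researchers report.
```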

The researchers, who describe their experiments in Optica, say they can fabricate their metasurface over areas as large as centimetres using a simple technique known as liquid deposition without any loss in efficiency. They now plan to assemble several plasmonic devices together to cover a 360° field of view.
