Quantum mechanical wormholes fill gaps in black hole entropy

A new theoretical model could solve a 50-year-old puzzle on the entropy of black holes. Developed by physicists in the US, Belgium and Argentina, the model uses the concept of quantum-mechanical wormholes to count the number of quantum microstates within a black hole. The resulting counts agree with predictions made by the so-called Bekenstein-Hawking entropy formula and may lead to a deeper understanding of these extreme astrophysical objects.

Black hole thermodynamics

Black holes get their name because their intense gravity warps space-time so much that not even light can escape after entering them. This makes it impossible to observe what goes on inside them directly. However, thanks to theoretical work done by Jacob Bekenstein and Stephen Hawking in the 1970s, we know that black holes have entropy, and the amount of entropy is given by a formula that bears their names.

In classical thermodynamics, entropy arises from microscopic chaos and disorder, and the amount of entropy in a system is related to the number of microstates consistent with a macroscopic description of that system. For quantum objects, a quantum superposition of microstates also counts as a microstate, and entropy is related to the number of ways in which all quantum microstates can be built out of such superpositions.
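For reference, two textbook relations pin down exactly what any microscopic description must reproduce: the Boltzmann formula ties entropy to the number of microstates, while the Bekenstein-Hawking formula fixes a black hole’s entropy in terms of its horizon area. Equating the two gives the target microstate count:

S = k_B \ln N, \qquad S_{\mathrm{BH}} = \frac{k_B c^{3} A}{4 G \hbar} \quad\Longrightarrow\quad N = \exp\!\left(\frac{c^{3} A}{4 G \hbar}\right),

where A is the area of the event horizon, G is Newton’s gravitational constant, c is the speed of light and ħ is the reduced Planck constant.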

The causes of black hole entropy are an open question, and a purely quantum mechanical description has so far eluded scientists. In the mid-1990s, string theorists derived a way of counting a black hole’s quantum microstates that agrees with the Bekenstein-Hawking formula for certain black holes. However, their methods only apply to a special class of supersymmetric black holes with finely tuned charges and masses. Most black holes, including those produced when stars collapse, are not covered.

Beyond the horizon

In the new work, researchers from the University of Pennsylvania, Brandeis University and the Santa Fe Institute, all in the US, together with colleagues at Belgium’s Vrije Universiteit Brussel and Argentina’s Instituto Balseiro, developed an approach that allows us to peek inside a black hole’s interior. Writing in Physical Review Letters, they note that an infinite number of possible microstates exists behind a black hole’s event horizon – the boundary surface from which no light can escape. Due to quantum effects, these microstates can slightly overlap via tunnels in space-time known as wormholes. These overlaps make it possible to describe the infinite microstates in terms of a finite set of representative quantum superpositions. These representative quantum superpositions can, in turn, be counted and related to the Bekenstein-Hawking entropy.
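The counting can be illustrated with a toy numerical sketch – emphatically not the authors’ gravitational calculation, just a piece of linear algebra that captures the logic. If a large family of candidate states has small random overlaps, the rank of their Gram matrix (the number of independent superpositions they actually span) grows with the number of states only until it saturates at the dimension of the underlying space, no matter how many further states are added.

import numpy as np

# Toy illustration only: many slightly overlapping states span a finite
# number of independent directions, set by the dimension of the space.
rng = np.random.default_rng(0)
dim = 50                 # stand-in for the true number of independent microstates
for n_states in (10, 50, 200, 1000):
    states = rng.normal(size=(n_states, dim))               # random "microstates"
    states /= np.linalg.norm(states, axis=1, keepdims=True)
    gram = states @ states.T                                 # matrix of pairwise overlaps
    print(f"{n_states:5d} states -> Gram-matrix rank {np.linalg.matrix_rank(gram)}")
# The rank never exceeds dim, mirroring how an infinite family of
# wormhole-linked microstates counts only a finite number of independent ones.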

According to Vijay Balasubramanian, a physicist at the University of Pennsylvania who led the research, the team’s approach applies to black holes of any mass, electric charge and rotational speed. It could therefore offer a complete explanation of the microscopic origin of black hole thermodynamics. In his view, black hole microstates are “paradigmatic examples of complex quantum states with chaotic dynamics”, and the team’s results may even hold lessons for how we think about such systems in general. One possible extension would be to search for a way to use subtle quantum effects to detect black hole microstates from outside the horizon.

Juan Maldacena, a theorist at the Institute for Advanced Study in Princeton, US, who was not involved in this study, calls the research an interesting perspective on black hole microstates. He notes that while one cannot compute the inner product between black hole pure states prepared via different processes, gravity theory – through wormhole contributions – makes it possible to compute the statistical properties of these overlaps. The answer, he says, is statistical in nature and in the same spirit as an earlier computation of black hole entropy performed by Hawking and Gary Gibbons in 1977, but it provides a more vivid picture of the possible microstates.

Individual polyatomic molecules are trapped in optical-tweezer arrays

Individual polyatomic molecules have been trapped in arrays of optical tweezers for the first time. Researchers in the US were able to control individual quantum states of the three-atom molecules and the technique could find applications in quantum computing and searches for physics beyond the Standard Model.

Cooling molecules to temperatures near absolute zero is an exciting frontier in ultracold physics because it provides a window into how chemical processes are driven by quantum mechanics. For decades, physicists have been cooling atoms to ultracold temperatures. However, molecules are much more challenging to cool because they can hold energy in many more degrees of freedom (rotation and vibration) – and cooling a molecule requires removing the energy from all of these. Considerable success has been achieved with diatomic molecules, but the number of degrees of freedom grows steeply with every additional atom, so progress with larger molecules has been more limited.

Now, John Doyle, Nathaniel Vilas and colleagues at Harvard University have cooled individual triatomic molecules to their quantum ground states. Each molecule comprises a calcium, an oxygen and a hydrogen atom.

Linear geometry

“The main thing that we like about this molecule is that, in the ground state, it has a linear geometry,” explains Vilas, “but it has a low-lying excited state with a bent geometry…and that gives you an additional rotational degree of freedom.”

In 2022, a team including Vilas and Doyle laser cooled a cloud of these molecules to 110 μK in a magneto-optical trap. Until now, however, no one had cooled individual molecules containing more than two atoms to their quantum ground states.

In the new work, Vilas and colleagues loaded their molecules from a magneto-optical trap into an array of six adjacent optical tweezer traps. They used a laser pulse to promote some of the molecules to an excited state: “Because this excited molecule is there, there’s a much larger cross section for the molecules to interact,” says Vilas. “So there’s some dipole-dipole interaction between the ground state and excited state that leads to inelastic collisions, and they get lost from the trap.” Using this method, the researchers reduced the number of molecules in almost all the tweezer traps to just one.

Before they could proceed with imaging the molecules, the researchers had to decide what wavelength of light they should use for the optical tweezer. The central requirement is that the tweezer must not cause unintended excitation into dark states. These are quantum states of the molecule that are invisible to the probe laser. The energy structure of the molecule is so complex that many of the high-lying states have not been assigned to any motion of the molecule, but the researchers found empirically that light at a wavelength of 784.5 nm led to minimal loss.

Population accumulation

The researchers then used a 609 nm laser to drive a transition from the linear configuration of the molecule, in which the three atoms are in a line, to a vibrational mode in which the line bends. The molecules were left in a combination of three near-degenerate spin sublevels. By subsequently pumping the molecules with a 623 nm laser, they excited the molecules to a state that either decayed back to one of the original sublevels or to a fourth, lower-energy sublevel that did not absorb the laser. With repeated excitation and decay, therefore, the population accumulated in the lower sublevel.
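A minimal rate-model sketch shows why this works – purely illustrative, with an assumed branching ratio rather than the measured one: in each pump cycle, molecules in the three “bright” sublevels are excited and then decay either back to a bright sublevel or into the dark one, where they stay.

import numpy as np

# Illustrative optical-pumping model; the branching ratio p_dark is an assumption.
p_dark = 0.25                            # assumed probability of decaying to the dark sublevel
pop = np.array([1/3, 1/3, 1/3, 0.0])     # three bright sublevels + one dark sublevel
for _ in range(30):
    bright = pop[:3].sum()               # everything bright gets pumped this cycle
    pop[:3] = bright * (1 - p_dark) / 3  # decay back, spread over the bright sublevels
    pop[3] += bright * p_dark            # decay into the dark sublevel accumulates
print(f"dark-sublevel population after 30 pump cycles: {pop[3]:.4f}")  # -> close to 1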

Finally, the researchers showed that a small radio-frequency magnetic field could drive Rabi oscillations between two energy levels of the system. This could be hugely important for future research in quantum computing: “The geometry doesn’t have any bearing on this current work…We have these six traps and each one is behaving completely independently,” says Vilas. “But you can think of each one as an independent molecular qubit, so our goal would be to start implementing gates on these qubits.” It could even be possible to encode information in multiple orthogonal degrees of freedom, creating “qudits” that carry more information than qubits.
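For the idealized case of a resonantly driven two-level system – the textbook picture behind the Rabi oscillations described above – the probability of finding the molecule in the upper level oscillates as

P_e(t) = \sin^{2}\!\left(\frac{\Omega_R\, t}{2}\right),

where the Rabi frequency Ω_R is set by the strength of the applied radio-frequency field.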

Other possibilities include searches for new physics.  “Because of the diverse structure of these molecules there’s coupling between the structure and different types of new physics – either dark matter or high-energy particles beyond the Standard Model, and having them controlled at the level we have now will make the spectroscopic methods way more sensitive,” says Vilas.

“It’s sort of a milestone in the field because it says we can control even single molecules that have more than two atoms,” says Lawrence Cheuk of Princeton University in New Jersey. “If you add a third atom, you get a bending mode, and this is very useful in certain applications. So in the same work, the Doyle group not only showed they can trap and detect single triatomics: they also showed that they can manipulate in a coherent manner the bending mode inside these triatomics.” He is intrigued as to whether still larger molecules can be manipulated, opening up the study of features such as chirality.

The research is described in Nature.   

Mixing water and oil: no surfactants needed

Oil and water famously don’t mix – at least, not without adding a surfactant such as soap to coax them into a stable combination. Now, however, researchers in France and the US have turned this conventional wisdom on its head by showing that the two liquids can, in fact, mix without a surfactant. The finding could have wide-reaching implications for industries that make heavy use of such mixtures, including food, cosmetics, health, paints and packaging to name just a few.

A mixture of two immiscible liquids such as water and oil is known as an emulsion. When an emulsion is shaken vigorously, one of its component liquids may disperse into small droplets within the other. But if the emulsion is left to stand, its components invariably separate out again.

The main driver of this separation is that as droplets of each liquid move closer to each other, they coalesce into ever-larger droplets. To prevent this, a third component may be added that is amphiphilic, meaning that it has an affinity for the interface between the mixture’s two components. Today’s industrial emulsions rely on the use of such materials, which are termed surfactants. However, many surfactants are toxic for both humans and the environment. Reducing their use (or doing away with them altogether) would therefore be highly beneficial.

A counter-intuitive phenomenon

In the latest work, researchers from the Colloïdes et Matériaux Divisés Laboratory at the ESPCI in Paris, France; the French company Calyxia, which specializes in the design and manufacture of biodegradable microcapsules; and Harvard University in the US studied mixtures composed solely of water and various types of oil. Within these normally immiscible mixtures, they observed ultrathin but abnormally stable films of oil spontaneously appearing between the dispersed droplets of water.

“This phenomenon systematically induces adhesion between the droplets while preventing them from coalescing, so allowing us to disperse large proportions of water (80% by volume or more) in oil,” explains Jérôme Bibette, the chemical physicist and ESPCI laboratory director who led the research.

Stable over several weeks

The phenomenon, which is detailed in Science, works best for highly polar oils that contain both hydrophilic and hydrophobic components and have a high molecular weight. These criteria exclude aliphatic hydrocarbons such as methane and polyethylene, for example, but include oils containing alternating oxygen and carbon atoms – a category that encompasses all vegetable oils.

The researchers found that these oils can change their configuration as soon as they are confined between two drops of water by “choosing” to preferentially locate their hydrophilic parts towards the water and the hydrophobic parts away from it. “The ultrathin adhesive film induced by the affinity of the hydrophobic parts develops spontaneously as soon as the two interfaces approach,” Bibette says. “The film then acquires an enormous viscosity while reducing the free energy of the interface – something that manifests itself by the water and oil drops adhering together.”

Such spontaneous gelling between two immiscible liquids had never been observed before, he adds.

Since most vegetable oils can be polymerized, combining them with water could allow researchers to make perfectly biodegradable polymeric materials. For Bibette, one of the most obvious applications that springs to mind is biodegradable capsules for industries such as cosmetics and fragrances.

The technique could also allow researchers to create new types of plastics comprising biodegradable polymers and up to 90% water by volume, he tells Physics World. “Both phases could be made (and controlled to be) homogenous throughout the entire mixture, which could allow us to produce a unique bi-continuous, coexisting hydrophilic and hydrophobic material,” he says. “This could have applications in areas as diverse as tissue engineering, biodegradable packaging and materials for replacing plastics in general.”

Sound and vision: synchrotron insights illuminate crystal nucleation and growth

Curiosity-driven research using low-power ultrasound fields to investigate the fundamental physics of crystal nucleation – the formation of crystal nuclei and “embryos” in the liquid or solution phase prior to macroscopic crystal growth – is opening a pathway to new, industrially significant methods of process control for crystallization.

Although it’s still relatively early days, scientists from the University of Leeds, UK, are confident their experimental and theoretical insights will ultimately translate into downstream process-equipment innovation. The end game: at-scale commercial opportunities to realize less-energy-intensive modes of materials production – as well as enhanced quality control – across industries as diverse as food manufacturing, pharmaceuticals, agrochemicals, polymer extrusion and personal care products.

The specialist programme on so-called “insonification” is headed up by Megan Povey, professor of food physics at Leeds, who has built an international reputation in the application of ultrasound spectroscopy for food characterization and ultrasound processing in food manufacturing. More widely, her team’s priorities span computer and mathematical modelling of foods; commercially deployable sensors and instrumentation for safer foodstuffs; and novel process technologies for sustainable production. All of that is built on a solid fundamental understanding of materials properties, structure and behaviour.

Unpacking the fundamentals of food

Povey’s latest scientific endeavour is true to those core research themes. On the one hand, her team is developing granular mathematical-physical models – based on the rectification of heat and mass transport – to understand how low-power ultrasound influences the behaviour of a wide range of nucleating systems. “Everything I do in food physics, I need a theoretical underpinning – a model – before moving onto the experimental aspects,” Povey explains. “After all, empiricists need more than empiricism. They need physical models they can iterate and optimize with real-world experimental data.”

Along a parallel coordinate, Povey and colleagues are pursuing an experimental line of enquiry that relies on low-power ultrasound to control crystal nucleation – in effect, the insonification of a solution or liquid without inducing cavitation (i.e. formation of small vapour-filled bubbles or voids that can collapse and generate shock waves within the fluid medium). In this context, low power is defined by a mechanical index (MI) of 0.08 or less, a measure of the maximum amplitude of the ultrasonic pressure pulse (and sufficiently low as to minimize the likelihood of cavitation).
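As a back-of-the-envelope sketch (this is the standard definition of the mechanical index, not a detail taken from the Leeds work), MI is the peak rarefactional pressure in megapascals divided by the square root of the centre frequency in megahertz, so the low-power criterion can be checked directly:

import math

def mechanical_index(peak_negative_pressure_mpa: float, centre_freq_mhz: float) -> float:
    """Standard definition: peak rarefactional pressure (MPa) / sqrt(centre frequency (MHz))."""
    return peak_negative_pressure_mpa / math.sqrt(centre_freq_mhz)

# Example numbers are assumptions, chosen only to illustrate the MI <= 0.08 criterion.
mi = mechanical_index(peak_negative_pressure_mpa=0.07, centre_freq_mhz=1.0)
print(f"MI = {mi:.3f} -> {'low power (non-cavitational)' if mi <= 0.08 else 'above threshold'}")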

“By controlling the ultrasound frequency, power and duration according to the nature of the crystallizing material, we have shown that it is possible to promote or suppress crystal formation,” notes Povey. “Equally, the level of control we see is much more granular and extends to the rates of nucleation and crystallization as well as the numbers, sizes, geometries [habits] and morphology of the crystals in emergent networks.”

The upsides for industry, she thinks, could be game-changing. “Think faster nucleation and uniform nucleation throughout the sonicated volume as well as the generation of smaller, purer and more uniform crystals.” A case in point is the production of pharmaceutical “actives”, where control of polymorphism (the ability of a single chemical species to crystallize in different structures, each with potentially different chemical and physical properties) is often crucial. “The dreadful example of the thalidomide affair highlights the dangers inherent in the production of the wrong polymorph,” she adds.

Diamond lights up crystal nucleation

If that’s the back-story, what of the experimental detail? Front-and-centre in this regard are the big-science capabilities of Diamond Light Source, the UK’s national synchrotron research facility (located at the Harwell Science and Innovation Campus, Oxfordshire). Globally significant, Diamond is among an elite cadre of large-scale X-ray sources that is shedding light on the structure and behaviour of matter at the atomic and molecular level across all manner of fundamental and applied disciplines – from clean-energy technologies to pharma and healthcare; from food science to structural biology and cultural heritage.

Megan Povey and Andy Price

Over the past decade, Povey and her team have been regular visitors to Diamond’s I22 beamline which, since entering operation in 2007, has hosted a dedicated programme for soft-matter and polymer research as well as activities in biological materials and environmental science. At I22, for example, the Leeds team is able to conduct X-ray diffraction (XRD) studies on a multipurpose instrument that combines small-angle and wide-angle X-ray scattering (SAXS/WAXS) modalities. The beamline also includes a versatile sample platform to support in operando experiments – following structural evolution in solutions and melts, for example, over timescales from milliseconds to minutes.

In terms of core specifications, the I22 insertion device delivers X-rays to the sample with energies between 7 and 22 keV (and a beam size of 240 × 60 microns for the main beamline). “The simultaneous recording of both SAXS and WAXS data in tandem means we can probe all length scales with high resolution – from a few angstroms up to the mesoscale at several hundred nanometres [and billions of molecules],” explains Povey. “Using a specially designed acousto-optic cell on the I22 beamline, we have accumulated experimental evidence for two-stage crystal nucleation as well as the impact of non-cavitational ultrasound on each step in the nucleation process.”
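Those numbers can be translated into real-space length scales with two standard relations (a back-of-the-envelope check rather than a detail of the I22 set-up): the X-ray wavelength in angstroms is roughly 12.398 divided by the photon energy in keV, and a scattering vector q corresponds to a probed length of 2π/q.

import math

def wavelength_angstrom(energy_kev: float) -> float:
    """X-ray wavelength from photon energy: lambda [angstrom] ~ 12.398 / E [keV]."""
    return 12.398 / energy_kev

def probed_length_nm(q_inverse_nm: float) -> float:
    """Real-space length scale probed at scattering vector q: d = 2*pi/q."""
    return 2 * math.pi / q_inverse_nm

for energy in (7.0, 22.0):                    # the I22 energy range quoted above
    print(f"{energy:4.1f} keV -> {wavelength_angstrom(energy):.2f} angstrom")
for q in (20.0, 0.02):                        # illustrative q values in nm^-1 (assumptions)
    print(f"q = {q:6.2f} nm^-1 -> d = {probed_length_nm(q):7.1f} nm")
# Output spans a few angstroms (WAXS) up to several hundred nanometres (SAXS).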

A case in point is a series of XRD studies tracking the crystallization of a wax (eicosane) from an organic solvent in the presence and absence of an insonifying ultrasound field. The goal: to investigate the effects of insonification both on the long-range order of eicosane molecules (via SAXS) and on the nanoscale molecular packing (using WAXS). In this way, Povey and colleagues have been able to identify mesoscale effects due to insonification that are absent in the quiescent fluid. The SAXS/WAXS investigations also enabled the Leeds team to characterize – and follow dynamically – the size of the regimes that precede the crystal nucleation step (in advance of initial crystal embryos transitioning into uncontrolled crystal growth).

“We’ll start with the wax emerging out of solution, for example, and follow that process at around 5-6 frames per second,” Povey explains. What they see in the first instance is the emergence of long-range order in the liquid under the influence of sonication. Then, in an increasingly saturated solution, this long-range order transitions into a phase separation in the so-called “dead zone”, which hosts the first stage of nucleation before the formation of crystal embryos. “At all stages,” she adds, “the low-power ultrasound can alter the molecular ordering and we see those effects unfold like a movie in real-time on I22.”

We think our insonification technique could rewrite the rules in injection moulding – reducing waste, reducing cost and increasing versatility in favour of sustainability

Megan Povey

Complementing the I22 SAXS/WAXS experiments, Povey and graduate student Fei Sheng have also used pulse-echo ultrasound techniques (pulse width of the order of 5 μs) to quantitatively monitor the behaviour of crystal embryos in supersaturated solutions (i.e. containing more than the maximum amount of solute that can be dissolved at a given temperature). Using ultrasound to probe an aqueous copper sulphate sample in the acousto-optic cell, they were able to measure the appearance and subsequent disappearance of solid material associated with crystal embryos.

It’s this ability to monitor and control emergent crystal nuclei in the dead zone – where crystallization behaves like a casino in the absence of acoustic control – which has the potential to transform a wide range of industrial processes. One near-term commercial opportunity already under discussion with industry partners is the formation of plastic parts by injection moulding – traditionally an energetically expensive and sometimes hit-and-miss process. “We think our insonification technique could rewrite the rules in injection moulding – reducing waste, reducing cost and increasing versatility in favour of sustainability,” claims Povey.

Out of the lab, into the factory

Meanwhile, the applied R&D effort is addressing other aspects of technology translation – notably the integration of Povey’s theoretical framework for insonification and crystal nucleation with dissipative particle dynamics (DPD) computational modelling (a mesoscopic simulation technique that’s relevant across a variety of complex hydrodynamic phenomena). The motivation here is to develop a predictive method capable of modelling the impact of low-power ultrasound fields on a wide range of nucleating systems – and, by extension, control crystal formation reliably and repeatably.

Activity on the DPD front is led by Lewtas Science and Technologies, a UK consultancy specializing in advanced materials, working in collaboration with the Hartree National Centre for Digital Innovation, a UK outfit that supports technology transfer and commercialization in advanced computing and software.

Significantly, Povey and Ken Lewtas, a polymer scientist who heads up the eponymous consultancy, have also filed an international patent to protect the intellectual property around the use of insonification in a range of industrial contexts, including (but not limited to) the tempering of chocolate (the process of slowly heating and then cooling chocolate so that the fat molecules crystallize into chocolate with the desirable properties of gloss, snap and cooling in the mouth); the crystallization of thermoplastic polymers (to control mechanical, optical or barrier properties); and even the waxing of diesel fuels and heating oils (which can impact fuel flows at low temperatures).

“Our hope,” concludes Povey, “is that industry partners will, sooner rather than later, be in a position to routinely apply our insonification technique and low-power ultrasound to promote or suppress crystallization across diverse production processes.”

The secrets of success in synchrotron science

Nick Terrill is the principal beamline scientist for Diamond’s I22 multipurpose SAXS/WAXS facility. Here he tells Physics World how his team of five staff scientists is supporting the University of Leeds food physics programme in sonocrystallization.

Nick Terrill, Principal Beamline Scientist

How much planning goes into a multiyear research effort like this?

Our interaction with Megan and colleagues starts well in advance of their in situ beam-time at I22. As such, the requirements-gathering involves virtual and on-site meetings over a period of several months to ensure we’re all talking the same language and that the experimental set-up on the beamline is optimized to deliver the data they need, when they need it. There are no short-cuts, just exhaustive preparation: it takes a lot of planning and iteration to ensure the scientific users get good-quality results while they’re here at I22 for the three or four days of experiments.

Presumably there’s a lot of focus on system integration?

Correct. In this case, we spent a lot of time working with Megan and the team to figure out how to integrate their ultrasound instrumentation and acousto-optic sample cell into the beamline such that they didn’t compromise SAXS/WAXS data collection. I22’s dedicated Sample Environment Development Laboratory (SEDL) is crucial in this regard – basically an offline carbon-copy of the main beamline without the X-rays. Thanks to SEDL, external scientists can bring along their specialist kit – in this case, the ultrasound and acousto-optic subsystems – and work closely with the I22 team to ensure the hardware/software integration is as good as it can be prior to running live experiments.

What’s the secret of a successful collaboration between your team and the I22 end-users?

Our job is to translate external users’ scientific objectives into realistic experiments that will run reliably on the beamline. You can only achieve that with open dialogue and two-way collaboration. With Megan’s team, we had to triangulate to make sure a range of modalities worked seamlessly together – ultrasound diagnostics, ultrasound excitation and XRD data collection. The best collaborations are always a win-win, in that we also learn a lot of lessons along the way. That learning is key to our continuous improvement as a team and the ongoing scientific support we offer to all our I22 end-users.

Further reading

M J Povey et al. 2023 ‘Sounding’ out crystal nuclei – A mathematical-physical and experimental investigation J. Chem. Phys. 158 174501

Single-cell nanobiopsy explores how brain cancer cells adapt to resist treatment

Infographic of a double-barrel nanopipette

Glioblastoma (GBM) is the deadliest and most aggressive form of brain cancer. Almost all tumours recur after treatment, as surviving cells transform into more resilient forms over time to resist further therapies. To address this challenge, scientists at the University of Leeds have designed a novel double-barrel nanopipette and used it to investigate the trajectories of individual living GBM cells as they change in response to treatment.

The nanopipette consists of two nanoscopic needles that can simultaneously inject exogenous molecules into and extract cytoplasm samples from a cell. The nanopipette is integrated into a scanning ion conductance microscope (SICM) to perform nanobiopsies of living cells in culture. Unlike existing techniques for studying single cells, which usually destroy the cell, the nanopipette can take repeated biopsies of a living cell without killing it, enabling longitudinal studies of an individual cell’s behaviour over time.

Writing in Science Advances, the researchers explain that SICM works by measuring the ion current between an electrode inserted in a glass nanopipette and a reference electrode immersed in an electrolytic solution containing the cells. Nanobiopsy is performed when an ion current flows through the nanopore at the tip of the nanopipette after applying a voltage between the two electrodes. In their double-barrel nanopipette, one barrel acts as an electrochemical syringe to perform cytoplasmic extractions; the second contains an aqueous electrolyte solution that provides a stable ion current for precise positioning and nanoinjection prior to nanobiopsy.

The semi-automated platform enables extraction of femtolitre volumes of cytoplasm and simultaneous injection into individual cells. The platform provides automated positioning of the nanopipette using feedback control (the ion current drops when the nanopipette approaches the sample), while detection of particular current signatures indicates successful membrane penetration of a single cell.
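A minimal sketch of that feedback idea, written as generic Python rather than the Leeds group’s actual control software or thresholds: step the pipette towards the sample and stop as soon as the ion current falls below a set fraction of its far-from-surface value.

def approach_surface(read_current, step_down, setpoint_fraction=0.99, max_steps=10000):
    """Generic SICM-style approach routine; functions and numbers are placeholders."""
    baseline = read_current()                  # ion current with the tip far from the cell
    for _ in range(max_steps):
        if read_current() < setpoint_fraction * baseline:
            return True                        # current has dropped: tip is near the surface
        step_down()                            # move the nanopipette one increment closer
    return False                               # surface not detected within max_steps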

Longitudinal studies

As a proof-of-concept of the platform’s ability, the researchers conducted longitudinal nanobiopsy of a GBM cell (and its progeny), profiling gene expression changes over 72 h. They performed nanobiopsy prior to therapy, during treatment with radiotherapy and chemotherapy, and post treatment.

“Our method is robust and reproducible, allowing membrane penetration and nanoinjection across different cell types with distinct mechanical properties,” write co-principal investigators Lucy Stead and Paolo Actis. “The average success rate of nanoinjection is 0.89 ± 0.07. Intracellular mRNA is then extracted.”

The researchers investigated the response of GBM cells to the standard treatment of 2 Gy of radiation and 30 µM of temozolomide. They visually tracked individual cells and their progeny over 72 h, with 98% remaining in the microscope’s field-of-view during this time frame – an important factor when aiming to perform longitudinal analysis.

Fluorescence images of brain cancer cells

On day 1, the researchers biopsied, injected with a fluorescent dye and imaged each cell. On day 2, half of the cells received irradiation and chemotherapy, while the others served as controls. All cells were imaged on days 2 and 3, and biopsied and injected again on day 4.

In cells that underwent day-1 nanobiopsies, survival was similar between treated and untreated cells, and cell division rates were comparable in the two groups. After 72 h, 63% of untreated control (not biopsied) cells survived, compared with 25% of the treated, biopsied cells. There was no difference in the subsequent death rates of cell subtypes at day 1, irrespective of treatment. However, a much larger proportion of untreated cells switched subtype over time, or produced progeny with a different subtype, than the treated cells.

“This suggests that untreated cells are significantly more plastic over the three-day time course than treated cells,” the researchers write. “The cell phenotype scores of paired day 1 and longitudinal samples revealed that treated cells tend to maintain the same phenotype during therapy, while untreated cells are more likely to switch transcriptional state over 72 h, suggesting that treatment either induces or selects for high transcriptional stability in this established GBM cell line.”

“This is a significant breakthrough,” says Stead. “It is the first time that we have a technology where we can actually monitor the changes taking place after treatment, rather than just assume them. This type of technology is going to provide a layer of understanding that we simply never had before. And that new understanding and insight will lead to new weapons in our armoury against all types of cancer.”

The team is convinced that the ability of these versatile nanoprobes to access the intracellular environment with minimal disruption holds potential to “revolutionize molecular diagnostics, gene and cell therapies”.

“Our future work will focus on increasing the throughput of the technology so that more cells can be analysed,” Actis tells Physics World. “We are working to improve the protocols for analysing the RNA extracted from cells so that more biological information can be gathered. We are also very keen to study more advanced biological models of brain cancer based on patient-derived cells and organoids.”

Keith Burnett: ‘I have this absolute commitment that the broader we are, the more powerful physics will be’

Founded in 1920, the Institute of Physics has had some high-flying presidents over the years. Early luminaries included Ernest Rutherford, J J Thomson and Lawrence Bragg, while more recently the presidency has been held by the likes of Jocelyn Bell Burnell, Julia Higgins and Sheila Rowan. The current incumbent is Keith Burnett, an atomic physicist who spent more than a decade as vice-chancellor of the University of Sheffield in the UK.

He studied at the University of Oxford and worked at the University of Colorado at Boulder and Imperial College London, before returning to Oxford, where he was head of physics in the mid-2000s. But despite a career spent almost entirely at top universities, Burnett is not a distant, elite figure. He grew up in the valleys of South Wales and revels in the fact that his cousin Richie Burnett was World Darts Champion in 1995.

Physics World caught up with Burnett to find out more about his career and vision for physics.

What originally sparked your life-long interest in physics?

I grew up in a mining valley in South Wales, which was a wonderful place with a really cohesive community. It was at the time of the Apollo space programme – oh my god, the excitement. You could see the possibilities and I was fascinated by the idea of space. But one thing I did have was a wonderful teacher in school – Mr Cook. Also, my father worked for a small engineering company that made ceramics. So I just loved the idea of science from the very beginning.

You went on to study at Oxford, where you did a PhD in atomic physics. What attracted you to that field?

I had absolutely wonderful undergraduate lecturers and teachers – one being another Welshman, Claude Hurst. There was also Colin Webb, who later started Oxford Lasers. He was an amazing undergraduate teacher at Jesus College and he really inspired me. In fact, he then passed me on to one of his buddies, Derek Stacey. The group had been founded by Heini [Heinrich] Kuhn, who was an emigré scholar from Germany, and had a wonderful tradition in precision atomic physics.

Did the commercial side of physics ever appeal in terms of your own career?

Not so much, but I did really admire what Colin was doing because he was very early in terms of commercialization. People wanted the type of excimer lasers he was making in the lab. In fact I just got an e-mail from him. He’s retired but very pleased that Oxford Lasers has won a good contract for doing semiconductor work. So I very much admire the applications of lasers and optics.

You were around in the 1990s at the time Bose–Einstein condensation was first observed in the lab. It was a peak period for atomic physics, wasn’t it?

I was actually on the search committee that hired Carl Wieman to [the University of Colorado at] Boulder, where I was an assistant professor at the time. Carl joined the faculty and worked with Eric Cornell to make a Bose–Einstein condensate. I was tracking that very closely. It was an absolutely wonderful time because it went from “No-one thinks you can make it” to “Maybe they’ve made it” and then “Wow, it’s really big and juicy and we can do great stuff with it.”

Would you say Eric Cornell and Carl Wieman were worthy winners of the Nobel Prize for Physics in 2001?

Yes. They won it with Wolfgang Ketterle. It was a remarkable story with twists and turns because the person who developed the ideas behind [laser] cooling was Dan Kleppner at MIT. He was the first to develop hydrogen cooling with Tom Greytak. But what is really important is that the people at MIT taught other people elsewhere how to do it. Because of that, they progressed much faster and were able to learn from one another. It shows that if you don’t have trust and the ability to exchange ideas, everything slows down.

My cousin Richie was World Darts Champion in 1995. He’s the really well-known Burnett in the valley. Not me!

Keith Burnett

After spells at Imperial College and then back at Oxford, you became vice-chancellor at the University of Sheffield. How did that come about?

I was about 49 when they said “Will you be head of physics at Oxford?” And I thought “Yeah, that’ll be amazing!” So I did that and it was very perplexing but wonderful – an amazing department. I did that for a year. But the person who inspired me [to move to Sheffield] was actually an ex-president of the IOP – and the previous vice-chancellor of Sheffield – Gareth Roberts [who died in 2007]. He’s another Welshman, though from north Wales, which is very different from south Wales – they play soccer, not rugby – but still Welsh. I was very poor at rugby. But my cousin Richie was World Darts Champion in 1995. He’s the really well-known Burnett in the valley. Not me!

So what did Gareth Roberts say to you?

Well, I’d worked with Gareth at Oxford and he said “You should really think about it.” Sheffield is a city bathed in the traditions of making steel and metallurgy. So I thought I would love being part of the civic life of the city. I also felt this was a university that does wonderful things for its citizens and students. The other thing is that my daughter had gone to Sheffield before me – she’s an architect there so I always say I follow in my daughter’s footsteps.

As vice-chancellor at Sheffield, you were firmly opposed to the principle of student tuition fees. Why was that?

Higher education is not just for the individual. It has consequences for society and for business too. If you say “No, it’s just an individual choice whether someone goes to university and pays a fee”, well that can work to a certain extent. But you cannot then be sure you’ll have enough scientists to work in, say, industry or defence. As a country, we used to roughly balance the system in terms of where people went. But now it’s a free-for-all in terms of choice, which is bad if we need more people in science and engineering. Tuition fees also fundamentally change the relationship with students. I disagreed with fees when they came in and I still disagree with them now.

The UK university sector has expanded hugely over the last 20 years thanks to a huge rise in student numbers and the trebling of tuition fees in 2012. Has that been good or bad?

The big thing that happened during my time at Sheffield was the increase in student tuition fees [to £9000]. I was very much against the increase, which wasn’t a popular [position to hold] among many of my vice-chancellor colleagues. In fact, I remember being pressured by Number 10 to sign a letter with other Russell Group universities to support the rise. I knew it was going to be a major burden on households and we’re now in a situation where the UK has to write off £12bn [from students who never earn enough to pay their loans back]. We’ve got a very bad investment portfolio and the students have got debt. It’s been a disaster.

Large rectangular building with a glass facade divided into different-sized diamond shapes

Tuition fees haven’t risen for more than a decade now and many universities have come to rely on the much higher fees paid by international students. How has the growth in foreign students affected the higher-education sector?

International student fees used to be a top-up. When I was at Sheffield, we used them to build a new engineering teaching lab, known as the Diamond. But nowadays the income from international students is pretty much built into the fabric – in other words, without their fees you can’t run a university. We have some amazing physics departments in this country, but the tap that feeds them is actually undergraduate physicists, cross-subsidized by international students, especially from business schools, international relations and engineering. As a country, we need physics properly funded and less reliant on foreign students.

If you look at a place like Sheffield, students bring enormous benefits – vitality, money, inward investment

Keith Burnett

The rise in international students has also played a role in increasing immigration to the UK. Where do you stand in that debate?

If you look at a place like Sheffield, students bring enormous benefits – vitality, money, inward investment. Others may say “No, we don’t like students taking accommodation” and things of that sort. If you talk to experts in immigration, it’s far more neutral than people think. But the whole topic is inflammatory and it’s difficult to get a balanced discussion of the advantages and disadvantages. There are, though, some incredible physics departments in the UK – look at the number of companies working with the University of Bristol in its quantum tech. This is a big potential business long term.

After Sheffield, you became involved in the Schmidt Science Fellows scheme – what’s that all about?

It was an idea of [the US computer scientist] Stu Feldman, a long-term confidant of Eric and Wendy Schmidt – Eric being a former chief executive and chair of Google. Stu said “There ought to be a way in which people, once they’ve done their PhDs, can think more broadly rather than just carrying on in a particular thing.” How, in other words, can we identify people across the world who’ve got fantastic ideas and then give them some freedom to move? So we – our team at Rhodes House in Oxford – select people with exciting ideas and help them choose where they might go in the world.

What’s your role in the scheme?

My job is to mentor researchers in making this transition. Initially, I did all of the mentoring but now I have some colleagues. It can be all the way from handling financial issues to dealing with principal investigators to writing faculty applications. Over the last six years we’ve helped about 120 people across the world in different institutions. Some are now in national labs, while others have set up their own businesses. For me, it’s the most wonderful job because I get to hear the issues that early-career scientists have, such as using machine learning in all sorts of things – imaging biomolecules, precision drugs, everything.

What are the main challenges facing early-career researchers?

First and foremost, salaries. I think we’re in grave danger of underpaying our early-career scientists. We also need to do more to help people with their work–life balance. The Schmidt programme does have generous parental leave. There’s also the question of supporting and promoting people who work in interdisciplinary areas.

Three photos: a teacher and pupils with a robot; quantum computing abstract; pedestrians walking towards the UK Houses of Parliament

In October 2023 you started your two-year stint as IOP president. What are your priorities during your term in office?

The IOP has just launched its new five-year strategy and the big focus is the skills base of teachers and researchers. First, are we helping teachers enough – the people who help people get into physics? We need a strong pipeline of talent because physicists don’t just stay in academia, they move into finance, industry, policy.

Second, we are very interested in influencing science – especially the green economy. We have to explain that it’s physicists – working with engineers and chemists – who are at the core of efforts to tackle climate change.

We’re also thinking more about how to make membership of the IOP more useful and accessible. It’s not arrogance to think that someone with an awareness of physics is just that much better prepared for lots of things going on in the modern world.

How can members of the IOP get involved with helping put that strategy into practice?

Start by looking at the strategy, if you haven’t already. If you’re a member of a particular group or branch, then feed your ideas back to your representatives. Our influence as an institute is much more powerful if we’re the convenors and co-ordinators of a more general effort. We can’t do all the things, but our membership is big and strong. If you can’t find somebody, contact me.

You feel strongly about the need for the physics community to be more diverse. How do you see physics evolving over the next few decades?

There’s a wonderful book, After Nativism, that just came out by Ash Amin, who’s a trustee of the Nuffield Foundation, which I chair. He argues that many of the things needed to make a just, equitable and diverse society are not being advocated, with many parts of society backing away from these issues. But the younger generation is utterly committed to a future that’s more just, equitable and diverse. They’ve grown up freer of prejudice but also used to discussing these things more openly. They’re not interested in many of the divisions that people would see in terms of labels of any sort. Any labelling of people due to race, ethnicity, sexual proclivity – anything at all – is an anathema and I personally find that inspirational. I really find that inspirational.

As a profession, we are a long way off equity and have great deficits in terms of inclusion

Keith Burnett

How can the IOP help with such issues?

One of the things that the IOP can do is say “Well, what are the advantages of a society of that sort?” Some people may accuse us of being a bunch of “woke liberals”. We’re not. We’re just people who believe in justice and equity in society. But we’re going to have to work for it because, as a profession, we are a long way off equity and have great deficits in terms of inclusion. Going forward, we will have a younger generation who will care much less about these issues because they won’t see them. In fact, they’ll find it very strange that there was a time when the IOP didn’t represent society as a whole.

What are the benefits of a more equitable and inclusive physics community?

The advantages are huge. You know, if you exclude groups of people because of the labels you attribute to them, you’re “deleting” people who could be powerful, influential and helpful for physics. You’re just wasting people. I have this absolute commitment that the broader we are in terms of our people, the better, the more just and the more powerful we will be. I think our community wants that. Some won’t; some people might have a more traditional view of what society is. But it’s our duty and our incentive to say why we want a more just society – after all, it’s smarter, more powerful, more fun.

New photovoltaic 2D material breaks quantum efficiency record

Conventional solar cells have a maximum external quantum efficiency (EQE) of 100%: for every photon incident on the cell, they generate at most one photoexcited electron. In recent years, scientists have sought to improve on this by developing materials that “free up” more than one electron for every photon they absorb. A team led by physicist Chinedu Ekuma of Lehigh University in the US has now achieved this goal, producing a material with an EQE of up to 190% – nearly double that of silicon solar cells.
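External quantum efficiency is simply the number of electrons collected per incident photon, so it can be estimated from the measured photocurrent and the incident optical power at a given wavelength. The sketch below uses this textbook relation with illustrative numbers (not data from the Lehigh study):

H = 6.62607015e-34          # Planck constant (J s)
C = 2.99792458e8            # speed of light (m/s)
E_CHARGE = 1.602176634e-19  # elementary charge (C)

def external_quantum_efficiency(photocurrent_a, optical_power_w, wavelength_m):
    electrons_per_s = photocurrent_a / E_CHARGE
    photons_per_s = optical_power_w * wavelength_m / (H * C)
    return electrons_per_s / photons_per_s

# Illustrative numbers only: a cell delivering more than one electron per photon.
eqe = external_quantum_efficiency(photocurrent_a=1.2e-3, optical_power_w=1.0e-3,
                                  wavelength_m=785e-9)
print(f"EQE = {eqe:.0%}")   # anything above 100% signals carrier multiplication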

The team made the new compound by inserting copper atoms between atomically thin layers of germanium selenide (GeSe) and tin sulfide (SnS). The resulting material has the chemical formula CuxGeSe/SnS, and the researchers developed it by taking advantage of so-called van der Waals gaps. These atomically small gaps exist between layers of two-dimensional materials, and they form “pockets” into which other elements can be inserted (or “intercalated”) to tune the material’s properties.

Intermediate bandgap states

The Lehigh researchers attribute the material’s increased EQE to the presence of intermediate bandgap states. These distinct electronic energy levels arise within the material’s electronic structure in a way that enables them to absorb light very efficiently over a broad spectrum of solar radiation wavelengths. In the new material, these energy levels exist at around 0.78 and 1.26 electron volts (eV), which lie within the range over which the material can efficiently absorb sunlight.

The material works particularly well in the infrared and visible regions of the electromagnetic spectrum, producing, on average, nearly two photoexcited charge carriers (electrons and holes bound in quasiparticles known as excitons) for every incident photon. According to Ekuma, such “multiple exciton generation” materials can serve as the active layer within solar cell devices, where their performance is fundamentally governed by exciton physics. “This active layer is crucial for enhancing the solar cell’s efficiency by facilitating the generation and transport of excitons in the material,” Ekuma explains.

Further research needed for practical devices

The researchers used advanced computational models to optimize the thickness of the photoactive layer in the material. They calculated that its EQE can be enhanced by keeping the layer thin (in the so-called quasi-2D limit) so that quantum confinement is maintained. This confinement is key to efficient exciton generation and transport because it suppresses nonradiative recombination – a process in which electrons and holes have time to recombine instead of being whisked apart to produce useful current, Ekuma explains. “By maintaining quantum confinement, we preserve the material’s ability to effectively convert absorbed sunlight into electrical energy and operate at peak efficiency,” he says.

While the new material is a promising candidate for the development of next-generation, high-efficiency solar cells, the researchers acknowledge that further research will be needed before it can be integrated into existing solar energy systems. “We are now further exploring this family of intercalated materials and optimizing their efficiency via various materials engineering processes to this end,” Ekuma tells Physics World.

The study is detailed in Science Advances.

Nanofluidic memristors compute in brain-inspired logic circuits

A memristor that uses changes in ion concentrations and mechanical deformations to store information has been developed by researchers at EPFL in Lausanne, Switzerland. By connecting two of these devices, the researchers created the first logic circuit based on nanofluidic components. The new memristor could prove useful for neuromorphic computing, which tries to mimic the brain using electronic components.

In living organisms, neural architectures rely on flows of ions passing through tiny channels to regulate the transmission of information across the synapses that connect one neuron to another. This ionic approach is unlike the best artificial neural systems, which use electron currents to mimic these synapses. Building artificial nanofluidic neural networks could provide a closer analogy to real neural systems, and could also be more energy-efficient.

A memristor is a circuit element with a resistance (and conductance) that depends on the current that has previously passed through it – meaning that the device can store information. The memristor was first proposed in 1971, and since then researchers have had limited success in creating practical devices. Memristors are of great importance in neuromorphic computing because they can mimic the ability of biological synapses to store information.
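A generic toy model – illustrative only, not the physics of the EPFL device – makes the “resistance with memory” idea concrete: a state variable driven by the applied voltage sets the conductance, so sweeping the voltage sinusoidally traces out the characteristic pinched hysteresis loop in the current–voltage plane.

import numpy as np

# Generic voltage-controlled memristor toy model; all values are illustrative.
G_OFF, G_ON = 1e-6, 6e-5        # "off" and "on" conductances (siemens)
MU = 4.0                        # state-update rate (arbitrary units)

t = np.linspace(0.0, 2.0, 4000)
v = np.sin(2 * np.pi * t)       # sinusoidal drive voltage
w, current, dt = 0.0, np.empty_like(t), t[1] - t[0]
for i, vi in enumerate(v):
    w = float(np.clip(w + MU * vi * dt, 0.0, 1.0))  # memory: state integrates past voltage
    current[i] = (G_OFF + (G_ON - G_OFF) * w) * vi  # conductance set by the state variable
# Plotting current against v gives a "pinched" loop: it always passes through
# the origin but encloses area elsewhere, the signature of memristive behaviour.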

In this latest research, EPFL’s Théo Emmerich, Aleksandra Radenovic and their colleagues made their nanofluidic memristors using a liquid blister that expands or contracts as currents of solvated ions flow into or out of it, changing its conductance.

Iconic and ionic

In 2023, researchers took a significant step toward ion-based neuromorphic computing when they discovered memory effects in two nanofluidic devices that regulated ion transport across nanoscale channels. When subjected to a time-varying voltage, these devices displayed a lagging change in current and conductance. This is a memristor’s characteristic “pinched” hysteresis loop. However, the systems had weak memory performance, and were delicate to fabricate. Furthermore, the mechanism responsible for the memory effect was unclear.

But this has not deterred the EPFL team, as Emmerich explains: “We wanted to show how this nascent field could be complementary to nanoelectronics and could lead to real-world computing applications in the future”.

To create their device, the EPFL researchers fabricated a 20 micron-by-20 micron silicon nitride membrane atop a silicon chip, with a 100 nm-diameter pore at its centre. On this chip, they used evaporative deposition techniques to create 10-nm-diameter palladium islands around which fluid could flow. Finally, they added a 50–150 nm thick graphite layer to create channels that led to the pore.

Tiny blister

Upon dipping the device into an electrolyte solution and applying a positive voltage (0.4–1.0 V), the researchers observed the formation of a micron-scale blister between the silicon nitride and the graphite above the central pore. They concluded that ions travelled through channels and converged at the centre, increasing pressure there and leading to blister formation. This blister acted as a resistive “short circuit” that increased the device’s conductance, placing it in the “on” state. Upon applying a negative voltage of the same magnitude, the blister deflated and the conductance decreased, placing the device in the “off” state.

Because the blister took time to deflate following the voltage shut-off, the device remembered its previous state. “Our optical observation showed the mechano-ionic origin of the memory,” says EPFL’s Nathan Ronceray.

Measurements of the current flowing through the device before and after the voltage reset showed that the device operated with a conductance ratio of up to 60 on a timescale of 1–2 s, indicating a memory effect two orders of magnitude greater than previous designs. Emmerich adds, “This is the first time that we observe such a strong memristive behaviour in a nanofluidic device, which also has a scalable fabrication process”.

To create a logic circuit, the team connected two of their devices in parallel with a variable electronic resistor. The two devices thus communicated with each other through this resistor to perform a logic operation. In particular, the switching of one device was driven by the conductance state of the other.

Logical communication

Until now, Emmerich says, nanofluidic devices have been operated and measured independently from each other. He adds that the new devices “can now communicate to realize logic computations.”

Iris Agresti, who is developing quantum memristors at the University of Vienna, says that while this is not the first implementation of a nanofluidic memristor, the novelty is showing how multiple devices can be connected to perform controlled operations. “This implies that the behaviour of one of the devices depends on the other,” she says.

The next step, the EPFL researchers say, is to build nanofluidic neural networks in which memristive units are wired together with water channels. The goal is to create circuits that can perform simple computing tasks such as pattern recognition or matrix multiplication. “We dream of building electrolytic computers able to compute with their electronic counterparts,” says Radenovic.

That’s a long-term and ambitious goal. But such an approach presents two key advantages over electronics. First, the systems would avoid the overheating typically associated with electrical wires, because they would use water as both the wires and the coolant. Second, they could benefit from using different ions to execute complete tasks on par with living organisms. Moreover, Agresti says, artificial neural networks with nanofluidic components promise lower energy consumption.

Yanbo Xie, a nanofluidics expert at Northwestern Polytechnical University in China, points out that the memristor is a critical component for a neuromorphic computer chip and plays a similar role to a transistor in a CPU. The EPFL logic circuit could be “a fundamental building block for future aqueous computing machines,” he says. Juan Bisquert, an applied physicist at the University of James I in Castello, Spain, agrees. The devices “show a robust response,” he says, and combining them to implement a Boolean logic operation “paves the way for neuromorphic systems based on fully liquid circuits.”

The work is described in Nature Electronics.

Why we still need a CERN for climate change

It was a scorcher last year. Land and sea temperatures were up to 0.2 °C higher every single month in the second half of 2023, with these warm anomalies continuing into 2024. We know the world is warming, but the sudden heat spike had not been predicted. As NASA climate scientist Gavin Schmidt wrote in Nature recently: “It’s humbling and a bit worrying to admit that no year has confounded climate scientists’ predictive capabilities more than 2023 has.”

As Schmidt went on to explain, a spell of record-breaking warmth had been deemed “unlikely” despite 2023 being an El Niño year, where the relatively cool waters in the central and eastern equatorial Pacific Ocean are replaced with warmer waters. Trouble is, the complex interactions between atmospheric deep convection and equatorial modes of ocean variability, which lie behind El Niño, are poorly resolved in conventional climate models.

Our inability to simulate El Niño properly with current climate models (J. Climate 10.1175/JCLI-D-21-0648.1) is symptomatic of a much bigger problem. In 2011 I argued that contemporary climate models were not good enough to simulate the changing nature of weather extremes such as droughts, heat waves and floods (see “A CERN for climate change” March 2011 p13). With grid-point spacings typically around 100 km, these models provide a blurred, distorted vision of the future climate. For variables like rainfall, the systematic errors associated with such low spatial resolution are larger than the climate-change signals that the models attempt to predict.

Reliable climate models are vital if societies are to adapt to climate change, assess the urgency of reaching net-zero or implement geoengineering solutions if things get really bad. Yet how can we adapt if we don’t know whether droughts, heat waves, storms or floods pose the greater threat? How can we assess the urgency of net-zero if models cannot simulate “tipping points”? And how can we agree on potential geoengineering solutions if we cannot reliably assess whether spraying aerosols into the stratosphere would weaken the monsoons or cut the moisture supply to the tropical rainforests? Climate modellers have to take the issue of model inadequacy much more seriously if they wish to provide society with reliable, actionable information about climate change.

I concluded in 2011 that we needed to develop global climate models with a spatial resolution of around 1 km (and a compatible temporal resolution), and that the only way to achieve this was to pool human and computing resources in one or more internationally federated institutes. In other words, we need a “CERN for climate change” – an effort inspired by the particle-physics facility near Geneva, which has become an emblem of international collaboration and progress.
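A quick back-of-the-envelope scaling argument – my own rough numbers, not figures taken from any particular model – shows the size of the jump involved. Refining the horizontal grid from 100 km to 1 km multiplies the number of grid columns by roughly 10⁴, and the shorter time step that a finer grid demands adds another factor of about 10², before any extra vertical levels or added physics are counted.

```python
# Rough scaling estimate (illustrative numbers only, not from any specific model):
# the computational cost of going from ~100 km to ~1 km global grid spacing.

coarse_dx_km = 100.0     # typical grid spacing of current global models
fine_dx_km = 1.0         # target kilometre-scale grid spacing

refinement = coarse_dx_km / fine_dx_km   # 100x finer in each horizontal direction
horizontal_factor = refinement ** 2      # ~1e4 more grid columns
timestep_factor = refinement             # stability forces the time step to shrink with the grid
total_factor = horizontal_factor * timestep_factor

print(f"~{total_factor:.0e}x more computation per simulated year")   # ~1e+06x
```

A factor of around a million in computing cost is the gap that pooled, exascale-class resources would have to close.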

That was 13 years ago and since then nature has spoken with a vengeance. We have seen unprecedented heat waves, storms and floods, so much so that the World Economic Forum rated “extreme weather” as the most likely global event to trigger an economic crisis in the coming years. As prominent climate scientist Michael Mann noted in 2021 following a devastating flood in Northern Europe: “The climate-change signal is emerging from the noise faster than the models predicted.” That view was backed by a briefing note from the Royal Society for the COP26 climate-change meeting held in Glasgow in 2021, which stated that the inability to simulate physical processes in fine detail accounts for “the most significant uncertainties in future climate, especially at the regional and local levels”.

Yet modelling improvements have not kept pace with the changing nature of these real-world extremes. While many national climate modelling centres have finally started work on high-resolution models, on current trends it will take until the second half of the century to reach kilometre-scale resolution. That will be too late to be useful in tackling climate change (see figure below), and urgency is needed now more than ever.

A climate EVE

Pooling human and computing resources internationally is a solution that seems obvious. In a review of UK science in 2023, the Nobel-prize winner Paul Nurse commented that “there are research areas of global strategic importance where new multi-nationally funded institutes or international research infrastructures could be contemplated, an obvious example being an institute of climate change built on the EMBL [European Molecular Biology Laboratory] model”. He added that “such institutes are powerful tools for multinational collaboration and bring great benefit not only internationally but also for the host nation”.

So, why hasn’t it happened? Some say that we don’t need more science and should instead spend the money helping those who are already suffering from climate change. That is true, but computer models have helped vulnerable societies massively over the years. Before the 1980s, poorly forecast tropical cyclones could kill hundreds of thousands of people in vulnerable regions. Now, thanks to improved model resolution, excellent week-ahead predictions can be made (and communicated), and it is rare for such storms to kill more than a few tens of people.

Figure: how the grid spacing of global climate models has decreased over time

High-resolution climate models will help target billions of investment dollars to allow vulnerable societies to become resilient to regionally specific types of future extreme weather. Without this information, governments could squander vast amounts of money on maladaptation. Indeed, scientists from the global south already complain that they don’t have actionable information from contemporary models to make informed decisions.

Others say that different models are necessary so that when they all agree, we can be confident in their predictions. However, the current generation of climate models is not diverse at all. They all assume that critically important sub-grid climatic processes like deep convection, flow over orography and ocean mixing by mesoscale eddies can be parametrized by simple formulae. This assumption is false and is the origin of common systematic errors in contemporary models. It is better to represent model uncertainty with more scientifically sound methodologies.

A shift, however, could be on the horizon. Last year a climate-modelling summit was held in Berlin to kick-start the international project Earth Visualisation Engines (EVE). It aims not only to create high-resolution models but also to bring scientists from the global north and south together to produce accurate, reliable and actionable climate information.

Like the EMBL, EVE is planned as a series of highly interconnected nodes, each with dedicated exascale computing capability, serving all of global society. The funding for each node – about $300m per year – is small compared with the trillions of dollars of loss and damage that climate change will cause.

Hopefully, in another 13 years’ time EVE or something similar will be producing the reliable climate predictions that societies around the globe now desperately need. If not, then I fear it will be too late.

Looking for dark matter differently

Dark matter makes up about 85 percent of the universe’s total matter, and cosmologists believe it played a major role in the formation of galaxies. We know the location of this so-called galactic dark matter thanks to astronomical surveys that map how light from distant galaxies bends as it travels towards us. But so far, efforts to detect dark matter trapped within the Earth’s gravitational field have come up empty-handed, even though this type of dark matter – known as thermalized dark matter – should be present in greater quantities.

The problem is that thermalized dark matter travels much more slowly than galactic dark matter, meaning its energy may be too low for conventional instruments to detect. Physicists at the SLAC National Accelerator Laboratory in the US have now proposed an alternative: searching for thermalized dark matter with quantum sensors made from superconducting quantum bits (qubits).

An entirely new approach

The idea for the new method came from SLAC’s Noah Kurinsky, who was working on re-designing transmon qubits as active sensors for photons and phonons. Transmon qubits need to be cooled to temperatures near absolute zero (−273 °C) before they become stable enough to store information, but even at these extremely low temperatures, energy often re-enters the system and disrupts the qubits’ quantum states. The unwanted energy is typically blamed on imperfect cooling apparatus or some source of heat in the environment, but it occurred to Kurinsky that it could have a much more interesting origin: “What if we actually have a perfectly cold system, and the reason we can’t cool it down effectively is because it’s constantly being bombarded by dark matter?”

While Kurinsky was pondering this novel possibility, his SLAC colleague Rebecca Leane was developing a new framework for calculating the expected density of dark matter inside Earth. According to these new calculations, which Leane performed with Anirban Das (now a postdoctoral researcher at Seoul National University, Korea), this local dark-matter density could be extremely high at the Earth’s surface – much higher than previously thought.

“Das and I had been discussing what possible low threshold devices could probe this high predicted dark matter density, but with little previous experience in this area, we turned to Kurinsky for vital input,” Leane explains. “Das then performed scattering calculations using new tools that allow the dark matter scattering rate to be calculated using the phonon (lattice vibration) structure of a given material.”

Low energy threshold

The researchers calculated that a quantum dark-matter sensor would activate at extremely low energies of just one thousandth of an electronvolt (1 meV). This threshold is much lower than that of any comparable dark matter detector, and it implies that a quantum dark-matter sensor could detect low-energy galactic dark matter as well as thermalized dark matter particles trapped around the Earth.
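To see why such a low threshold matters, consider a rough estimate – illustrative only, and not the authors’ calculation; the 1 GeV/c² particle mass is simply an assumption made for the sake of the numbers. A galactic dark-matter particle sweeping past at a typical ~230 km/s carries a few hundred electronvolts of kinetic energy, whereas one thermalized to the Earth’s ~300 K carries only a few tens of millielectronvolts, of which only a fraction would be deposited in any single scattering event.

```python
# Illustrative orders of magnitude only (not the authors' calculation).
# The 1 GeV/c^2 dark-matter mass is an assumption made purely for this estimate.

K_B = 8.617e-5         # Boltzmann constant in eV per kelvin
C = 3.0e8              # speed of light in m/s
M_CHI_EV = 1.0e9       # assumed dark-matter rest energy: 1 GeV

v_galactic = 230e3     # typical galactic dark-matter speed in m/s
E_galactic = 0.5 * M_CHI_EV * (v_galactic / C) ** 2   # non-relativistic kinetic energy

T_earth = 300.0        # temperature (K) of dark matter thermalized with the Earth
E_thermal = 1.5 * K_B * T_earth                       # (3/2) k_B T

print(f"galactic:    ~{E_galactic:.0f} eV")           # ~294 eV
print(f"thermalized: ~{E_thermal * 1e3:.0f} meV")     # ~39 meV
```

With only part of that energy handed over in each scattering, a detector that fires at 1 meV is what would bring thermalized dark matter within reach.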

The researchers acknowledge that much work remains before such a detector ever sees the light of day. For one, they will have to identify the best material for making it. “We were looking at aluminium to start with, and that’s just because that’s probably the best characterized material that’s been used for detectors so far,” Leane says. “But it could turn out that for the sort of mass range we’re looking at, and the sort of detector we want to use, maybe there’s a better material.”

The researchers now aim to extend their results to a broader class of dark matter models. “On the experimental side, Kurinsky’s lab is testing the first round of purpose-built sensors that aim to build better models of quasiparticle generation, recombination and detection and study the thermalization dynamics of quasiparticles in qubits, something that is little understood,” Leane tells Physics World. “Quasiparticles in a superconductor seem to cool much less efficiently than previously thought, but as these dynamics are calibrated and modelled better, the results will become less uncertain and we may understand how to make more sensitive devices.”

The study is detailed in Physical Review Letters.
