
A map of quantum physics, sound of a wave-function collapse, famous physicists as dogs

Above is a video by the physicist, YouTuber and quantum cartographer Dominic Walliman, who describes a map of quantum physics that he has created. The map introduces quantum mechanics by providing bite-sized chunks of information that are organized in a way that gives an overview of the quantum world and the academic disciplines that study it. Walliman then goes on to look at the map – and quantum mechanics – in much more detail.

“Could we hear the pop of a wave-function collapse, and if so, what would it sound like?” asks Antoine Tilloy of the Max Planck Institute for Quantum Optics in Germany. To find out why the “sound is disappointingly banal, indistinguishable from any other click,” read his preprint on arXiv: “The subtle sound of quantum jumps”.

The Internet and the World Wide Web were invented for very serious reasons of national defence and scientific communication – but now they are the realm of “Famous physicists imagined as dogs”. I’m not saying that’s necessarily a bad thing, so enjoy.

Magnetic topological insulators change phase under pressure

In 2019, researchers discovered that two intrinsically magnetic materials, MnBi2Te4 and MnBi4Te7, can behave as topological insulators – materials that are electrically insulating in bulk, but conduct electricity well on the surface. A team led by Yangpeng Qi from ShanghaiTech University has now demonstrated that it is possible to induce phase transitions in these layered heterostructures by applying high pressures to them, thereby tuning their crystal structure and electronic properties. The results could further our understanding of the interplay between magnetism and topology in these and related materials, they say.

In addition to their unusual conducting behaviour, topological insulators can also exhibit various exotic quantum effects, including the quantum anomalous Hall effect (QAHE) and quantum phases such as axion insulating states. However, many of these effects only appear when the materials are doped with magnetic impurities. The drawback to such doping is that it also introduces disorder in the material’s crystal lattice, making it difficult for researchers to study any quantum phenomena present.

Similar crystal structure to Bi2Te3

Intrinsically magnetic topological insulators such as MnBi2Te4 and MnBi4Te7 avoid this problem, crystallizing instead into an orderly, layered structure similar to that of Bi2Te3, which is a typical topological insulator at ambient conditions. In their work, Qi and colleagues applied pressures of up to 50 GPa to MnBi2Te4 and MnBi4Te7 using a diamond anvil cell apparatus that squeezes a sample between the flattened tips of gem-grade diamond crystals. The small size of the tips (400 microns) makes it possible to reach such high pressures with only small applied forces.
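The pressures involved come down to simple arithmetic: pressure is force divided by area, so a tip just 400 microns across converts a modest force into tens of gigapascals. A back-of-the-envelope sketch (the numbers below are illustrative, not taken from the paper):

```python
import math

# Back-of-the-envelope estimate: pressure = force / area, so the tiny
# flattened diamond tip (the "culet") turns a modest force into gigapascals.
culet_diameter_m = 400e-6                    # 400-micron tip, as quoted above
area_m2 = math.pi * (culet_diameter_m / 2) ** 2

target_pressure_pa = 50e9                    # 50 GPa, the maximum applied
force_n = target_pressure_pa * area_m2

print(f"culet area: {area_m2:.2e} m^2")                       # ~1.26e-07 m^2
print(f"force needed for 50 GPa: {force_n / 1000:.1f} kN")    # ~6.3 kN
```

A few kilonewtons is easily supplied by the screws of a benchtop cell, which is why diamond anvils dominate high-pressure research.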

The researchers performed high-pressure resistivity measurements within this cell, as well as in situ high-pressure Raman spectroscopy and synchrotron X-ray diffraction. The latter experiments were performed at the Advanced Photon Source and the Shanghai Synchrotron Radiation Facility.

Antiferromagnetism gradually disappears as pressure increases

Qi and colleagues found that the antiferromagnetism normally displayed by MnBi2Te4 and MnBi4Te7 gradually disappears as pressure increases, while the materials' conductance and crystal structure also change dramatically. However, through ab initio band structure calculations, the team discovered that applied pressure does not change the topological nature of the two systems until a structural phase transition occurs at a pressure of more than 16 GPa. They also observed that the two materials' bulk and surface states respond differently to applied pressure, producing two types of macroscopic resistivity that compete with each other.

Since MnBi2Te4 and MnBi4Te7 are layered materials, the researchers expected that their electrical transport and magnetic properties would be sensitive to the competition between interlayer and intralayer interactions – which can indeed be tuned by adjusting external pressure. Their resistivity measurements at pressures up to 34 GPa revealed metal-like conduction in the low-pressure range (around 3 GPa), with a metal–semiconductor transition occurring above about 12 GPa. They also noted that the materials' resistivity undergoes a metallization at pressures above around 16 GPa and does not change significantly with further increases in pressure.

“Our combined theoretical and experimental research establishes MnBi2Te4 and MnBi4Te7 as highly tunable magnetic topological insulators, in which phase transitions and new electronic states emerge upon compression,” Qi tells Physics World. “We now plan to search for other layered topological materials that behave in the same way.”

The work is reported in Chinese Physics Letters.

‘Cartwheeling’ light reveals new type of polarized light-matter interaction

Technologies that rely on interactions between matter and polarized light usually stick to the well-understood effects of linear or circular polarization. Researchers at Rice University in the US have now opened the door to fresh approaches by studying how matter reacts to an additional form of polarization. This form, known as “trochoidal” polarization, is characterized by a “cartwheeling” motion in light’s electric field that can occur in either a clockwise or anticlockwise direction. Since matter can distinguish between these two directions, trochoidal dichroism could be used to develop novel spectroscopic tools.

Circularly-polarized light, in which the direction of the electromagnetic field rotates in a helical or “corkscrew-like” fashion as it propagates through space, is commonly used to study the conformation of small biomolecules such as proteins, DNA and amino acids. These studies are possible because such molecules are chiral – that is, their structures have a “handedness” that makes them absorb left- and right-circularly-polarized light differently, a phenomenon known as dichroism. Linear polarization is also widely used, for example to control reflections and glare in sunglasses.

The light polarization the Rice researchers studied is very different from these more familiar types. Rather than following a helical progression, the direction of the electromagnetic field in trochoidal light turns end-over-end as it propagates, rotating either clockwise or anticlockwise as it goes. “Rather like a rolling hula hoop,” explains study lead author Lauren McCarthy.

Measuring light scattered from plasmonic nanorods

The Rice researchers generated their trochoidal light waves by total internal reflection of linearly polarized light at an air-glass interface. This type of reflection produces a polarized evanescent wave – a light wave that exists only near a surface, unlike an ordinary, free light wave – and McCarthy and colleagues were able to tune its trochoidal properties by changing the polarization of the incident light.
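To picture what distinguishes trochoidal from circular polarization, it helps to write the field down. In a p-polarized evanescent wave, the field components along the propagation direction and the surface normal oscillate 90° out of phase, so the field vector rotates in the plane of propagation rather than corkscrewing around it. A minimal numpy sketch (illustrative only, not the Rice group's code, with an arbitrary component ratio):

```python
import numpy as np

# For a p-polarized evanescent wave, the components along the propagation
# direction (x) and the surface normal (z) are 90 degrees out of phase, so
# the field vector "cartwheels" in the x-z propagation plane.
t = np.linspace(0, 2 * np.pi, 9)     # one optical cycle
Ex = np.cos(t)                        # in-phase component
Ez = 0.5 * np.sin(t)                  # quadrature component (arbitrary ratio)

# The tip of (Ex, Ez) traces an ellipse in the x-z plane; its sense of
# rotation (clockwise vs anticlockwise) is what the nanorod pairs distinguish.
angles = np.unwrap(np.arctan2(Ez, Ex))
print("net rotation over one cycle (rad):", angles[-1] - angles[0])  # ~2*pi
```

Flipping the sign of the quadrature component reverses the sense of rotation, which is the handedness that trochoidal dichroism probes.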

The team then studied how pairs of particles known as gold plasmonic nanorods scattered this light. These particles are made from a metallic nanostructure that can be fine-tuned to absorb and scatter light of different frequencies thanks to the physics of plasmons – quasiparticles that arise when light interacts with the electrons in the metal and causes them to oscillate.

While trochoidal light polarization has been observed in previous experiments, McCarthy and colleagues are the first to use plasmonic nanoparticles to study how it interacts with matter. “Using a dark-field microscope and spectrometer, we found that clockwise and anticlockwise trochoidal polarizations interacted differently with pairs of plasmonic nanorods oriented at 90° from each other,” she tells Physics World. “Indeed, the wavelengths of light the nanorod pairs scattered changed when we switched the trochoidal polarizations from clockwise to anticlockwise, which is a well-understood indicator of dichroism.”

Spectroscopic signatures

Previous studies on trochoidal polarization have focused on properties such as the unique transverse spin angular momentum of the light’s electric field, which can enable light propagation in one direction, McCarthy adds. “Our study has instead looked at the spectroscopic signatures of matter irradiated with clockwise and anticlockwise light polarizations.”

Members of the team, who report their work in PNAS, say that trochoidal dichroism could aid the development of spectroscopic techniques that are complementary to established linear and circular dichroism spectroscopies. “Ultimately, dichroism is an optical indicator of molecular geometry,” McCarthy says. “Trochoidal polarizations, owing to their cartwheeling nature, would be ideally suited to probe cartwheeling charge motion in molecules, while circular dichroisms could probe helical charge motion.”

The researchers would now like to observe signatures of trochoidal dichroism in molecular systems, such as certain light-harvesting molecular antennas that feature charge motion that is both cartwheeling and helical. Such structures are expected to be particularly sensitive to trochoidal dichroism.

Artificial stupidity

Not so long ago, machine learning was a novelty in physics. Although physicists have always been quick to adopt new computing techniques, the early problems that this variant of artificial intelligence (AI) could solve – such as recognizing handwriting, identifying cats in photos and beating humans at backgammon – were not the sort of problems that got physicists out of bed in the morning. Consequently, the term “machine learning” did not appear on the Physics World website until 2009, and for several years afterwards it cropped up only sporadically.

In the mid-2010s, though, the situation began to change. Medical physicists discovered that training a machine to identify cats was not so different from training it to identify cancers. Materials scientists learned that AIs could optimize the properties of new alloys, not just backgammon scores. Before long, the rest of the physics community was finding reasons to join the machine-learning party. A quick scan of the Physics World archive shows that since 2017, physicists have used machine learning to, among other things, enhance optical storage, stabilize synchrotron beams, predict cancer evolution, improve femtosecond laser patterning, spot gravitational waves and find the best LED phosphor in a list of 100,000 compounds. Indeed, machine-learning applications have spread so rapidly that they are no longer remarkable. Soon, they may even be routine – a sure sign of a technology’s success.

All of which makes this an excellent time for physicists to pick up a copy of You Look Like a Thing and I Love You – a deft, informative and often screamingly funny primer on the ways that machine learning can (and often does) go wrong. And by “go wrong”, author Janelle Shane isn’t talking about machines becoming hyper-intelligent and overthrowing the puny humans who invented them. In fact, the opposite is true: most of AI’s problems arise not because algorithms are too intelligent, but because they are approximately as bright as an earthworm.

Shane is well placed to comment on algorithms’ daft behaviour. Since 2016 she has regularly entertained readers of her blog AI Weirdness by giving neural networks (a type of machine-learning algorithm) a body of text and asking them to produce new text in a similar style. Many useful AIs are “trained” in exactly this fashion, but Shane’s peculiar genius is to feed her algorithms cultural oddments such as paint colours, recipes and ice-cream flavours, rather than (say) crystallographic data. The results – a warm shade of pink the neural network dubs “Stanky Bean”; a recipe for “clam frosting”; and (my personal favourite) an ice-cream flavour called “Necrostar with Chocolate Person” – are eccentric and often endearing. The book’s title is another example: apparently, “you look like a thing and I love you” is an algorithm’s idea of a great pick-up line. Frankly, I’ve heard worse.

A book full of such slip-ups would be worth reading purely for comedic value, but these bloopers have points as well as punchlines. “Necrostar with Chocolate Person”, for example, is the logical result of taking an AI that knows how to generate names of heavy-metal bands and asking it to generate ice cream flavours instead. This process is known as transfer learning, and it’s a common shortcut for AI developers, saving hours of computing time that would otherwise be wasted in teaching the algorithm to generate words from scratch. Similarly, recipes for clam frosting exist in part because AIs don’t “understand” food like a human does, but also because they have short memories. Hence, by the time the algorithm reaches the last ingredient in a recipe, it’s long since forgotten the first. AI amnesia also explains why algorithms can (and do) generate sentence-by-sentence summaries of sporting matches, but book reviews are beyond them – at least for now.
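The short-memory problem is easy to reproduce at home. Shane's generators are neural networks, but even a toy character-level Markov chain – which by construction remembers only the last few characters – captures the flavour of learning names from a corpus (the training list below is made up for illustration):

```python
import random
from collections import defaultdict

# Toy analogue (not Shane's actual neural networks): a character-level
# Markov chain trained on a handful of made-up flavour names. Like a
# short-memory AI, it only "remembers" the last k characters, so
# long-range coherence is impossible by construction.
def train(corpus, k=2):
    model = defaultdict(list)
    for name in corpus:
        padded = "^" * k + name + "$"        # ^ marks the start, $ the end
        for i in range(len(padded) - k):
            model[padded[i:i + k]].append(padded[i + k])
    return model

def generate(model, k=2, rng=random.Random(0)):
    state, out = "^" * k, []
    while len(out) < 40:                     # cap length in case the chain cycles
        ch = rng.choice(model[state])
        if ch == "$":
            break
        out.append(ch)
        state = state[1:] + ch
    return "".join(out)

flavours = ["chocolate", "strawberry", "caramel", "pistachio", "peppermint"]
model = train(flavours)
print(generate(model))                       # splices fragments of the training names
```

Because the model only ever sees a two-character window, it happily splices fragments of different names together; scale that forgetfulness up and you get clam frosting.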

The most fascinating – and serious – problem with AIs is not their stupidity or forgetfulness, though. It’s their propensity to behave in ways their creators never intended. Shane illustrates this with a chapter on training simulated robots to walk. The idea is to start with a robot that can barely wriggle, give it a “goal” and a mechanism that rewards progress, and allow the AI to evolve ever-more-efficient propulsion methods. But whereas real evolution has produced several viable means of locomotion, an AI given this challenge is extremely likely to cheat. A robot asked to move from A to B, for example, may simply grow very tall and fall over, so that its top lands nearer to its goal. If the human programmer rules out this class of “solutions”, the robot may evolve an awkward spinning gait or exploit flaws in the simulation’s physics to teleport itself to the new location. As Shane notes, “It’s as if the [walking] games were being played by very literal-minded toddlers.”

Robots that fall over are amusing. Algorithms that preferentially reject job applications from women are not. Both are real examples; both stem from AI’s love of shortcuts. Picking the best person for a job is hard, even (especially?) for a computer. However, if the AI has been trained on successful CVs from a company with a pre-existing diversity problem, it may find it can boost its chances of picking the “right” applicants if it rejects CVs that mention women’s colleges or girls’ soccer.

In some ways, AIs that learn undesirable lessons from biased data are merely the latest example of the old programming maxim “garbage in, garbage out”. But there is a twist. Unlike human-authored code, machine-learning algorithms are a black box. You can’t ask an AI why it thinks “Stanky Bean” is a desirable name for a paint colour, or why it treats a loan applicant from a majority-Black neighbourhood as a higher credit risk. It won’t be able to tell you. What you can do, though, is educate yourself about the ways machine learning can fail, and for this, You Look Like a Thing and I Love You is an excellent place to start. As Shane puts it, “When you ask an AI to draw a cat or write a joke, its mistakes are the same sorts of mistakes it makes when processing fingerprints or sorting medical images, except it’s glaringly obvious that something’s gone wrong when the cat has six legs and the joke has no punchline. Plus, it’s really hilarious.”

Bending hairs and compliant microstructures make razor blades dull

New insights into why a hard steel razor blade is dulled by cutting soft hairs have been gained by a trio of researchers at the Massachusetts Institute of Technology. Gianluca Roscioli, Seyedeh Mohadeseh Taheri-Mousavi and Cemal Cem Tasan did a series of experiments that recreate the shaving process and found that dulling is a result of microscopic variations in the structure of a blade and the angle at which a cut is made. The team suggests that nanoscale improvements to blades could boost the performance of cutting tools.

Most modern cutting edges are made of hardened steel, which is much harder than commonly cut materials such as hair and food. Indeed, the steel used to make razors is more than 50 times harder than human hair. As a result, scientists have long been puzzled as to exactly why such blades dull so rapidly after cutting seemingly soft materials.

It had long been assumed that sharp edges are dulled by basic wear mechanisms such as the cracking of brittle steel surfaces and the rounding of edges. However, little had been known about how these processes relate to the complex ways that blades and materials interact during the cutting process on a microscopic scale.

Roscioli and colleagues investigated this process by using an electron microscope to observe how commercial razor blades cut individual hairs and bunches of hair in real time. They found that two things affected how the blades were dulled by cutting the hair.

Perpendicular shear

When a razor blade slices through a hair that is fixed to the skin, the hair bends away from the blade – thus changing the cutting angle. At some angles, the blade is subject to a large shear force that is perpendicular to the sharp edge. The team believes this causes the deformation and chipping of the blade, because this damage was not observed in experiments where the cutting angle was controlled to eliminate perpendicular shear.

The second thing that appeared to affect the dulling process is microstructural variation along the edge of the blade. A steel blade is made of tiny microstructures, some of which are stiff and some of which are compliant. Normally, this composition is advantageous because it prevents the propagation of cracks. But the material can be damaged if a compliant region of the blade bears the brunt of the cutting. In the razor blades studied by the team, the chances of this occurring depended both on the size of the microstructures and on where the boundary between stiff and compliant regions fell within the cross section of a hair.

The team concludes that razor blades and other sharp tools could be improved by using more homogeneous microstructures at the cutting edge. This could be achieved using nanostructured alloys, for example.

The research is described in Science.

Quantum dots track two-dimensional diffusion in cells

Quantum dots diffuse within living cells in a nearly two-dimensional fashion. This result, which was obtained using a new 3D microscopy technique that can track single particles, sheds fresh light on intracellular diffusion – a process that is critical for moving molecules around the cell and for mediating other important activities. According to study leader Hui Li, a biophysicist at the Chinese Academy of Sciences in Beijing and Beijing Normal University, the 2D motion he and his colleagues observed is robust and stems from the complex architectures of the flat “adherent” biological cells they studied.

Quantum dots make ideal probes for studying intracellular diffusion in living cells. They are similar in size to intracellular macromolecules and can be made to mimic biological materials relatively easily, by coating their surfaces with organic molecules. Previous studies, however, relied mainly on two-dimensional measurements of their movement, with the assumption that three-dimensional diffusion is an extension of 2D diffusion and is isotropic.

The new work reveals that diffusion is in fact highly anisotropic, thanks to the heterogeneous architectures of various cell structures. From the quasi-2D diffusion behaviour they observed, Li and colleagues infer the presence of planar structures within the cytoplasm – the thick solution composed mainly of water, salt and proteins that fills each cell and is enclosed by the cell membrane. They also suggest that these planar structures provide a means of rapidly and efficiently transporting molecules by diffusion within the cells.

3D single-particle tracking microscopy

Li and colleagues obtained their result using an extension of a 2D single-particle tracking (SPT) instrument that they developed in 2015. Like its predecessor, the new 3D SPT relies on tracking single quantum dots in “adherent” cells – one of the most common types of biological cell, and the model of choice for cellular studies.

While the earlier method was only able to measure the positions of particles in the lateral (x and y) directions, it nevertheless revealed heterogeneous and compartmentalized diffusion behaviour in the endoplasmic reticulum – a cellular structure that plays a major role in protein synthesis, folding and transport. It could not, however, show how the dots actually diffused through the cytoplasm in all three directions in space. The new method overcomes this shortcoming because it can measure the axial (z-direction) positions of the particles as well as their lateral ones – with a resolution as small as 35 nm.

Joining up the dots

To improve on their 2D SPT microscope, the Beijing researchers had to construct two additional components for it: a focus-locking apparatus to eliminate vertical drift; and a two-focal imaging apparatus to measure axial positions of particles from off-focused diffraction rings.

In their experiments, the researchers loaded their quantum dots into the cytoplasm of cultured human (A549) cancer cells. Once they had localized the particles using their new 3D SPT approach, they “joined up” the dots to construct trajectories through the cytoplasm. This technique allowed them to analyse trajectories in terms of motion type (that is, Brownian motion, sub-diffusion or confined motion), and also yielded information on parameters such as diffusion rates.
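The motion-type classification rests on a standard quantity: the mean-squared displacement (MSD), which grows as t^α, with α ≈ 1 for Brownian motion, α < 1 for sub-diffusion and a plateau for confined motion. A sketch of the idea (not the team's analysis code), applied to a simulated free random walk:

```python
import numpy as np

# Illustrative sketch: trajectories are commonly classified by fitting the
# mean-squared displacement, MSD(t) ~ t**alpha. alpha ~ 1 indicates Brownian
# motion, alpha < 1 sub-diffusion; a plateau indicates confined motion.
def msd(traj):
    """traj: (T, d) array of positions; returns lags and time-averaged MSD."""
    T = len(traj)
    lags = np.arange(1, T // 4)
    m = np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                  for lag in lags])
    return lags, m

rng = np.random.default_rng(0)
brownian = np.cumsum(rng.normal(size=(1000, 3)), axis=0)  # free 3D random walk
lags, m = msd(brownian)
alpha = np.polyfit(np.log(lags), np.log(m), 1)[0]         # slope on a log-log plot
print(f"fitted exponent alpha = {alpha:.2f}")             # ~1 for free diffusion
```

For the quasi-2D diffusion the team reports, the same fit applied separately to the lateral and axial coordinates would give α ≈ 1 in the plane but a much smaller effective exponent along z.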

“Intracellular diffusion is critical for molecule translocation in cells and mediates many important cellular processes,” Li tells Physics World. “Our finding suggests that cells may utilize the architecture of the cytoplasm to control the intracellular diffusion dynamics and regulate macromolecule transport.” Indeed, the quasi-2D diffusion of the dots with constrained movement in the axial direction appears to effectively promote molecular translocation and shorten particle diffusion times.

Members of the Beijing team, who report their work in Chinese Physics Letters, now plan to combine their 3D SPT technique with dynamical subcellular imaging to better explore the relationship between the quasi-2D diffusion they have observed and cell architecture. “We are also investigating intracellular diffusion in fast-migrating cells to further elucidate the physical mechanisms behind cell diffusion,” Li says. “This is our most exciting project right now.”

Creating new technologies using 2D materials, supernova wreaked havoc on Earth, quantum go versus AI

Atomically thin 2D materials such as graphene have unique electronic and mechanical properties that could revolutionize how electronics are manufactured and used. Foldable radios and graphene tattoos that monitor blood pressure are just two of the examples given by Deji Akinwande in a conversation with Physics World’s Margaret Harris. Akinwande is professor of electrical and computer engineering at the University of Texas at Austin and he also talks about how 2D materials could be used to make better memories and switches for mobile phones.

Did the explosion of a nearby star cause a huge extinction event on Earth 359 million years ago? The science writer Logan Chipkin joins us to talk about new claims from astronomers – and explains how a connection between a supernova and the “Hangenberg Crisis” could be verified.

Finally, we chat about how physicists in China have built a quantum version of the ancient board game go – using entangled photons. While their invention is designed to challenge artificial intelligence systems, it would also change how humans play the game.

Portable sensor detects biomagnetic signals in noisy outdoor environments

A portable sensor that can detect tiny biomagnetic signals from the brain and heart – without the expensive magnetic shielding needed by current magnetoencephalographic techniques – has been developed by researchers in the US. The low-cost set-up, which is small enough to fit in a backpack, can operate successfully even near to power lines and a railway, and could find application in field triage, brain–machine interfaces and even precise magnetic navigation.

Magnetoencephalography (MEG) and electroencephalography (EEG) can provide vital insight into human brain and cardiac function, with resolutions that exceed those of alternative techniques like functional MRI and positron-emission tomography (PET). Commercial MEG systems, however, are expensive to operate and come with a sizeable footprint – requiring both large-scale magnetic shielding and cryogenic cooling systems – which also limits the activities that they can be used to study.

In their new study, however, physicist Mark Limes of Twinleaf and colleagues from Princeton University have focused on optically pumped atomic magnetometers to measure the biomagnetic fields. These use lasers to make the atoms in alkali-metal vapours spin-coherent — and subsequently to measure how they are perturbed by the presence of magnetic fields, with the bulk magnetization precessing around the field of interest.

A key part of enabling the sensor to work in less controlled environments, Limes explains, lies in this free-precession measurement. “Many magnetic sensors are resonant systems, whether all-optical or requiring some magnetic field manipulation such as applied radio-frequency,” he says. “Any resonant system out in a noisy environment has feedback loops that track the magnetic field and are usually bandwidth-limited in a significant way.”

Issues arise because magnetic field feedback can only be controlled so well — meaning that techniques that should theoretically work well in Earth-sized magnetic fields end up, in practice, only working viably in highly controlled, magnetically shielded environments. “We don’t need to track the magnetic field or provide feedback, as the crux of our measurement is passively watching the atoms respond to the total magnetic field,” Limes explains.

Still, previous atomic magnetometers could also only operate effectively in shielded environments. The second part of the new design involves coupling two of the sensors together to form a magnetic gradiometer. “You can have a really good magnetometer, but in the frequency range that brain and heart signals are in, the magnetometer will be totally swamped by ambient magnetic noise from power lines, transformers or even the local train rail,” Limes tells Physics World. The “impressive leap” of the present work towards practical applications like medical imaging, he adds, comes from getting two sensors to work together, so that they “reject distant common-mode magnetic field sources that serve as background noise, while retaining sensitivity to nearby biomagnetic sources.”
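The common-mode rejection at the heart of the gradiometer is easy to demonstrate numerically. In the toy model below (illustrative numbers, not the Twinleaf device), a distant power line produces an essentially identical nanotesla field at both sensors, while a picotesla "brain" signal is much stronger at the nearer sensor; subtracting the channels cancels the former and keeps most of the latter:

```python
import numpy as np

t = np.linspace(0, 1, 1000)

# Nearby biomagnetic source: ~pT, falls off steeply between the two sensors.
signal = 1e-12 * np.sin(2 * np.pi * 10 * t)
# Distant noise source: ~nT power-line hum, essentially identical at both sensors.
noise = 1e-9 * np.sin(2 * np.pi * 50 * t)

sensor1 = signal + noise           # sensor closest to the head
sensor2 = 0.02 * signal + noise    # second sensor a few centimetres away

gradiometer = sensor1 - sensor2    # common-mode noise cancels exactly

print(f"single-sensor amplitude: {np.ptp(sensor1):.2e} T")    # dominated by the hum
print(f"gradiometer amplitude:   {np.ptp(gradiometer):.2e} T")
```

The subtraction removes the hum entirely while retaining about 98% of the nearby signal in this toy case; the real engineering challenge, as Limes describes, is getting two physical sensors matched well enough for the cancellation to hold.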

In field tests, the prototype gradiometer device was able to pick up brain activity in a test subject’s auditory cortex in response to randomized sound stimuli — despite the experiments being conducted in an unshielded, natural environment both within 750 m of a main railway line and within just 75 m of electrical power lines.

“Our sensor is really unprecedented, and opens up some new regimes and applications, including handheld or wearable alkali-based sensors,” Limes concludes.

“Measuring biomagnetic signals originating from either the human heart or brain is an arduous task, requiring magnetic sensors with exceptional sensitivity,” comments Nathanial Wilson, a physicist from The University of Adelaide who was not involved in the present study. He adds: “This work effectively bridges the gap between tabletop experiments and real-world applications, and paves the way towards low-cost, portable devices for diagnostic use in a clinical setting.”

“This is a very significant step forward as this makes femtotesla-level magnetometry practical,” adds Dmitry Budker, a physicist at Johannes Gutenberg University, who was also not involved in the present study. “Scientists can now turn their focus from magnetometer development to research on real-world applications.”

With their initial study complete, the researchers are now looking to improve the accuracy and sensitivity of their sensors – so that they might be able to compete with existing cryogenic superconducting quantum interference device (SQUID) MEG detectors – after which, they intend to build an array of sensors to demonstrate that the systems can perform the source localization necessary for MEG studies.

The research is described in Physical Review Applied.

COVID-19: an economic perspective

J Doyne Farmer

As an American living in the UK, how do you think the US has handled the economic response to the pandemic?

I can’t help but wonder what drugs US stock market investors are taking. Stock prices have been ridiculously high since Donald Trump was elected and now we are seeing the consequences of that coming to roost with his complete mismanagement of the pandemic. I think the long-range economic situation is worse than the stock market seems to think it is. There has been no co-ordination in the management of the pandemic at a national level and it remains unclear when things will stabilize. The US has not frozen the economy by keeping workers in place, as most European countries have done.

What are the consequences of this?

There is a dangerous “Keynesian” feedback loop running. If people become unemployed then they don’t spend. If they don’t spend, products don’t sell and more people become unemployed. To prevent this from happening, the US government needs to take firm action. While there have been large stimulus packages, I don’t think they are large enough and they have been implemented in a very inefficient way.
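The feedback loop Farmer describes is the classic Keynesian multiplier, and its arithmetic is easy to sketch. In the toy calculation below (the numbers are assumptions for illustration, not Farmer's), each round of lost income removes a fraction of the next round's spending:

```python
# Toy multiplier sketch (not Farmer's model): each round, workers spend a
# fraction mpc of what they earn, so a drop in spending propagates and compounds.
mpc = 0.8                       # marginal propensity to consume (assumed)
initial_drop = 100.0            # initial fall in spending, in $bn (made up)

total_drop, round_drop = 0.0, initial_drop
for _ in range(50):             # iterate the income -> spending -> income loop
    total_drop += round_drop
    round_drop *= mpc           # each dollar not earned is partly not spent

print(f"total output loss: {total_drop:.0f}")   # -> initial_drop / (1 - mpc) = 500
```

With a marginal propensity to consume of 0.8, a $100bn initial shock compounds into roughly $500bn of lost output: the familiar 1/(1 - mpc) multiplier.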

You recently presented a new model of the economic impact of the current COVID-19 crisis. Were there any unexpected results?

We worked together with IHS Markit – a leading data provider – to build a model that could predict the impact of the crisis on different industries. We saw some surprising network effects. For example, we discovered that opening more industries can actually make things worse if the wrong industries are opened (arXiv:2005.10585). This problem happens because during the lockdown period many industries were not able to produce at full capacity. If the demand for the products of an industry is more than it can supply, it becomes necessary to ration its output.

How might this happen?

Suppose industry A produces a critical input for industry B, and industry B produces critical inputs for other industries as well. Then if industry C is opened, and industry C uses the product of industry A as well, industry A may now supply even less of its product to industry B. And if industry B supplies products to many different industries, this can reduce the output of the whole economy. So in any partial reopening of the economy, it’s important to open the right industries. Our model allowed us to investigate questions like this and make detailed recommendations. We are now waiting for government reports and other data that will allow us to test our results and find out how we did.
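The bottleneck Farmer describes can be reproduced in a few lines. The toy input–output model below (made-up numbers, not the IHS Markit model) rations industry A's fixed capacity proportionally between its customers; opening industry C starves industry B, and because B feeds downstream industries, total output falls:

```python
# Toy input-output sketch (not the IHS Markit model): industry A's capacity
# is fixed, so opening industry C diverts A's output away from industry B.
# Because B is a critical input to many downstream industries, total
# economy-wide output can fall even though one more industry is open.
def economy_output(open_c):
    capacity_a = 10.0
    demand_b = 10.0                       # B wants 10 units of A's product
    demand_c = 5.0 if open_c else 0.0     # C also consumes A's product when open

    total_demand = demand_b + demand_c
    ration = min(1.0, capacity_a / total_demand)   # proportional rationing

    output_b = demand_b * ration          # B's output limited by its input of A
    output_c = demand_c * ration
    output_downstream = 3.0 * output_b    # industries that depend on B's product
    return output_b + output_c + output_downstream

print("C closed:", economy_output(False))   # 10 + 30 = 40
print("C open:  ", economy_output(True))    # rationing cuts B and its downstream
```

With C closed the toy economy produces 40 units; opening C drops it to 30, because C's own gain is smaller than the loss it triggers downstream of B.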

What can be done to minimize risks of reopening certain parts of the economy that harbour more epidemiological risk?

Let me begin by saying that we aren’t epidemiologists. We were able to use detailed data about the tasks performed by occupations and we knew which occupations are employed in each industry. Combining this with epidemiological studies allowed us to estimate how much the infection rate was likely to go up by opening a given industry. Broadly speaking, we predicted that the increase in the infection rate from industries that supply other industries is substantially smaller than it is for consumer-facing industries. Our recommendation was to open business-to-business industries, but restrict consumer-facing industries. This was roughly in line with what the government did in the beginning.

But you are not supportive of the UK government reopening higher-risk consumer-facing industries such as pubs as it did last month?

I’m fairly sceptical. The UK has not got the epidemic under control in the same way that most European countries have. It will make a huge difference if we really have an aggressive track-and-trace system with free and widely available testing. But so far we don’t have that.

Do schools present a similar risk to reopening consumer-facing industries?

The effect of opening schools is difficult to predict and it depends on how it is done. Many schools are coming up with creative ways to partially reopen, like organizing students into bubbles or having the students quarantine at regular intervals. Of course, because so many people depend on schools for childcare, this cannot be separated from reopening the economy.

Do you think this crisis is going to increase the disparity between the rich and the poor?

In the short term, the pandemic is definitely making inequality even worse than it already is. We saw this in our first paper, where we predicted who could work from home and who couldn’t, and who is in essential and non-essential industries. The conclusion is that the poor are bearing the brunt of the pandemic because they can’t work from home and they’re less likely to work in essential industries. So in the short term the pandemic is clearly making things worse.

And in the long term?

It’s putting pressure on the system, it’s making people more aware and it may be a catalyst for change. There are a lot of changes we can make to fix the problem. We need to tax the rich more, we need to give workers more power to organize and we need a higher minimum wage. My hope is that the pandemic makes inequality so intolerable that people will rise up, take action and implement some lasting changes.

Experiments pin down conditions that make hot water freeze before cold

In 1963, a Tanzanian schoolboy called Erasto Mpemba was making ice cream when he noticed something strange: hot water sometimes freezes faster than cold water. Though Mpemba was not the first to wonder about this phenomenon, his report nevertheless captured the scientific imagination. The so-called “Mpemba effect” has remained contentious ever since – not least because the complex matrix of interactions at work when freezing a cup of hot water, coupled with water’s many anomalies, makes it difficult to reproduce.

Researchers at Simon Fraser University in Canada have now overcome this problem with a simplified experimental model of a hot system relaxing to equilibrium with a colder heat bath. According to team leader John Bechhoefer, the technique he developed with PhD student Avinash Kumar enabled them to replicate the Mpemba effect in a reliable way, making it possible to pin down the precise conditions it requires.

A simplified model system

The definition of the Mpemba effect has been hotly debated. Is it concerned with the time it takes for water in a container to start to freeze, completely freeze (both of which are hard to observe accurately in practice) or simply to reach freezing temperatures? To complicate matters further, a similar effect has also been observed in magnetic systems, for other phase transitions. However, these systems, although simpler than water, are still too complex to pin down precise parameters for the effect.

Bechhoefer and Kumar’s experiments start with a beaker of water, but they don’t change the water’s temperature in real terms. Instead, they release a tiny glass bead in the beaker thousands of times from different positions across the width of a sample that defines the effective container size for their experiment. The locations where they release the bead are determined according to a probability distribution defined by thermodynamics and Boltzmann statistics for the system’s chosen initial “temperature”.

As the bead falls it is bombarded by water molecules, resulting in Brownian motion, but the researchers also subject it to a “virtual potential profile” using a feedback optical tweezer system. This changes the probability distribution of the bead’s position – effectively creating a change in the system’s “temperature”. This virtual potential profile has two dips in it, in line with the double potential well of water’s free energy landscape: one dip where water can “supercool” to liquid water at subzero temperatures, and a lower dip where water freezes. By measuring the probability distribution of the released beads’ positions after a set time, and establishing how much that distribution differs from an equilibrium probability distribution, the researchers gain information about how quickly the system is equilibrating – an analogue to how quickly a cup of water would approach freezing temperatures.
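A minimal numerical sketch of the underlying idea – using an invented double-well potential, not the experiment’s actual virtual potential, and comparing only initial distributions rather than simulating the Brownian dynamics – release positions follow a Boltzmann distribution exp(−U(x)/kT) at a chosen initial “temperature”, and an L2 distance to the cold-bath equilibrium distribution quantifies how far from equilibrium a given starting state is:

```python
import numpy as np

# Toy asymmetric double-well "free energy landscape" (our own choice of
# potential and temperatures, purely for illustration).
x = np.linspace(-2.0, 2.0, 2001)
dx = x[1] - x[0]

def U(x):
    # Shallow dip ("supercooled" liquid) and a deeper dip ("frozen")
    return x**4 - 2.0 * x**2 + 0.3 * x

def boltzmann_pdf(T):
    # Release-position distribution at effective temperature T (k_B = 1)
    w = np.exp(-U(x) / T)
    return w / (w.sum() * dx)

def l2_distance(p, q):
    # Simple measure of how far a distribution is from equilibrium
    return np.sqrt(((p - q) ** 2).sum() * dx)

p_hot = boltzmann_pdf(1.0)    # "hot" initial distribution
p_warm = boltzmann_pdf(0.5)   # "warm" initial distribution
p_eq = boltzmann_pdf(0.2)     # equilibrium with the cold bath

print(l2_distance(p_hot, p_eq), l2_distance(p_warm, p_eq))
```

In the experiment it is the distance after a set period of Brownian relaxation, not the initial distance, that reveals whether the “hotter” system equilibrates faster.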

While all this may sound more complicated than popping a cup of water in an ice box, Bechhoefer says it actually makes it a lot simpler to determine what conditions are required to produce the Mpemba effect. The idea arose out of a visit to the University of Maryland, where Zhiyue Lu and Oren Raz were then working on a theoretical framework for a simplified model of the Mpemba effect. During the visit, Lu and Raz button-holed Bechhoefer and persuaded him that his specialist feedback optical tweezer system – which allows finer, non-diffraction-limited control over the virtual potential landscape than regular optical tweezers – would make a great way of testing their model. Thanks to this system, Bechhoefer and Kumar could work with a sample that had an effective width of just 0.4 µm, making experimental runs much faster.

Reliably reproducing the Mpemba effect

Bechhoefer acknowledges that his team’s system is an “abstract” and “almost geometrical” way of picturing the Mpemba effect. Nevertheless, he and Kumar were able to identify parameters for which hotter “initial temperatures” cooled faster than chillier ones. “It sort of suggested that all the peculiarities of water and ice – all the things that made the original effect so hard to study – might be in a way peripheral,” he says.

Although the double potential well played a crucial role in producing the effect, Bechhoefer and Kumar found that it alone was not enough to trigger it. The barrier between the two potential wells also needed to be offset from the midpoint between them. When the barrier sits further from the deeper well than from the shallower one, more starting positions send the bead directly into the deeper well, rather than into the shallower well first, where it jiggles around until Brownian motion eventually nudges it over into the deeper one.
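That geometric point can be made concrete with a rough numerical sketch (again with our own toy potential and tilt parameter, not the experiment’s actual landscape): tilting the double well both deepens one well and shifts the barrier toward the shallower one, increasing the fraction of Boltzmann-distributed release positions that lie in the deeper well’s basin of attraction.

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 4001)

def U(x, tilt):
    # tilt > 0 deepens the right-hand well and shifts the barrier leftward,
    # i.e. toward the shallower well (a hypothetical illustrative potential)
    return x**4 - 2.0 * x**2 - tilt * x

def deep_basin_fraction(T, tilt):
    # Boltzmann-weighted fraction of release positions from which the bead
    # slides directly into the deeper (right-hand) well
    w = np.exp(-U(x, tilt) / T)
    p = w / w.sum()
    # Barrier = local maximum of U between the two wells (minima near x = ±1)
    mid = (x > -0.8) & (x < 0.8)
    barrier_x = x[mid][np.argmax(U(x[mid], tilt))]
    return p[x > barrier_x].sum()

print(deep_basin_fraction(T=1.0, tilt=0.0))  # symmetric wells: about half
print(deep_basin_fraction(T=1.0, tilt=0.5))  # offset barrier: more than half
```

In this crude sketch the deeper well’s extra Boltzmann weight and its wider basin both contribute; the experiment disentangles these effects with far finer control.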

While the Mpemba effect is not typical behaviour, Bechhoefer and Kumar’s study suggests that it is not limited to specialized conditions that would make it impossible to reproduce in a reliable way. “What this shows is that there are systems where you can reproducibly not just observe, but in some sense create, engineer and control the effect,” Bechhoefer says.

Controlling the Mpemba effect could have important practical implications too, for example in the heat removal systems that keep electronics cool. Lu and Raz’s theoretical treatment also suggests that there should be a “reverse Mpemba effect” for heating systems, and Bechhoefer and Kumar have their sights set on replicating this in future experiments.

“This work extends the Mpemba phenomenon,” says Changqing Sun, whose research at Singapore’s Nanyang Technological University focuses on properties of water that relate to the Mpemba effect. However, while Sun asserts that the work “certainly contributes to the knowledge for general situations of heat removal” he thinks further caution may be needed when applying it to water. In water, he adds, “the microscopic mechanism may differ, such as the higher thermal diffusivity and lower specific heat of water surface and the manner of hydrogen bond (O:H-O) energy exchange.”

Full details are reported in Nature.

Copyright © 2025 by IOP Publishing Ltd and individual contributors