Towards a vaccine for Alzheimer’s disease

Researchers at Cinvestav in Mexico have produced a promising vaccine candidate against Alzheimer’s disease by targeting the amyloid-beta (Aβ) peptide. The vaccine, created by a team led by Miguel Angel Gómez Lim, used an Aβ epitope – a short region of the peptide that antibodies can recognize – that is known to be present in the full-length amyloid-beta peptide, as well as in its truncated version, which is often found in the brains of Alzheimer’s patients. After immunization, Aβ-specific antibodies from the mice were isolated and found to effectively bind to amyloid plaques in both mouse and human brains (Inflammopharmacol doi: 10.1007/s10787-017-0408-2).

Designing the best vaccine
Virus-like particles (VLPs) are an exciting approach to vaccines because of their natural immunogenic properties. They also have an improved safety profile compared with vaccines based on live or attenuated virus. VLPs self-assemble from many copies of a protein and can present antigens on their surface at high density. Lim’s team decided to use the HPV (human papillomavirus) VLP: a well-characterized platform that is able to support the genetic insertion of foreign epitopes.

After constructing three-dimensional molecular models of the assembled VLP (a shell formed from 72 pentamers of the HPV capsid protein), the researchers found two promising areas to insert their Aβ epitope. Choosing the place for genetic insertion on a VLP is not trivial since the inserted epitope needs to both face the outside of the nanocage to facilitate binding and not interfere with VLP assembly.

The team chose the epitope inserted in the VLP (epitope Aβ 11-28) because it is present in both full-length amyloid-beta peptide and in the truncated/modified versions of peptides that are pathogenic. Although there are different proposed mechanisms of Alzheimer’s disease, the amyloid peptide is hypothesized to cause toxicity in the brain by forming large, insoluble plaques between neurons.

A promising proof-of-principle
After producing the VLP vaccine in plants, the researchers assessed the vaccine by immunizing mice. They tested samples from the mice for the presence of anti-amyloid-β antibodies at the end of a 10-week immunization period.

The results speak for themselves: not only did the mouse serum contain antibodies able to recognize both the full-length and truncated amyloid peptide, but the antibodies were also able to bind to amyloid plaques in Alzheimer’s disease brain samples from mice and humans.

Taken together, these results are an important step towards creating an immunotherapy for Alzheimer’s disease. The VLP-based vaccine developed by the Mexico-based research centre was not only safe and easy to produce, but also showed good immunogenicity and the ability to target different amyloid peptides and amyloid aggregates.

Although no vaccine against Alzheimer’s exists yet, research results like this hopefully bring us one step closer.

Star Wars fact or fiction, Wikipedia editor in space, stellarator tour

By Hamish Johnston

What is it about Star Wars that captivates the imaginations of physicists? Earlier this week Carsten Welsch, who is head of physics at the University of Liverpool and head of communication for the nearby Cockcroft Institute, gave a presentation called “Physics of Star Wars” to an audience of hundreds of secondary school children, undergraduate and PhD students and university staff.

“I selected iconic scenes from the movies that everybody will immediately recognize, and used real-world physics to explain what is possible and what is fiction,” says Welsch. “For example, a lightsaber, as shown in the film, wouldn’t be possible according to the laws of physics, but there are many exciting applications that are possible, such as laser knives for high-precision surgery controlled by robot arms and additive manufacturing using lasers for creating complex structures in metals.”

You can watch a video of R2D2 negotiating a maze at the event here.

Staying in space, a few weeks ago we mentioned Italian astronaut Paolo Nespoli and the video he took of a fireball falling through Earth’s atmosphere. Nespoli is still up on the International Space Station and this week he has become the first person to contribute to Wikipedia from space. He recorded two audio messages, one in English and one in Italian, describing his exploits in space and uploaded them to the online encyclopaedia. You can listen to the English message here.

Finally, back here on Earth you can take a virtual tour of a research facility that recreates conditions in the Sun – the Wendelstein 7-X stellarator fusion research device at Greifswald, Germany. You can view a 360° panorama of the stellarator and also take an annotated tour of the facility.

A day in the life

When I was 10, I asked my mother if I could take my toy wheelbarrow full of assorted toys from home to school. We had been studying simple machines in our “general science” class, and I saw examples of them everywhere I looked: several of the toys I grew up with were perfect for understanding concepts I was encountering in a formal way for the first time. Yet unless you are a particularly enthusiastic 10-year-old, even the most avid physics fan does not awaken each morning and turn their mind to the various physical processes they will encounter over the course of the day. This is precisely the sort of journey that author James Kakalios takes us on in his book The Physics of Everyday Things.

Kakalios talks directly to you, the reader, as he puts you in the shoes of an imagined North-American protagonist going through a typical day. The narrative device – a second-person story constrained to a single day – serves the author well in delivering an enjoyable introduction to the laws of physics, using technology and experiences the reader might be familiar with. Your day (and the book) begins with the morning alarm, and Kakalios uses the subject of timekeeping to segue into a deeper conversation about frequencies and electricity generation. The stage is set, and the reader knows what to expect through the rest of the book.

Using examples of everyday technology, from toasters to aeroplanes, Kakalios introduces the reader to concepts such as energy conversion and conservation laws, as well as the physics principles behind aviation and medical devices, with occasional nods to chemical processes that stretch the book slightly beyond its title. He also uses helpful analogies to tie macroscopic phenomena to microscopic ones. I particularly enjoyed the way Kakalios links traffic jams to the behaviour of molecules in gases and liquids: a lower density of vehicles on the road is akin to a very dilute gas, he points out, while at higher densities the flow of traffic resembles collective phenomena such as waves on the sea. When he remarks, “Physics says that traffic would be forever smooth and easy, if only we could get rid of drivers,” you might find yourself nodding in agreement, awaiting the day self-driving cars will rescue us from bumper-to-bumper commutes.

The writing is clear and inviting, allowing you to immerse yourself in the world created by the author. The text alternates between narration and exposition without distracting interruptions, though sometimes it can read a little like a textbook. And even though the book is US-centric it is not jarringly so; readers from elsewhere in the world will not struggle to identify with the book’s protagonist. The more you read, the more you are drawn into the world Kakalios paints, and you begin to ask yourself what concepts of physics he has chosen not to discuss with you in a particular context.

Kakalios does a commendable job of recognizing the ways in which physics manifests itself in seemingly mundane objects and injects his own enthusiasm for the subject into his writing. And even when he addresses subject matter that appears to be well-trodden territory, he is able to bring a fresh perspective, as he does when discussing the hypothetical of our favourite and familiar frictionless pendulum, and how it relates to the production of electromagnetic waves. But beyond such idealized examples, much of what might seem mundane today certainly felt like science fiction not too long ago. So Kakalios allows himself the indulgence of addressing both the “why” and the “why not” of speculative science fiction through the lens of the DeLorean time machine of the Back to the Future movies.

The book draws to a close as your day does, and all talk of cycles in nature manifests itself in the protagonist returning to the clock, to set an alarm for the next day. Although I did not have access 20 years ago to a lot of the technology discussed in this book, 10-year-old me would have pored over every little detail in the book and dug up the corresponding entries in an encyclopaedia. This is something to bear in mind: this book strikes a good balance between covering the basics and delving into detailed descriptions, but is by no means comprehensive and is certainly not meant to replace formal textbooks on the subject.

The Physics of Everyday Things is a welcome addition to any bookshelf: the engaging writing style is perfect for the casual physics enthusiast and the examples discussed will prove valuable to those who discuss physics with non-specialists. Perhaps we could all benefit from giving a little thought to the wonderful world of physics that surrounds us and look for everyday instances that evoke that sense of wonder.

  • James Kakalios, The Physics of Everyday Things: The Extraordinary Science Behind an Ordinary Day, 2017, Crown Publishing Group, 256pp, £19.82 hb

Rising stars: Sadie Witkowski and Michael Graw

The penultimate interview in our series is with Sadie Witkowski, a psychology graduate student at Northwestern University. Having developed a wide-ranging interest in science when growing up, Sadie went on to double major in psychology and is now doing a PhD studying the brain and sleep processes. Sadie believes there is a dichotomy in the way the public perceive science – between the expert celebrity presenters such as Neil deGrasse Tyson and the uncommunicative intellectuals in their ivory towers. Sadie believes that the vast majority of practising scientists fall somewhere in-between those extremes and there is value in making that clear in order to humanize science.

Our final interview of the series is with Michael Graw, who is studying for a PhD in oceanography at Oregon State University. Michael believes that one of the big challenges for early-career researchers is securing funding for research when you are competing with far more experienced scientists. As with many of the other delegates from ComSciCon, Michael hopes to pursue a career in science communication after finishing his PhD. He hopes to use photography and videography to share the exciting developments in his field with wider audiences.

On Monday we published interviews with materials scientist Grayson Doucette and molecular biologist Khady Sall. On Tuesday we featured physics educationalist Reggie Bain and ecologist Shannon Bayliss. On Wednesday we profiled astrophysicist Chani Nava and quantitative ecologist Will Chen. Then yesterday we published interviews with tuberculosis researcher Katie Wu and neuroscientist Anzar Abbas.

To hear more voices on the state of science in the US, take a look at the free-to-read Physics World special report on physics in the US. Share your thoughts on the current state of physics in the US by posting a comment below or joining the conversation on Twitter including our handle @PhysicsWorld.

Theorists identify stable tetraquark

Two independent groups of theorists have predicted the existence of a stable “tetraquark” containing two heavy (bottom) quarks and two light antiquarks. They say that the particle could be detected in a few years’ time at the LHCb experiment on the Large Hadron Collider at CERN.

Quarks were proposed by Murray Gell-Mann and George Zweig in 1964 as the fundamental building blocks of protons, neutrons and other baryons, which contain three quarks, and mesons, made up of a quark and an antiquark. Since then physicists have also studied the possibility of more exotic composites, including tetraquarks, composed of two quarks and two antiquarks, and pentaquarks, which comprise four quarks and an antiquark.

A number of particles resembling tetraquarks have been observed at colliders over the past decade or so, such as the X(3872) particle first detected by the Belle experiment in Japan in 2003. More recently in 2016, the discovery of a tetraquark was claimed by physicists working at Fermilab. However, researchers working at LHCb were not able to confirm the Fermilab result.

Chris Quigg of Fermilab says that no definitive discovery has yet been made because it has never been entirely clear whether the detected particle debris – taken to be the components of tetraquarks – are created together or separately in the collisions. “In the case of tetraquarks, people can always put forward alternative explanations,” he says.

Heavy and light

Now, Quigg and his Fermilab colleague Estia Eichten on the one hand, and Marek Karliner of Tel Aviv University together with Jonathan Rosner of the University of Chicago on the other, say they have proved the existence of a tetraquark made from two of the heaviest quarks – bottom quarks – and the two lightest antiquarks, an anti-up and an anti-down. The quark notation for the particle is bbūd̄.

Karliner and Rosner rely more heavily on experimental data in their analysis, particularly those from the discovery at LHCb in July of the Ξcc++ particle, which consists of two (heavy) charm quarks and an up quark. The pair had predicted back in 2014 that Ξcc++ should have a relatively low mass owing to the charm quarks attracting each other very strongly and thereby lowering their binding energy. In fact, the researchers reckoned that the charms’ binding energy ought to be half that of a charm–anticharm pair in mesons such as the J/psi particle.

Their reasoning was vindicated when the observed mass of Ξcc++ – 3621 ± 1 MeV/c² – turned out to match the predicted value – 3627 ± 12 MeV/c². In the latest work, they extrapolate their approach to a system containing two bottom quarks, rather than two charm quarks, and posit that the binding energy of the bb pair in bbūd̄ is half that of a bb̄ pair, which, they say, is well known from the masses of mesons such as the upsilon. By then incorporating the known mass of a ūd̄ pair, they work out that the tetraquark ought to weigh in at 10389 MeV/c², give or take 12 MeV/c².
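
As a quick sanity check on the figures quoted above, the prediction and the measurement can be compared in units of their combined uncertainty. This back-of-envelope sketch uses only the numbers in the text:

```python
# Comparing the quoted prediction and measurement for the Xi_cc^++ mass
# (values in MeV/c^2, taken directly from the text).
observed, obs_err = 3621.0, 1.0
predicted, pred_err = 3627.0, 12.0

combined = (obs_err**2 + pred_err**2) ** 0.5  # uncertainties added in quadrature
n_sigma = abs(observed - predicted) / combined
print(f"{n_sigma:.1f} sigma")  # 0.5 sigma -> comfortably consistent
```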

Very stable

The researchers point out that this value is significantly lower – by some 215 MeV/c² – than that of the lightest combination of known baryons and mesons with the correct properties. As such, they say, this tetraquark will be stable under the strong interaction, the force that holds protons and neutrons together. It will instead only decay via the weak interaction, for which typical decay times – a relative eternity at 10⁻¹³ s – are about ten orders of magnitude longer than those of the strong interaction. “This hadron is as stable as ordinary baryons and mesons that only decay through the weak interaction,” says Karliner.
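
The stability argument boils down to simple arithmetic on the quoted figures: the predicted mass sits below the lightest allowed strong-decay products, so only the much slower weak interaction remains. A minimal sketch; the 10⁻²³ s strong-interaction timescale is an assumption inferred from the "ten orders of magnitude" remark, not a figure from the text:

```python
# Back-of-envelope check of the stability argument (masses in MeV/c^2).
predicted_mass = 10389          # Karliner and Rosner's tetraquark estimate
margin = 215                    # how far below the lightest decay products it sits
threshold = predicted_mass + margin  # lightest baryon+meson combination, 10604

strong_decay_open = predicted_mass >= threshold
print(strong_decay_open)        # False -> no strong decay channel; only weak decays

weak_lifetime = 1e-13           # s, typical weak-decay timescale quoted above
strong_lifetime = 1e-23         # s, assumed typical strong-interaction timescale
print(f"{weak_lifetime / strong_lifetime:.0e}")  # 1e+10 -> ten orders of magnitude
```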

Quigg and Eichten reach a similar conclusion but starting from first principles. They consider an idealised case in which two infinitely heavy quarks combine with two light antiquarks, and they find that the resulting particle should be stable and only decay via the weak interaction. Such a particle, says Quigg, can be thought of as like a helium atom, with the (very small) pair of heavy quarks playing the role of the nucleus and the two light antiquarks the role of the orbiting electrons.

To apply their finding to the real world, the pair then made “controlled approximations” in which they substituted the infinitely heavy quarks with ones having a large but finite mass. Relying (in the absence of the necessary experimental data) on Karliner and Rosner’s calculation of doubly heavy baryon masses, they calculated the properties of specific tetraquarks. They too conclude that those made from bb pairs should be stable, while others containing charm quarks should break down into pairs of mesons.

Reassuring conclusion

Karliner finds it “very reassuring” that the two studies reach essentially the same conclusion, even if, as he points out, “there is some difference in the specific numbers”. He is also optimistic that the bbūd̄ particle can be discovered experimentally, estimating that this should occur about two to three years after LHCb has had its effective luminosity boosted by a factor of five – an upgrade expected to take place in 2021.

Tim Gershon of the University of Warwick, who is UK spokesperson for LHCb, is a little more cautious, arguing that “it will take some time” to understand the sensitivity of current experiments to bbūd̄. He says that the tetraquark’s weak decay will generate “a striking experimental signature” but that the signature will only emerge after vast numbers of collisions have been analysed. Nevertheless, he adds, “we can overcome very small probabilities with very large data samples”, noting that LHCb may get a second upgrade in the 2030s to make best use of the high-luminosity LHC. “I tend to think that the prospects are reasonably good in the long term,” he says.

The research is described in two papers in Physical Review Letters.

Gravitational effect reveals earthquake magnitude

Disturbances in the Earth’s gravitational field caused by the 2011 Tohoku earthquake have been spotted in data recorded at the time by a network of seismometers spread throughout East Asia. The signal was identified by Martin Vallée and collaborators at Sorbonne Paris Cité and the Commissariat à l’Energie Atomique et aux Energies Alternatives in France, and the California Institute of Technology in the US. The analysis offers a faster and more accurate way of estimating the magnitude of large earthquakes than conventional methods.

Arrival times

Usually, the first physical indication of a distant earthquake is received in the form of elastic P-waves, which travel from the rupture site to the seismometer along arc-shaped paths through the crust and upper mantle. These pressure waves typically propagate at 6–10 km/s, meaning that for seismic stations more than 1000 km from the epicentre, several minutes can elapse between the earthquake and the arrival of the first direct seismic signal.
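
The "several minutes" figure follows directly from the quoted speeds. A rough straight-line estimate (real P-wave paths are curved, so this is only illustrative):

```python
# Straight-line estimates of the P-wave arrival delay at 1000 km for the
# quoted 6-10 km/s speed range.
def p_wave_delay(distance_km, speed_km_s):
    """Seconds for a P-wave to cover `distance_km` at `speed_km_s`."""
    return distance_km / speed_km_s

for speed in (6.0, 10.0):
    delay = p_wave_delay(1000.0, speed)
    print(f"{speed:.0f} km/s -> {delay:.0f} s ({delay / 60:.1f} min)")
```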

Large earthquakes can rearrange the Earth’s mass in such a way as to be detectable more immediately in perturbations to the gravitational field, however. As P-waves spread out from the ruptured fault, the solid medium is alternately squeezed and stretched, causing transient changes in rock density. Far beyond the primary seismic wavefront, these gravitational effects can trigger secondary seismic waves that can be picked up by seismometers before the direct waves arrive.

The ground accelerations caused by the gravitationally induced seismic waves in the data studied by Vallée and colleagues were on the order of 1–2 nm/s² – smaller than the subsequent P-waves by a factor of more than 10⁵. But it was not only the size of the disturbance that made its detection challenging. In the earliest moments after the fault slipped, the direct and induced effects of the gravitational perturbation cancelled out, and an identifiable signal only became apparent about 60 seconds after the event. This meant that the gravitational effect was most easily observable in the traces from stations located between 1000 and 1500 km from the epicentre, where the P-wave delay was long enough for the signal to emerge before being overwhelmed.
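
Combining the roughly 60-second emergence time with the quoted P-wave speeds gives a feel for the observation window at different distances; the subtraction below is the only step not stated in the text:

```python
# Rough size of the early-observation window: the gravitational signal becomes
# identifiable ~60 s after rupture, and remains visible until the direct
# P-waves arrive.
EMERGENCE_S = 60.0

def observation_window(distance_km, p_speed_km_s):
    """Seconds between the signal emerging and the direct P-wave arriving."""
    return distance_km / p_speed_km_s - EMERGENCE_S

for dist in (1000.0, 1300.0, 1500.0):
    fast, slow = observation_window(dist, 10.0), observation_window(dist, 6.0)
    print(f"{dist:.0f} km: window of {fast:.0f}-{slow:.0f} s before the P-waves")
```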

Minimum magnitude

The researchers simulated the effects of earthquakes of different sizes on the data, and found that the immediate gravitational signal recorded by stations about 1300 km from the epicentre set a lower limit of magnitude 9 for the Tohoku event. At the time, however, the difficulty of judging magnitude based on the instrumental peak amplitudes measured at nearby stations meant that the size of the earthquake was underestimated.

The research is published in Science.


When cold warms faster than hot

The food’s ready. The drinks are in the fridge. You’re all set for a fabulous festive party. Damn! You’ve got no ice cubes and the guests are due in a couple of hours. You sprint to your local convenience store, but it’s clean out of party bags of ice cubes. Not to panic: you’re a physicist and have heard of the “Mpemba effect” – that hot water freezes faster than lukewarm or cool water. So you fill your ice-cube tray from the hot tap and place it in the freezer. Panic over. Or is it?

Scientists are still not clear about the precise mechanisms behind this counter-intuitive phenomenon – or even if the Mpemba effect exists at all, since it’s proven maddeningly difficult to reproduce consistently. In the latest twist, two physicists have mapped out a generalized theoretical framework for how such an unusual event might occur in simple systems. “The Mpemba effect is not something special to water,” says Oren Raz of the Weizmann Institute of Science in Israel, who developed the theory with Zhiyue Lu from the University of Chicago in the US (PNAS 114 5083). “There should be different systems with essentially the same effect.”

Raz and Lu’s theory also predicts an inverse Mpemba effect: that under certain conditions, a colder system could heat up faster than a warm one. If true, it would be welcome news for those who believe cold water boils faster than warm or hot water, which has been largely dismissed to date as a scientific myth. Their work has also inspired scientists from Spain to devise their own theoretical model showing that the Mpemba effect could occur in granular fluid consisting of spheres suspended in a liquid.

Challenging convention

The notion that hot water freezes faster than cold is named after Erasto Mpemba. In 1963, while he was a schoolboy in Tanzania, he noticed that his home-made ice cream froze faster than his schoolmates’ batches if he didn’t cool the boiled milk before placing it in the freezer. In fact, not cooling their milk before freezing was common practice among local ice-cream vendors at the time. But Mpemba’s observation didn’t tally with what he’d been told about Newton’s law of cooling, which says that the rate at which a body cools is proportional to the difference in temperature between that body and its environment.
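
Newton's law of cooling is the crux of the apparent paradox: its exponential solution implies that, with the same cooling constant and the same environment, an initially hotter sample stays hotter at every later instant, and so under this law alone it can never freeze first. A minimal sketch (the freezer temperature and cooling constant are illustrative values, not from the article):

```python
import math

# Newton's law of cooling, dT/dt = -k (T - T_env), has the solution
#   T(t) = T_env + (T0 - T_env) * exp(-k * t).
def newton_temp(t, t0, t_env=-18.0, k=1e-3):
    """Temperature (degrees C) after t seconds, starting from t0."""
    return t_env + (t0 - t_env) * math.exp(-k * t)

for t in (0, 600, 1800, 3600):
    hot, warm = newton_temp(t, 90.0), newton_temp(t, 30.0)
    assert hot > warm  # the hot sample never overtakes the cool one
    print(f"t = {t:>4} s: hot {hot:6.1f} C, warm {warm:6.1f} C")
```

The gap between the two samples shrinks but never closes, which is exactly why Mpemba's observation clashed with what he had been taught.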

The young Mpemba challenged his teacher to explain his observation, and was roundly ridiculed for his trouble (the teacher sarcastically dismissing it as “Mpemba’s physics”). But when Denis Osborne, a physicist at University College Dar es Salaam, visited Mpemba’s school, the boy posed the same question. Osborne promised to try the experiment when he returned to his university. Personally, he thought the boy was mistaken, but felt no question should be ridiculed, and conceded there might be other unknown factors affecting the rate of cooling. Much to Osborne’s surprise, the experiments worked and he ended up co-authoring a paper with Mpemba in 1969 (Phys. Ed. 4 172).

The Mpemba effect has been a staple of DIY educational home experiments ever since, but Mpemba was not the first to notice it. Around 350 BC Aristotle observed that it was local custom to put water in the Sun first if one wanted the liquid to cool more quickly. Roger Bacon and (four centuries later) Francis Bacon also argued for the existence of such an effect, as did René Descartes. And over the last 10–15 years, scientists have been looking more closely at the Mpemba effect, hoping to tease out the precise causes of such a counter-intuitive phenomenon. The Royal Society of Chemistry even sponsored a competition in 2012, inviting scientists from around the world to proffer their explanations; yet none of the more than 20,000 papers submitted yielded a broad consensus.

Rival explanations

One of the most common explanations put forward by scientists over the years centres on the influence of convective heat transfer, in which water forms convection currents as it heats, transferring hot liquid to the surface, where it evaporates. As a result of this effect, an open cup with hot water would evaporate more quickly than a similar vessel with cool water, with the remaining liquid therefore freezing faster. But this would limit the effect to open-topped vessels, and some experiments have observed the effect in closed vessels too.

Supercooling – where water can remain a liquid at well below its usual freezing point – may also be involved, provided that the water is sufficiently free of impurities, which otherwise help liquids crystallize into a solid. Indeed, in 1995 David Auerbach – a physicist then at the Max Planck Institute for Fluid Dynamics in Göttingen, Germany – carried out experiments that suggested that cold water will supercool to a lower temperature than hot water (Am. J. Phys. 63 882). His experiments revealed that the Mpemba effect occurs when ice crystals appear in a supercooled liquid at higher temperatures, which means that, in such cases, hot water would appear to freeze first. In 2009, however, Jonathan Katz from Washington University in St Louis suggested that perhaps solutes like calcium carbonate or magnesium carbonate in cold water hold the key – they slow down the freezing process, giving hot water the edge (Am. J. Phys. 77 27).

More recently, chemists running molecular simulations have suggested the Mpemba effect might be linked to the unusual nature of hydrogen bonding in water (J. Chem. Theory and Comp. 13 55). These inter-molecular bonds, which are weaker than the covalent bonds holding the hydrogen and oxygen atoms within each molecule together, break up when water is heated. The water molecules then form fragments and realign into the crystalline structure of ice, kicking off the freezing process. Since cold water must first break those weak hydrogen bonds before freezing can begin, it makes sense that hot water would start to freeze before cold. “We tend to assume that low-temperature water should be closer to crystallization,” says William Goddard, a chemist at the California Institute of Technology (Caltech), who has modelled similar mechanisms showing that lower-temperature water is actually farther from that point (2015 J. Phys. Chem. C. 119 2622).

Unfortunately, none of these proposed explanations have proven convincing enough to sway sceptical scientists. And more recent attempts to reproduce the effect consistently in lab experiments have been inconclusive. Charles Knight, who studies ice at the National Center for Atmospheric Research in Boulder, Colorado, memorably recalled to Physics World (February 2006 pp19–21) his own experiments, stuck in a room at –15 °C waiting for water to freeze in ice-cube trays. Despite his best efforts at uniformity, some trays started freezing within 15 minutes, others took more than an hour.

That kind of high variability is typical of Mpemba experiments. “It suggests to me that if the effect does exist, then it depends on factors that people are still not controlling very well,” says Greg Gbur, a physicist at the University of North Carolina, Charlotte, who has long been fascinated by the Mpemba effect. “There are many other parameters that could be coming into play, small differences between two seemingly identical samples, other than temperature. When things are changing very rapidly, there’s all sorts of internal dynamics that could be affecting it.”

Some scientists doubt the effect even exists at all. Henry Burridge of Imperial College London is one such sceptic. Last year, he and his colleagues measured how long it took hot and cold samples of water to cool to 0 °C, typically the temperature at which water freezes. They observed nothing in any of those experiments that would serve as evidence of any kind of Mpemba effect, according to Burridge (2016 Sci. Rep. 6 37665).

Still others have argued that this might not even be the right parameter to measure, since in many cases water will not freeze at its so-called freezing point. Furthermore, is something considered frozen when the first ice-crystals form, or when the liquid in a given container is completely frozen? “Originally [the Mpemba effect] was stated as hot water freezes first,” says Raz. “But how do you decide the point in time when something freezes? It’s not a point in time, it’s a process.”

Out of equilibrium

That’s why the new theoretical framework developed by Raz and Lu focuses on a different parameter that doesn’t depend on a specific definition. Instead, it treats cooling processes as being out of equilibrium. A system is said to be in equilibrium when its basic properties do not change with time. All you need to understand, for example, a perfectly diffused gas enclosed in a box, is its volume, temperature and the total number of gas molecules.

But many natural phenomena – from earthquakes and air turbulence to rapid cooling or climate change – occur when things are far from equilibrium in an open system. To understand such non-equilibrium phenomena, you need many more than just three numbers. Whereas the average behaviour of the molecules in a box at equilibrium will be largely the same at every point, in non-equilibrium conditions the temperature can be different at every point and the density can be different at every point. That’s what makes non-equilibrium systems such a challenging field of research.

Raz and Lu came up with this idea over coffee when both were at the University of Maryland, College Park. Raz had read a recent paper on “Markovian” systems, which are those where an object is coupled to a thermal bath that is not affected by the system. One example of a Markovian system is a cup of hot coffee connected to the atmosphere: when the coffee cools, the atmosphere essentially doesn’t change. A refrigerator, however, is affected if you put a cup of hot coffee inside, making it a “non-Markovian” system.

The paper looked at how Markovian systems relax to equilibrium, and Lu thought it might be related to the Mpemba effect. In the simplest version of their model, they consider a base system in equilibrium, such as the cold interior of a fridge, and two initially hotter systems, with one being relatively hotter than the other. As they cool, these two systems relax toward the base state of equilibrium. Raz and Lu showed that under these conditions, the hotter system can bypass the cooler one in terms of the rate of change in temperature, essentially taking a shorter “path” to equilibrium; that is, cooling faster. So whereas a hot coffee on your desk chills according to Newton’s law of cooling, the coffee placed in a fridge cools differently, as it interacts with the fridge in a kind of “quench”.
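One way to see why something beyond simple exponential relaxation is needed: under Newton’s law of cooling alone, a hotter sample relaxing toward the same ambient temperature stays warmer than a cooler one at every moment, so the curves can never cross. A minimal sketch (illustrative values, not taken from the paper) makes this concrete:

```python
import math

def newton_cooling(T0, T_env, k, t):
    """Temperature at time t under Newton's law of cooling:
    dT/dt = -k (T - T_env), with a constant cooling coefficient k."""
    return T_env + (T0 - T_env) * math.exp(-k * t)

# Two samples cooling toward the same 20 degC environment.
T_env, k = 20.0, 0.1          # k in 1/min (illustrative value)
hot, cold = 90.0, 50.0

for t in range(0, 61, 15):
    Th = newton_cooling(hot, T_env, k, t)
    Tc = newton_cooling(cold, T_env, k, t)
    print(f"t={t:3d} min  hot={Th:6.2f}  cold={Tc:6.2f}")
# Under this single-exponential law the hot sample is warmer at every
# time -- the two curves never cross, so no Mpemba effect is possible.
```

Any Mpemba-like crossing therefore has to come from dynamics richer than this one-parameter law, which is exactly what the quench-style coupling in Raz and Lu’s model provides.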

In their simulations, Raz and Lu actually discovered the inverse Mpemba effect first because Raz had been modelling heating processes and they found it easy to set the parameters to produce an inverse heating effect. It was only afterwards, by reversing that model, that they produced a more generally applicable Mpemba-like effect. But to make sure that this bypassing effect wasn’t limited to just that one model, they extended it to a more complicated system known as an “Ising model”, which is widely used in physics to model phase transitions in everything from ferromagnetism and protein folding to neural networks and the dynamics of flocking birds.

The Ising model is typically depicted as a 2D lattice, with – in the case of magnetic materials – a particle at each point on the grid. Each particle can be in one of just two states: either spin “up”, or spin “down”. The spins like to line up in parallel with their neighbours because doing so lowers the overall energy of the system. Indeed, if you cool a ferromagnetic material below a critical point – the “Curie temperature” – the spins adjust themselves until they are all perfectly ordered, forming a state of equilibrium: a ferromagnet.

A Mpemba-like effect can be observed if you have two non-magnetic systems above the Curie temperature and couple them to a cold heat bath that lies below the Curie temperature. As the system cools, the spins will flip so that they line up in parallel and lose their excess energy to the heat bath. If the “hot” system magnetizes before the “cold” one, you have a Mpemba-like effect. What’s more, if the spins gain energy from the bath and flip anti-parallel, you see the inverse Mpemba effect. Raz and Lu actually studied anti-ferromagnets (not ferromagnets) in which the spins want to line up anti-parallel with each other, but the principles are the same. Also, they did not strictly observe a phase transition as they did not study a 2D system, but a 1D Ising chain with 15 spins, where the spins only interact with their nearest neighbours. “But you don’t need the phase transition to see the effect,” says Raz. “It’s enough to see that the staggered magnetization – the difference in magnetization between neighbours – crosses, namely that the initially hot system starts with a lower value, which then overtakes that of the cold system.”
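As a toy illustration of the quantities involved (not the authors’ code), the energy and staggered magnetization of a 1D antiferromagnetic Ising chain can be computed in a few lines:

```python
def chain_energy(spins, J=1.0):
    """Energy of a 1D Ising chain with nearest-neighbour coupling.
    With E = J * sum(s_i * s_{i+1}) and J > 0, anti-parallel
    neighbours lower the energy (antiferromagnetic convention)."""
    return J * sum(s * t for s, t in zip(spins, spins[1:]))

def staggered_magnetization(spins):
    """Magnetization with alternating sign, so a perfectly
    anti-parallel chain gives |m_staggered| = 1."""
    n = len(spins)
    return sum((-1) ** i * s for i, s in enumerate(spins)) / n

# A perfectly anti-aligned ("Neel") chain of 15 spins, as in the paper.
neel = [(-1) ** i for i in range(15)]
print(chain_energy(neel))              # -14.0 (minimal energy)
print(staggered_magnetization(neel))   # 1.0
```

Tracking how this staggered magnetization relaxes from two different initial temperatures, and checking whether the curves cross, is the essence of the test Raz describes.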

Sceptical minds

Ever the sceptic, Burridge declares the work to be “an interesting theory, but it is not demonstrated that such effects can be observed in any practical situation”. The authors admit as much in the introduction of their paper. These are very simple models to demonstrate a general proof of principle, and Raz and Lu have not yet extended their theory to water, which is a highly complex system that is very difficult to simulate. “Water is complicated, with many unusual properties,” says Raz, pointing out that ice, for example, is less dense than water – not more dense, as one might expect.

Still, Gbur thinks this new theoretical framework is “possibly a game changer” in terms of the Mpemba effect and it has already inspired studies of it in granular materials. “Previously, there’s never really been a quantitative study showing it is possible for hot things to freeze or reach equilibrium temperature faster than colder things,” he says. Goddard calls it “an elegant exposition, and a novel mathematical analysis”, although he admits to being sceptical that it will ultimately explain the Mpemba effect in water.

It all comes down to what happens next. “We’ve got on the one hand a lot of uncertain experiments, and on the other hand we’ve got a nice theoretical model, but only for simple systems,” says Gbur. “The next natural thing would be to find an intermediate system where theory and experiment could be compared directly.” That’s exactly what Raz and Lu are focusing on now, collaborating, for instance, with John Bechhoefer at Simon Fraser University in Canada to identify potential systems that might exhibit the inverse Mpemba effect under the right conditions. They would then be able to tailor an experiment to test that prediction.

It’s yet another step toward a robust theoretical framework for the phenomenon. Gbur, for one, is rooting for them. “It’s such a neat idea,” he says, “it would almost be a shame if the Mpemba effect turned out not to be true at this point.” Whether your party guests will be satisfied by your explanation about their lack of ice cubes though – well, that remains to be seen.

Granular effects

Oren Raz and Zhiyue Lu’s model of the Mpemba effect has already inspired Antonio Lasanta, Andrés Santos and Francisco Vega Reyes from the Universidad de Extremadura in Spain, along with Antonio Prados from Universidad de Sevilla, to devise their own theoretical model showing a Mpemba effect in a granular fluid, consisting of spherical particles suspended in a fluid (Phys. Rev. Lett. 119 148001). Key to their model, which also predicts an inverse effect, is that their granular fluid contains hard inelastic spheres: when they collide, the particles lose energy through mechanisms other than thermal loss. “Hot” particles collide more frequently than “cold” ones and, when their initial energy dispersion is large enough, can cool fast enough to overtake them.
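The energy lost in a single inelastic collision is standard textbook physics rather than the Spanish group’s specific kinetic model, but it shows why collision frequency matters: each collision drains kinetic energy in proportion to (1 − e²), where e is the coefficient of restitution. A minimal one-dimensional sketch:

```python
def collision_energy_loss(m1, m2, v1, v2, e):
    """Kinetic energy lost in a head-on collision between two grains,
    with restitution coefficient e (e = 1: elastic, e = 0: perfectly
    inelastic).  Delta E = (1 - e^2) * mu * v_rel^2 / 2, where mu is
    the reduced mass m1*m2/(m1+m2)."""
    mu = m1 * m2 / (m1 + m2)
    v_rel = v1 - v2
    return 0.5 * (1.0 - e ** 2) * mu * v_rel ** 2

# Two equal 1 g grains meeting head-on at 2 m/s relative speed:
print(collision_energy_loss(0.001, 0.001, 1.0, -1.0, 0.9))  # 1.9e-4 J
```

Since the loss grows with the square of the relative speed, a “hotter” population of grains, colliding both more often and more violently, sheds energy disproportionately fast, which is the intuition behind the granular Mpemba effect.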

What’s interesting too is that Mpemba’s original experiments were with milk, which also consists of lots of large particles suspended in water. The Spanish scientists’ work may therefore be a closer model for what Mpemba actually did. It could even prove relevant for water too. After all, if the sample isn’t pure but has similarly big solute particles in it, those impurities could be a contributing factor to the Mpemba effect.

It’s a cracker – the December 2017 issue of Physics World is now out

By Matin Durrani

The December 2017 issue of Physics World, which is now out in print and digital format, has some great treats for you.

We’ve got two ice-related features – one by Jennifer Ouellette on a new “inverse” Mpemba effect, which suggests that cold water could warm faster than hot, and the other by two Norwegian researchers studying how best to treat wintry roads with salt.

Then there’s our festive reviews special, where we cast our eye over some of the best end-of-year reads, ranging from the physics of everyday life to extrasolar planets. Plus we review the “tremendous” new documentary about the Voyager missions.

Finally don’t miss our insight into quantum-computing careers at Google plus our great end-of-year LIGO-related caption competition.

Remember that if you’re a member of the Institute of Physics, you can read the whole of Physics World magazine every month via our digital apps for iOS, Android and web browsers.

WNT proteins heal dying bone

A team at Stanford School of Medicine has developed a mouse model of osteonecrosis (bone death) to study and treat the disease (Scientific Reports 7 14254). The scientists induced osteonecrosis in the long bones of mice, using cryoablation for 60 s to kill the bone area by freezing it. Once the osteonecrotic lesions were induced in the femur or tibia of the mouse, the “kill zone” remained dead and unable to heal from the induced osteonecrosis.

The scientists surgically treated the osteonecrotic lesions using bone grafts from healthy young and old (above 12 months) mice. Histology and immunostaining revealed high levels of cell proliferation, an early indication of bone healing, by day 5 after surgery. However, by day 14, only bone grafts from young mice were able to form bony bridges, as observed by histology and micro-CT imaging. Bone grafts from old mice were unable to produce new bone during the healing process, indicating that they were less osteogenic (capable of forming new bone).

The WNT pathway (a crucial pathway in developmental biology) is emerging as a therapeutic target in bone disease, because amplified WNT signalling causes an increase in bone mineral density. Thus, the authors hypothesized that an amplification in WNT signalling could rescue the age-related decline in osteogenic capacity observed in bone grafts from older mice. They tested this by incubating bone grafts in WNT3A, a bone-promoting protein, for 1 h before surgically grafting them into the osteonecrotic lesions.

WNT3A-treated bone grafts from old mice were able to form bony bridges by day 14 after grafting; these were absent in untreated controls. Immunostaining for osteogenic markers (Runx2 and Osterix) revealed significantly higher levels in WNT3A-treated bone grafts compared with untreated controls. Furthermore, treating bone grafts from old mice with WNT3A restored expression levels to those observed in bone grafts from young mice.

WNT3A-treated bone grafts from old mice performed as well as bone grafts from young mice. By day 14, both groups generated new bone volume at a similar rate and by day 30, all defects showed evidence of bony bridging. WNT3A treatment not only enhanced bone formation in bone grafts from old mice, but also produced high-quality bone similar to that seen in young mice.

Osteonecrosis is a bone disease that usually affects the aged population. If left untreated, the bone can collapse, which occurs in about 50% of patients. For these reasons, a WNT-based therapeutic approach may prove useful in the treatment of osteonecrosis, especially in older patients.

Rising stars: Anzar Abbas and Katie Wu

Today’s first interviewee is Anzar Abbas, who is studying for a PhD in neuroscience at Emory University. Interestingly, Anzar studied the history of science as an undergraduate, but that led him to want to follow in the footsteps of the great thinkers he was learning about. Now, he uses resting-state functional magnetic resonance imaging to identify patterns in the brain relating to human behaviour. In the interview, Anzar talks about his experiences working with people from different backgrounds and how he believes British academics have more of a culture of communicating science than their US counterparts.

Also featured today is Katie Wu, a PhD student in medical research at Harvard University, who also had an interesting route into science. Having spent two years of university as an English major, Wu realized she was missing learning about maths and science. Taking a sabbatical at the University of Oxford in the UK helped Katie to transition into biological science and she now researches tuberculosis within the department of immunology and infectious diseases. In the interview, Katie speaks about how her arts background helps her to ask questions that others wouldn’t, and how all scientists can benefit from talking about their research with the general public.

On Monday, we published interviews with materials scientist Grayson Doucette and molecular biologist Khady Sall. On Tuesday, we featured physics educationalist Reggie Bain and ecologist Shannon Bayliss. Yesterday, we profiled astrophysicist Chani Nava and quantitative ecologist Will Chen. Stay tuned on Friday for the final pair of interviews with delegates from ComSciCon17.

To hear more voices on the state of science in the US, take a look at the free-to-read Physics World special report on physics in the US. Share your thoughts on the current state of physics in the US by posting a comment below or joining the conversation on Twitter using our handle @PhysicsWorld.

Copyright © 2026 by IOP Publishing Ltd and individual contributors