
Sand dunes repel each other as they move across a landscape

Dunes of sand and other granular materials can be found in large fields on Earth, Mars and beyond. These dune fields tend to self-organize into spatial patterns, but the physics underlying this self-organization is poorly understood. Now, Karol Bacik and colleagues at the University of Cambridge in the UK have shown that as dunes move, they repel dunes ahead of them via fluid turbulence, preventing collisions and stabilizing dune fields.

Theoretical models usually assume that dunes act as autonomous, self-propelled agents that exchange mass with other dunes remotely – via air or water flow – or through collisions. But the latest research, described in Physical Review Letters, suggests that dunes can interact without exchanging mass.

Bacik told Physics World that the dynamics of dune fields are an intriguing and fundamental physical problem, “because we’ve got these large landscapes with isolated dunes, and the question is how would they evolve”. He adds that for their simplified experiment the team took the dune field apart and “did a pairwise interaction with two dunes only”.

Rotating paddles

They set up their two-dune system in a circular water-filled tank fitted with a set of rotating paddles submerged near the surface. Two cameras recorded changes in dune shape, sediment transport and water flow. To create the dunes, they placed two 2.25 kg piles of glass beads on the base of the tank (see video). When they rotated the paddles, the flow quickly shifted the piles into a characteristic dune shape – a steep downstream face and shallow upstream face. The dunes then started to migrate downstream.

The researchers found that although the dunes had equal mass they did not move at the same speed. The upstream dune – the one first hit by the flow – moved at a constant speed, but the downstream dune did not. Initially it moved faster than the upstream dune, but then as the gap between the dunes increased it began to slow. Eventually an equilibrium was reached, with both dunes moving at the same speed.

Upstream wake

Analysis showed this dune-dune interaction was created by the fluid flow and the turbulent structures generated in the wake of the upstream dune. The wake created by fluid flowing over the upstream dune pushes the downstream dune away.

“It looks like repulsion, because this interaction means they space out, rather than the opposite, which would be that they collide,” Bacik says. He adds that although the interaction is there all the time “the strength of this wake decays away from the dune, so as the downstream dune keeps escaping, the interaction is weaker and weaker”. And, when the dunes initially shift, or there is a surge in the flow, the interaction increases and pushes the downstream dune a bit harder – “it gives it an extra kick”.
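To see how a distance-dependent wake repulsion can produce the observed behaviour – a downstream dune that starts fast, slows as the gap opens and eventually matches the upstream dune’s speed – the toy sketch below integrates two dunes on a line with an interaction that decays exponentially with separation. This is an illustrative caricature, not the Cambridge team’s model; the speed, kick strength and decay length are arbitrary.

```python
# Illustrative toy model (not the authors' model): the upstream dune moves at
# a constant speed, while the downstream dune gets an extra wake-induced push
# that decays exponentially with the gap between them. All values are arbitrary.
import numpy as np

c = 1.0          # baseline migration speed (arbitrary units)
A = 0.5          # assumed strength of the wake "kick"
L = 2.0          # assumed decay length of the wake interaction
dt, steps = 0.01, 5000

x_up, x_down = 0.0, 0.5   # start with a small gap
for _ in range(steps):
    gap = x_down - x_up
    v_down = c + A * np.exp(-gap / L)   # repulsion weakens as the gap grows
    x_up += c * dt
    x_down += v_down * dt

gap = x_down - x_up
print(f"final gap: {gap:.2f}, downstream/upstream speed ratio: "
      f"{(c + A * np.exp(-gap / L)) / c:.3f}")   # ratio tends towards 1
```

In the circular tank the equilibrium spacing is also constrained by the geometry, but the qualitative point – the two speeds converging as the repulsion decays – survives in this one-dimensional caricature.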

The experiment revealed that dune-dune repulsion is a robust phenomenon. If the dunes were close enough to start with, dune separation occurred at all flow rates explored.  It even persisted when the dunes were different sizes. When the researchers created a downstream dune that was 2.5 times larger than the upstream dune, they found that initially the smaller upstream dune migrated faster, but as the gap closed the wake-induced repulsion grew stronger and the larger downstream dune sped up. Again, the two dunes eventually began to move at the same speed.

Satellite images

Although the experimental dunes were underwater, Bacik is confident that dunes on land behave in the same way, with the interaction mediated by air turbulence. “Both gases and liquids are fluids from the physical point of view, so they obey the same equation,” he explained. Bacik adds that satellite images studied by other researchers suggest a similar interaction between desert dunes. These movements, however, occur over decades, not hours. “The bigger the dunes the slower they move,” Bacik explains.

Hans Herrmann, a physicist at ESPCI Paris, is not surprised by the findings, but is impressed by the work. He told Physics World, “I never had any doubt that there exists a hydrodynamic interaction between dunes irrespective if they are in water or in air. But it is extremely difficult to include such an interaction in a model. Therefore, all models do neglect them.” Herrmann adds that the work “is a beautiful experimental measurement of the consequences of these forces and thus a very useful and very important contribution”.

Next, the team hope to conduct more thorough analysis of satellite images of dune fields to see how prominent this interaction is – and how it works – in more complex, real-world environments.

Plans for African Space Agency jeopardized by lack of progress

Progress on establishing a space agency for the African Union (AU) has come to a standstill over a lack of funding and questions about how it will operate. That’s the message of a new progress report on Agenda 2063 – the AU’s blueprint for development and transformation in the region – which reveals that progress on the agency is being held back by delays in “financial and structural” planning by member states.

Legislation to establish the African Space Agency was passed by the AU in 2017 and two years later Egypt was chosen to host its headquarters, with the agency supposed to be up and running in 2023. Plans for the agency were included as a flagship programme of Agenda 2063, but the new progress report, released on 8 February ahead of the AU’s 33rd heads-of-state summit in Addis Ababa, Ethiopia, says that little progress has been made since then.

It might be instructive to take a step back and ask what exactly is it that the African space agency will do once it is established

Peter Martinez

The AU has managed to set priority areas for the prospective space programme, while two of the four “baseline studies” – on gaps in navigation and positioning in Africa, and on the African private sector – have been completed. But it is still not clear what implications the agency will have for the budgets of African countries.

Costs and benefits

Peter Martinez, a space-policy analyst and executive director of the Secure World Foundation, says that the lack of progress could result in delays beyond 2023, with the agency perhaps even being scaled down in scope. “The pace at which the AU can make progress is determined by how fast member states are willing to move on an issue, and this is particularly the case when there may be budgetary implications for them,” Martinez told Physics World. “If countries can see direct benefits flowing back to them, they are much more likely to support the agency financially. On the other hand, if the agency becomes an aggregator of the space technology requirements for African countries, which are then met by entities from outside the continent, the agency will have failed, in my view.”

Martinez adds that countries may just come together to collaborate on specific projects without the need to set up a new continental entity. “It might be instructive to take a step back and ask what exactly is it that the African Space Agency will do once it is established. What is its value proposition for African countries, ranging from those with established space capabilities, to those that still have to take their first steps.”

The AU says that it will now work to tackle the issues raised in the report. It says it will also commit resources to help African countries train scientists in Earth observation, satellite communication, navigation and positioning, as well as astronomy.

A salty solution for the coffee-ring effect

Trace amounts of salt can help control the “coffee-ring effect” that occurs when a solvent evaporates from a solution containing non-volatile particles. The technique, which was developed by researchers in China, works for a variety of substrates, including graphene, graphite and polymers, and could help engineers deposit more uniform coatings and dyes.

Although the coffee-ring effect gets its name from the familiar stain left behind when coffee drips down the side of a cup and spreads around the base, it occurs in many other liquids, too. In this well-studied process, the edges of a drop of liquid on a surface become pinned to that surface, preventing the drop from shrinking as the liquid evaporates. Instead, the drop flattens out, pushing the liquid – and anything suspended in it – to the edges. By the time the drop evaporates completely, most of the suspended particles have reached the edges and remain behind as a dark ring.

As well as being an unsightly nuisance on your desk or table, the coffee-ring effect is the bane of many an industrial process (including printing, coating, dyeing, complex assembly and micro- and nanofabrication) because the ring structures it produces are so non-uniform. Researchers have explored many strategies for avoiding the effect, from designing paints and inks that produce an even coating when they evaporate to changing the shape of the suspended particles. Various groups have also tried adding surfactants, polymers, sol-gel inducers, co-solvents and even proteins to the solutions, while others have experimented with using external optical and electrical fields to modify the ring-forming process.

A new and efficient technique

A team of researchers led by Haiping Fang of East China University of Science and Technology and Guosheng Shi of Shanghai University has now developed a new and efficient technique that involves adding trace amounts of various salts to the solution. When the researchers tested their technique on a solution containing dyes placed on graphene, polymers and other substrates that contain chemical structures known as aromatic rings, they found that the colour of the dye remained even across the substrates. Molecular dynamics simulations revealed that strong interactions (known as cation-π interactions) between the hydrated cations in the solution and the aromatic rings in the substrates work to inhibit the coffee-ring effect, since they promote a more uniform adsorption of suspended matter onto the substrates.

In their experiments, Fang, Shi and colleagues created solutions with different concentrations of sodium chloride (NaCl) by mixing the salt into aqueous suspensions of polystyrene microspheres. They then placed drops of the salty microsphere mixtures onto a graphene substrate grown via chemical vapour deposition. As a control, they tested solutions without NaCl. In separate experiments, they also tried the technique on substrates such as glass, which do not contain aromatic rings.

No ring effect with 8.0 mM NaCl

After allowing the drops to evaporate at around 10 °C, the researchers used optical microscopes and greyscale analysis to study the dried patterns left on the substrates. In suspensions without NaCl, they observed ring-like patterns with a dark rim and a light grey centre on the graphene. For the salty suspensions, however, the rings were absent, and the image contrast between the rim and the centre of the pattern gradually diminished for mixtures with increasing NaCl concentrations. At a concentration of 8.0 mM NaCl, the pattern appeared completely uniform.
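The paper’s greyscale analysis is not described in detail here, but one simple way to quantify the rim-versus-centre contrast of a dried deposit is sketched below on a synthetic image; the function, thresholds and test pattern are hypothetical illustrations rather than the authors’ pipeline.

```python
# Hypothetical illustration (not the authors' analysis): quantify how much
# darker the rim of a dried deposit is than its centre from a greyscale image.
import numpy as np

def rim_centre_contrast(img, centre, radius):
    """(centre - rim) / (centre + rim) mean greyscale: positive when the rim
    is darker than the centre, near zero for a uniform deposit."""
    y, x = np.indices(img.shape)
    r = np.hypot(x - centre[0], y - centre[1])
    inner = img[r < 0.5 * radius].mean()                    # central region
    outer = img[(r > 0.8 * radius) & (r < radius)].mean()   # rim region
    return (inner - outer) / (inner + outer)

# synthetic test image: a light-grey deposit with a dark "coffee ring"
img = np.full((200, 200), 180.0)
yy, xx = np.indices(img.shape)
rr = np.hypot(xx - 100, yy - 100)
img[(rr > 70) & (rr < 85)] = 60.0
print(rim_centre_contrast(img, (100, 100), 85))   # ~0.4; near 0 if the ring is suppressed
```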

The team, who report their work in Chinese Physics Letters, found the same effect with other salts such as lithium chloride (LiCl), potassium chloride (KCl), calcium chloride (CaCl2) and magnesium chloride (MgCl2) at different concentrations. They also observed similar behaviour with other aromatic-ring substrates, including natural graphite and a common thermoplastic resin called polyethylene terephthalate (PET). When the experiments were repeated with glass substrates, however, the coffee-ring effect lingered despite the addition of salt, confirming the importance of the hydrated cation-π interactions with aromatic rings.

Cation-π interactions remain strong

Cation-π interactions are known to play crucial roles in the structure, dynamic processes and functions of both living and non-living systems. Until recently, researchers believed that these interactions weakened when the cations were hydrated, so they usually neglected them. Earlier work by Fang and Shi’s team, however, suggested that this approach was incorrect. “Our previous theoretical and experimental studies showed that these interactions remain strong enough to result in the strong adsorption of hydrated cations on graphitic surfaces (such as carbon nanotubes, graphene, graphite and graphene oxide),” Fang explains. “This is because the polycyclic aromatic ring structures of these materials themselves include more π electrons.”

Shi notes that since the cation-π interaction also exists between other cations and aromatic rings, metal ions such as Fe2+, Co2+, Cu2+, Cd2+, Cr2+ and Pb2+ might prove just as effective as salt for controlling the coffee-ring effect. The team’s future studies will focus on choosing the best cations for different applications, he tells Physics World.

Making metallic glasses more plastic

Metallic glasses are promising materials for structural engineering, but their poor ductility makes them brittle, limiting their applications. Researchers in China have now shown that these glasses can be made much softer by reducing their size down to the microscale.

As their name implies, metallic glasses have the properties of both metals and glasses – they contain metallic bonds and are thus conducting, but their atoms are disordered, as in a glass. These metastable materials are produced by rapid quenching from the liquid state and their physical properties depend on how they have been processed. They can be made more plastic to some extent by applying stress or high temperatures, but the effect – which is related to structural disordering in the material – is limited.

A marked rejuvenation effect

Researchers led by Bao-An Sun and Hai-Yang Bai of the Institute of Physics at the Chinese Academy of Sciences in Beijing are now reporting on a marked “rejuvenation”, or softening, of metallic glasses that have been drawn while still hot into micron-sized wires. Compared to their bulk counterparts, the modulus and hardness of these wires are much lower, decreasing by 26% and 17%, respectively.

“Such pronounced rejuvenation is unusual for metallic glasses,” explains Sun, “with previous studies reporting on only a few percent decrease in modulus and hardness.”

Higher free volume content

According to the researchers, the hot-drawn micron-sized metallic glass wires become softer and more plastic for two reasons. First, the severe thermomechanical shearing involved in the hot-drawing process provides the material with energy, thereby increasing its “free volume” (a measure of the spacing between molecules). Second, the material’s subsequent rapid cooling “freezes” this induced free volume in place. As the size of the wires decreases, more free volume is induced, and this, coupled with a higher cooling rate, produces more pronounced rejuvenation.

“Our results clearly suggest that we can significantly rejuvenate metallic glasses through a proper combination of temperature, shear stress and size reduction,” Sun tells Physics World. “Our technique thus provides us with a new way of tuning and designing the structure and properties of these materials.”

The researchers, who report their work in Chinese Physics Letters, say they will now focus on controlling the rejuvenation of the metallic glass wires in a more quantitative manner – by determining the exact role of temperature, stress and size reduction in the process they have developed.

Birds learn from watching TV, eye-catching science images and levitating blood

When faced with potential prey, how do predators know to avoid those that taste disgusting and potentially contain toxic chemicals? According to a study published earlier this week, for some birds at least, watching TV can help. The researchers, from Finland and the UK, showed that by watching videos of other birds eating, blue tits and great tits learn to recognise bad-tasting prey by their markings, without having to taste them first.

Many insects have conspicuous markings and bitter-tasting chemical defences to deter predators. But these warning markings are only effective once the birds learn to associate them with a disgusting taste – a skill that could potentially increase both the birds’ and their prey’s survival rate.

In the study, the researchers showed each bird a video of another bird’s disgust response – including vigorous beak wiping and head shaking – as it ate a bad-tasting “prey” item (almond flakes soaked in a bitter solution) from a paper packet marked with a square. Afterwards, when offered a mixture of the bitter almond flakes in square-marked packets and plain flakes in packets marked with a cross, the TV-watching birds ate fewer of the bad-tasting packets.

“Blue tits and great tits forage together and have a similar diet, but they may differ in their hesitation to try novel food. By watching others, they can learn quickly and safely which prey are best to eat. This can reduce the time and energy they invest in trying different prey, and also help them avoid the ill effects of eating toxic prey,” explains first author Liisa Hämäläinen.

Is this a stunning new piece of modern art, or an information-rich scientific image? The Science and Medical imaging competition run by the Institute of Cancer Research (ICR) highlights some of the most engaging and eye-catching images created by ICR and Royal Marsden researchers as part of their cancer research.

Differentiating brain cancer cells

This year’s winner was “Differentiating brain cancer cells” by PhD student Sumana Shrestha. Taken using confocal microscopy, the colourful image shows neural stem cells from mice that are being used to study the aggressive brain cancer glioblastoma. Impressively, Shrestha also won the public vote, chosen via social media from eight of the competition’s most highly rated images, with a scanning electron microscopy image of dimpled “golf ball-like” microparticles that could be used to deliver drugs into the body.

Other images on the public shortlist included the first ever super-resolution microscopy image of focal adhesions – molecules that help cancer cells move and spread around the body – and a 3D image of an invading melanoma cell. The full selection of winning and shortlisted images can be seen on the ICR website.

Elsewhere, a US/Canadian research team is levitating human blood to detect opioid addiction. The researchers are using magnetic levitation to separate proteins from blood plasma. When separated, plasma proteins with different densities levitate at different heights and become identifiable. Optical images of the levitated proteins can help identify whether a patient is at risk of developing a disease or of becoming addicted to drugs such as opioids.


“We compared the differences between healthy proteins and diseased proteins to set benchmarks,” explains researcher Sepideh Pakpour. “With this information and the plasma levitation, we were able to accurately detect rare proteins that are only found in individuals with opioid addictions.” She notes that the team is particularly excited about the possibility of developing a new portable and accurate disease detection tool.

And finally, it’s time to pay tribute to Larry Tesler, the Apple employee who invented cut, copy and paste, who has died aged 74. Now we’re not saying Tesler had anything to do with the recent rise in plagiarism in academic publishing, but who’d have thought those two little keys CTRL-C and CTRL-V could cause such a stir?

Inquiry-led physics lab courses boost student engagement, finds study

Lab sessions designed to emphasize and teach experimentation skills, rather than reinforce lecture content, improve student attitudes towards experimental physics. That is according to a study by researchers from Cornell University and the Colorado School of Mines, which also found that, despite the experimentation labs covering half the number of physics topics, there was no difference in the students’ exam performance.

In their study, the researchers randomly assigned almost 100 students who had been enrolled in an introductory, calculus-based physics course to two different types of lab sessions. All students attended the same lectures and discussions, and took identical coursework and exams, but 54 of them attended lab courses designed to reinforce knowledge introduced elsewhere on the course. The other 43, however, were enrolled in lab classes that had specifically been designed to teach experimentation skills (Phys. Rev. X 10 011029).

We actually had trouble kicking them out of class

Natasha Holmes

In the traditional lab sessions, which closely followed lecture content, students were given instructions for experimental procedures and worksheets to complete. The experimentation-lab students, however, were expected to make decisions about the design and analysis of their experiments, and how to extend them. Over the course of the semester, instructions were reduced and lab sessions did not mirror lecture content at all.

More engaged

To assess the level of student engagement, observers attended lab sessions – recording each student’s behaviour and noting when they left the lab for the day. All lab sessions lasted 115 minutes, but students in the experimentation group stayed for significantly longer than those in the content-reinforcement labs, remaining on average for 118 minutes compared with 91 minutes. The researchers attributed these differences to the structure of the labs. In the content-reinforcement sessions, students could rush through the set tasks and worksheets, while participants in the experimentation labs were expected to repeat, improve and extend their investigations, without a defined end point.

“We think it’s teaching them to have ownership over their experiments, and they’re continuing to investigate,” says Natasha Holmes from Cornell, who was part of the study. “We actually had trouble kicking them out of class.” The labs also encouraged expert-like experimentation behaviours. Indeed, the observational data showed students in those labs were more likely to repeat their experiments to improve them – with or without prompting – and to identify and interpret disagreements between their data and a given model.

Surveys showed that students in the experimentation labs also had more positive attitudes and perceptions of experimental physics. “Compared to the traditional lab, where everyone’s really doing the same thing and just following instructions, we now have all of the students doing something completely different. They’re starting to be creative,” says Holmes.

Despite the experimentation labs covering half the number of physics topics, there was no difference between results in mid-term and end-of-term exams of the two groups. Emily Smith, at the Colorado School of Mines, says that despite decades of dissatisfaction with traditional physics labs, change has been slow due to concerns about the possible impact on students’ learning. “This study shows directly that the change can happen with no impact to students’ understanding of conceptual physics ideas, and with positive benefits to their behaviour and attitudes toward experimental physics,” she says.

Emmanuel Sabonnadière describes technology trends within optoelectronics

Emmanuel Sabonnadière is the CEO of CEA-Leti, a research institute for electronics and information technologies, based in Grenoble, France. During the recent Photonics West conference in San Francisco, Sabonnadière and his colleagues hosted an event to showcase the institute’s latest developments in microelectronics and nanotechnology. Before the event, Physics World caught up with Sabonnadière to find out about the challenges of integrating photonics and electronics, and to get his opinion on trends such as artificial intelligence and quantum technologies.

Mixed ion beams could enhance particle therapy accuracy

Ion beam radiotherapy offers precision dose deposition, with a low entrance dose increasing to a maximum at the Bragg peak and then falling off sharply. This steep dose gradient, however, makes treatments such as carbon-ion therapy highly sensitive to range uncertainties. As such, there’s a clear need for improved treatment verification techniques.

One recent idea is to add a small amount of helium ions to a carbon-ion treatment beam to enable online monitoring during therapy. Fully stripped helium and carbon ions exhibit roughly the same mass/charge ratio, allowing their simultaneous acceleration in a synchrotron to the same energy-per-nucleon. As helium ions have about three times the range of carbon ions (at the same velocity) they travel straight through the patient and can be used for imaging while the carbon-ion beam provides the treatment.
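The factor of three follows from a standard back-of-envelope scaling: at the same velocity the electronic stopping power scales roughly as the square of the ion charge, while the kinetic energy scales with the mass number, so the range goes as A/Z². The check below is a sketch of that argument, not taken from the paper.

```latex
% Back-of-envelope range scaling at fixed velocity (same energy per nucleon):
% stopping power ~ Z^2, kinetic energy ~ A, so range R ~ A/Z^2.
\[
  \frac{R_{\mathrm{He}}}{R_{\mathrm{C}}}
  \approx \frac{A_{\mathrm{He}}/Z_{\mathrm{He}}^{2}}{A_{\mathrm{C}}/Z_{\mathrm{C}}^{2}}
  = \frac{4/2^{2}}{12/6^{2}}
  = 3,
\]
% while both fully stripped ions share the same mass-to-charge ratio,
% A/Z = 4/2 = 12/6 = 2, allowing simultaneous acceleration in the synchrotron.
```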

To assess this proposed helium/carbon beam mixing method, a team headed up by Joao Seco at the German Cancer Research Centre (DKFZ) and Simon Jolly at University College London (UCL) has irradiated phantoms with beams of helium and carbon ions at the Heidelberg Ion-Beam Therapy Centre (HIT) (Phys. Med. Biol. 10.1088/1361-6560/ab6e52).

“We wanted to investigate whether the advantages offered by particle imaging could also be exploited for online treatment verification,” explains first author Lennart Volz, who worked on the project in close collaboration with UCL’s Laurent Kelleter. “Range uncertainty is a key challenge in particle therapy and any accurate method for online treatment verification could greatly benefit patients. The mixed beam could be ideal for this, as it would enable you to see what you treat.”

Detecting range modulation

Since the HIT synchrotron is not set up to deliver mixed beams, the researchers irradiated the phantoms sequentially with helium- and carbon-ion beams of similar energy-per-nucleon, using a 10:1 carbon-to-helium ratio. To monitor the range of the helium-ion beam and carbon-ion fragments, they used a novel range telescope developed at UCL, comprising a stack of thin plastic scintillator sheets read out by a flat-panel CMOS sensor.

Summing the scintillation light yield in each sheet and attributing it to the water-equivalent thickness at the centre of the sheet enabled the creation of depth–light curves. The curves of the carbon- and helium-ion beams were scaled 10:1 and then summed to produce a “mixed-beam” signal.
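Conceptually, building the emulated mixed-beam signal is just a weighted sum of the two sequentially measured depth–light curves. The sketch below uses crude synthetic curves and an assumed normalisation matching the 10:1 carbon-to-helium ratio; it is not the authors’ analysis code.

```python
# Conceptual sketch with synthetic depth-light curves (assumed shapes and
# normalisation), showing the scale-and-sum step used to emulate a mixed beam.
import numpy as np

depth = np.linspace(0, 30, 300)   # water-equivalent depth (arbitrary units)

# crude stand-ins for the measured per-sheet light yields
carbon_curve = np.exp(-((depth - 10.0) / 0.5) ** 2) + 0.2 * (depth < 10.0)
helium_curve = 0.3 * np.exp(-((depth - 28.0) / 0.8) ** 2) + 0.03 * (depth < 28.0)

# scale the two curves to the 10:1 carbon-to-helium ratio and sum them
mixed_curve = 10.0 * carbon_curve + 1.0 * helium_curve
```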

Investigating sensitivity

The researchers assessed the system’s sensitivity using a PMMA slab phantom containing different sized air slits. They used the difference between the measured light output signal and a reference measurement to quantify range changes. Irradiating phantoms with slits of 2 mm thickness and widths of 5 and 2 mm resulted in relative differences of 40% and 17% (from a solid phantom), respectively, in the residual beam range. This was expected as more of the 8 mm FWHM beam (55%) crosses the larger slit than the smaller one (22%). Even a 1 mm thick, 2 mm wide slit could be observed, with a relative difference of 8%.
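Those beam fractions are consistent with a simple Gaussian estimate: for an 8 mm FWHM beam centred on the slit, the fraction of a one-dimensional Gaussian profile falling within the slit width reproduces the quoted numbers. The quick check below is not from the paper.

```python
# Quick consistency check (not from the paper): fraction of a centred 1D
# Gaussian beam profile of 8 mm FWHM that passes through a slit of given width.
from math import erf, log, sqrt

fwhm = 8.0                               # beam FWHM in mm
sigma = fwhm / (2 * sqrt(2 * log(2)))    # ~3.4 mm

def fraction_through(width_mm):
    return erf(width_mm / 2 / (sigma * sqrt(2)))

print(fraction_through(5.0))   # ~0.54, close to the quoted 55%
print(fraction_through(2.0))   # ~0.23, close to the quoted 22%
```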

Clinical scenarios

To examine a more clinically relevant scenario, the team used the ADAM pelvis phantom to study the effect of bowel gas movements on helium-ion beam range. They generated a prostate cancer treatment plan and irradiated the ADAM phantom using three spots from the plan (with the same energy), incident upon: the tumour isocentre, a spot near the rectum and a spot between the two. They inflated a rectal balloon inside the phantom to air volumes of 30, 45 and 60 ml.

For the spot near the rectum, even the smallest air volume in the balloon caused an observable change in helium range. For larger inflations, the team saw a drastic overshoot in helium range as the beam crossed into the rectum and rectal gas. Similarly, for the in-between spot, the two larger inflations created observable signal changes. At the isocentre, the team saw no significant change with balloon inflation. In a Monte Carlo simulation of the experiment, however, the two larger air volumes caused small changes.

Finally, to investigate the effect of small patient rotations on the observed signal, the team used the ADAM-PETer pelvis phantom. They irradiated the phantom rotated by 2° and 4° around its vertical axis. Both rotations led to a noticeable change in the measured mixed beam signal compared with the non-rotated state, with similar but slightly larger effects seen in simulations.


The findings reveal the potential of using a mixed helium/carbon beam to monitor intra-fractional anatomy changes. The ability to detect range modulation from a narrow air gap affecting less than a quarter of the beam demonstrates the method’s sensitivity. And for the more realistic cases, the mixed beam could help detect bowel gas movements and small patient rotations.

The researchers suggest that for anatomical sites subject to slow or non-periodic motion, sequential beams could provide useful information, provided that fast switching of ion sources or beam energy is technically feasible. But when treating moving targets with strong range changes, such as lung tumours, an actual mixed helium/carbon beam would be advantageous.

“Given the potential of the mixed helium/carbon beam, the next step is to generate a real mixed beam, which we are investigating in collaboration with the GSI Helmholtz Centre for Heavy Ion Research and HIT,” Volz tells Physics World. “Long-term, we would like to investigate generating high-resolution online helium radiographs with a mixed beam.”

Quantum diffusion of heavy defects defies Arrhenius’ law

“Massively heavy” atoms can move quantum mechanically within a crystalline material at cryogenic temperatures. This result, from researchers in Japan, France and the UK, contradicts the generally-held notion that only hydrogen or helium atoms are light enough to migrate through materials in this way. The study, which was performed on defect clusters containing around 100 atoms of tungsten (atomic mass 184), represents a step forward in our understanding of the low-temperature dynamics of defects and could lead to new applications in materials science and engineering.

A perfect crystal is a purely theoretical concept. Real-world crystals contain defects that can severely degrade the mechanical properties of the materials in which they occur. Understanding the way these defects diffuse and interact is therefore important for a wide range of processes in materials science and metallurgy, including alloying, precipitation and phase transformations.

Defects are bound to so-called static trapping centres (often atoms of impurities within the crystal), and thus need to “de-trap” before they can travel. For elements heavier than hydrogen or helium, de-trapping is thought to occur by thermal activation, and defect diffusion rates typically obey Arrhenius’ law – a century-old empirical rule that describes how the rate of chemical reactions varies with temperature. In a material at very low temperatures, Arrhenius’ law implies that the transport of heavy-atom defects slows considerably and may even become “frozen”.
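For reference, Arrhenius’ law gives the rate of a thermally activated process (such as a de-trapping event or a diffusion hop) as an exponential in the inverse temperature, which is why classical defect transport is expected to freeze out near absolute zero:

```latex
% Arrhenius' law for a thermally activated rate: \Gamma_0 is an attempt
% frequency, E_a the activation energy and k_B Boltzmann's constant.
\[
  \Gamma(T) = \Gamma_0 \exp\!\left(-\frac{E_a}{k_B T}\right),
\]
% so as T -> 0 the rate is exponentially suppressed.
```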

Studying self-interstitial defects

Experiments by a team of researchers at Shimane University, Nippon Steel, Nagoya University and Osaka University in Japan, the CEA and CNRS in France, and the University of Leeds and Culham Centre for Fusion Energy in the UK have now turned this idea on its head. The team studied a type of defect that occurs when excess atoms of the same type as the ones that make up the material’s crystal lattice become misplaced within the regular stack. These “self-interstitial atoms” (SIAs) cause distortions and stress in the lattice structure, and the researchers studied how clusters of them moved through a tungsten sample at cryogenic temperatures.

The team created both SIA defects and vacancies – that is, lattice sites with “missing” atoms, which are the counterparts of SIAs – by irradiating the tungsten with a high-energy (2000 keV) electron beam at 105 K. They then aged the sample at 300 K, which allowed the SIA clusters to nucleate, grow to nanometric sizes and bind to trapping centres.

The researchers note that at these temperatures the defects are thermally immobile and remain dispersed throughout the sample. Their next step was to illuminate the sample with a lower-energy (100–1000 keV) electron beam. The energy of this second beam is too low to create additional SIAs but high enough to athermally move the vacancies around and cause trapped clusters of SIAs to become de-trapped. This de-trapping can occur via thermal and quantum-mechanical mechanisms.

Quantum transport of heavy defects

By measuring the clusters’ motion frequency using in situ transmission electron microscopy, the researchers say they could distinguish between purely thermal motion and movement caused by quantum-mechanical processes. To their surprise, they found that the quantum-assisted de-trapping of the defects leads to low-temperature diffusion rates that are orders of magnitude higher than those allowed by Arrhenius’ law.

“Our results show that quantum transport, even of heavy defects, becomes dominant below around one-third of the Debye temperature (which is the approximate temperature below which quantum effects may be observed),” says study lead author Kazuto Arakawa. This behaviour, he explains, stems from the quantization of atomic vibrations of the crystal lattice. These quantized vibrations, known as phonons, drive the stochastic fluctuations of objects that are themselves too heavy to move quantum mechanically – a phenomenon that is likely to hold true for low-temperature defect transport in most crystalline materials.

The new finding will impact a wide range of fields across materials science and engineering – wherever low-temperature processes related to defect transport or diffusion are important, Arakawa says. The term “low temperature” is relative: beryllium, for example, has a Debye temperature of 1280 K, so even at room temperatures, the diffusion of beryllium defects is likely to be a predominantly quantum phenomenon.
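Applying the quoted one-third-of-the-Debye-temperature criterion to beryllium makes the point explicit:

```latex
% Crossover estimate for beryllium, using the Debye temperature quoted above.
\[
  T_{\mathrm{quantum}} \approx \frac{\Theta_D}{3} = \frac{1280\ \mathrm{K}}{3}
  \approx 430\ \mathrm{K} > 300\ \mathrm{K},
\]
% so room temperature already lies in the quantum-dominated regime.
```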

Developing materials for extreme environments

Arakawa believes the team’s result could be important for understanding and developing microstructures that work in environments with high levels of radiation and/or mechanical shocks, both of which cause defects to form. It may also be relevant in processes such as the irradiation of semiconductors and superconductors, where defects are generated deliberately to manipulate material properties. Finally, Arakawa thinks it could pave the way for materials-processing techniques that exploit quantum-assisted transport and reactions between defects at temperatures close to absolute zero – something that has never been attempted, let alone achieved.

The work could have even more far-reaching consequences, he adds. Until now, most observations of atomic transport in crystals at cryogenic temperatures were interpreted using Arrhenius’ law. The fact that heavy defects move faster than expected at these low temperatures suggests that the materials-science community may need to revisit and reinterpret previous low-temperature experiments.

“Classic observations performed at cryogenic temperatures – for example, the recovery of electrical resistivity of materials exposed to irradiation near absolute zero, or low temperature internal friction studies – could now be analysed in a completely new light,” Arakawa tells Physics World.

The research is detailed in Nature Materials.

A broader range of experiments


The term “machine learning” means different things to different people. What’s your definition?

It’s a term used by many communities, but in the context of physics I would stick to a rather technical definition. Machine learning can be roughly divided into two different domains, depending on the problems one wants to attack. One domain is called unsupervised learning, which is basically about categorizing data. This task can be nontrivial when you’re dealing with high-dimensional, heterogeneous data of varying fidelity. What unsupervised learning algorithms do is try to determine whether these data can be grouped into different clusters in a systematic way, without any bias or heuristic, and without introducing spurious artefacts.

Problems of this type are ubiquitous: all quantitative sciences encounter them in one way or another. But one example involves proteins, which fold in certain shapes that depend on their amino acid sequences. When you measure the X-ray spectra of protein crystals, you find something interesting: the number of possible folds is large, but finite. So if somebody gave you some sequences and their corresponding folds, a good unsupervised learning algorithm might be able to cluster new sequences to help you determine which of them are associated with which folds.

The second branch of machine learning is called supervised learning. In this case, rather than merely categorizing the data, the algorithms also try to predict values outside the dataset. An example from materials science would be that if you have a bunch of materials for which a property has been measured – the formation energy of some inorganic crystals, say – you can then ask, “I wonder what the formation energy would be of a new crystal?” Supervised learning can give you the statistically most likely estimate, based on the known properties of the existing materials in the dataset.

These are the two main branches of machine learning, and the thing they have in common is a need for data. There’s no machine learning without data. It’s a statistical approach, and this is sort of implied when you’re talking about machine learning: these techniques are mathematically rigorous ways to arrive at statistical statements in a quantitative manner.
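As a minimal illustration of the two branches described above, the sketch below clusters unlabelled synthetic data (unsupervised) and then trains a regression model on a made-up “formation energy” to predict the value for a new, unseen example (supervised). The scikit-learn calls are standard, but the data and descriptors are invented stand-ins, not a real materials workflow.

```python
# Toy illustration of unsupervised vs supervised learning on synthetic data.
import numpy as np
from sklearn.cluster import KMeans            # unsupervised: grouping data
from sklearn.kernel_ridge import KernelRidge  # supervised: predicting values

rng = np.random.default_rng(0)

# --- unsupervised: cluster unlabelled feature vectors into groups ---
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(labels))

# --- supervised: learn a property from known examples, predict a new one ---
descriptors = rng.uniform(0, 1, (100, 5))                     # invented descriptors
energy = descriptors @ np.array([1.0, -0.5, 0.2, 0.0, 0.8])   # made-up target
model = KernelRidge(kernel="rbf", alpha=1e-3).fit(descriptors, energy)
new_material = rng.uniform(0, 1, (1, 5))
print("predicted property for new example:", model.predict(new_material)[0])
```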

You’ve given a couple of examples of machine-learning applications within materials science. I know this is the subject closest to your heart, but the new journal you’re working on covers the whole of science. What are some applications in other fields?

Of course, I’m biased towards materials science, but other domains face similar problems. Here’s an example. One of the most important equations in materials science is the electronic Schrödinger equation. This differential equation is difficult to solve even with computers, but machine learning enables us to circumvent the need to solve it for new materials. Similarly, many scientific domains require solutions to the Navier-Stokes equations in various approximations. These equations can describe turbulent flow, which matters for combustion, for climate modelling, for engineering aerodynamics or for ship construction (among other areas). These equations are also hard to solve numerically, so this is a place where machine learning can be applied.

We have a unique opportunity to give people from all these different domains a place to discuss developments of machine learning in their fields

Anatole von Lilienfeld

Another area of interest is medical imaging. The scanning techniques used to detect tumours and malignant tissues are good applications of unsupervised learning – you want to cluster healthy tissue versus unhealthy tissue. But if you think about it, there is hardly any quantitative domain within the physical sciences where machine learning cannot be applied.

With this journal, we have a unique opportunity to give people from all these different domains a place to discuss developments of machine learning in their fields. So if there’s a major advancement in image recognition of, say, lung tumours, maybe materials scientists will learn something from it that will help them interpret X-ray spectra, or vice versa. Traditionally, people would publish such work within their own disciplines, so it would be hidden from everyone else.

You talked about machine learning as an alternative to computation for finding solutions to equations. In your editorial for the first issue of Machine Learning: Science and Technology, you say that machine learning is emerging as a fourth pillar of science, alongside experimentation, theory and computation. How do you see these approaches fitting together?

Humans began doing experimentation very early. You could view the first tools as being the result of experiments. Theory developed later. Some would say the Greeks started it, but other cultures also developed theories; the Maya, for example, had theories of stellar movement and calendars. All this work culminated in the modern theories of physics, to which many brilliant scientists contributed.

But that wasn’t the end, because many of these brilliant theories had equations that could not be solved using pen and paper. There’s a famous quote from the physicist Paul Dirac where he says that all the equations predicting the behaviour of electrons and nuclei are known. The trouble was that no human could solve those equations. However, with some reasonable approximations, computers could. Because of this, simulation has gained tremendous traction over the last decades, and of course it helps that Moore’s Law has meant that you can buy an exponentially increasing amount of computing power for a constant number of dollars.

I think the next step is to use machine learning to build on theory, experiment and computation, and thus to make even better predictions about the systems we study. When you use computation to find numerical solutions to equations, you need a big computer. However, the outcome of that big computation can then feed into a dataset and be used for machine learning, and you can feed in experimental data alongside it.

Over the next few years, I think we’ll start to see datasets that combine experimental results with simulation results obtained at different levels of accuracy. Some of these datasets may be incredibly heterogeneous, with a lot of “holes” for unknown quantities and different uncertainties. Machine learning offers a way to integrate that knowledge, and to build a unifying model that enables us to identify areas where the holes are the largest or the uncertainties are the greatest. These areas could then be studied in more detail by experiments or by additional simulations.

What other developments should we expect to see in machine learning?

I think we’ll see a feedback loop develop, similar to the one we have now between experiment and theory. As experiments progress, they create an incentive for proposing hypotheses, and then you use that theory to make a prediction that you can verify experimentally. Historically, some experiments were excluded from that because the equations were too difficult to solve. But then computation arrived, and suddenly the scope of experimental design widened tremendously.

I think the same thing is going to happen with machine learning. We’re already seeing it in materials science, where – with the help of supervised learning – we’ve made predictions within milliseconds about how a new material will behave, whereas previously it would have taken hours to simulate on a supercomputer. I believe that will soon be true for all the physical sciences. I’m not saying we will be able to perform all possible experiments, but we’ll be able to design a much broader range of experiments than we could previously.
