
Superconductivity endures to 15 °C in high-pressure material

This article reports on research described in a paper in Nature. This paper has since been retracted by the journal.

Superconductivity has been observed at temperatures up to 15 °C in a hydrogen-rich material under immense pressure – shattering the previous high-temperature record by about 35 degrees. The carbonaceous sulphur hydride material was made and studied by Ranga Dias and colleagues at the University of Rochester and the University of Nevada Las Vegas in the US, who say that it may be possible to reduce the pressure required to achieve room-temperature superconductivity by changing the chemistry of the material.

Superconductors carry electrical current with no electrical resistance and have a range of applications from the high-field magnets used in MRI scanners and particle accelerators to the quantum bits used in quantum computers. Today, practical devices based on superconductors must be chilled to very cold temperatures, which is costly and can involve the use of helium – which is a limited natural resource. Therefore, a long-standing goal of condensed-matter physicists has been to develop a material that is a superconductor at room temperature.

In 2015 Mikhail Eremets and colleagues at the Max Planck Institute for Chemistry and the Johannes Gutenberg University Mainz, both in Germany, made a huge breakthrough when they observed superconductivity at 203 K (–70 °C) in a sample of hydrogen sulphide at about 1.5 million times atmospheric pressure. This new record was a huge leap forward in the quest for a room-temperature superconductor and reinforced theoretical predictions that hydrogen-rich materials could offer a way forward. Indeed, the metallic state of hydrogen – which is expected to occur at extremely high pressures and has yet to be fully characterized – is expected to be a superconductor at room temperature.

Winter’s day in Siberia

In early 2019 Eremets’ team and a group led by Russell Hemley at George Washington University in the US reported superconductivity at temperatures up to about –20 °C – a typical winter temperature across vast swathes of Russia and Canada.

Now Dias and colleagues have boosted this temperature to 15 °C, which coincidentally is the average surface temperature of the Earth. They did this by adding carbon to hydrogen sulphide – which was done by mixing methane and hydrogen sulphide together in a photochemical process. Dias told Physics World that part of his team’s success can be attributed to the precision of their synthesis technique, which is done at relatively low pressure. “[The photochemical process] is critical in introducing methane and hydrogen sulphide into the starting material, that allows just the ‘right’ amount of hydrogen needed for such remarkable properties,” he explains.

The team placed their samples in the jaws of a diamond anvil and squeezed them to pressures between about 1.4 and 2.7 million atmospheres. They found a sharp upturn in the superconducting transition temperature at about 2.2 million atmospheres with the maximum temperature of 15 °C occurring at about 2.6 million atmospheres.
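In SI units (this is simply a unit conversion, not a figure from the paper, taking 1 atm ≈ 1.013 × 10⁵ Pa), the pressure at which the maximum transition temperature occurs is

$$2.6\times10^{6}\ \mathrm{atm}\times1.013\times10^{5}\ \mathrm{Pa\,atm^{-1}}\approx2.6\times10^{11}\ \mathrm{Pa}\approx260\ \mathrm{GPa}.$$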

Hallmarks of superconductivity

According to Dias, the team was able to make three measurements to confirm that the material is indeed a superconductor. The resistance of the sample was measured using two-probe and four-probe techniques to ensure that it was indeed zero. The researchers also measured the transition temperature as a function of applied magnetic field and found that the temperature dropped as the field increased – which is a hallmark of a superconductor. Finally, they observed that the material expelled magnetic field lines, which is another characteristic of a superconductor.

One shortcoming of the research may turn out to be an important opportunity: the team does not know the structure and exact stoichiometry (ratios of carbon, sulphur and hydrogen atoms) of the material at very high pressure. Dias says that the researchers have “some idea [of the stoichiometry] but don’t know the exact answer”. Determining the structure is difficult because the constituent atoms are too light to see using X-ray diffraction. “We have been developing a new set of tools to solve this problem,” says Dias. Once the team has a better understanding of the structure and stoichiometry they hope to be able to chemically tune the material to be a room-temperature superconductor at lower pressures.

Commenting on the significance of this latest result, Mikhail Eremets told Physics World, “We should keep in mind that [a] truly room-temperature superconductor should be at ambient pressure, which will allow applications.” He adds that the high-pressure studies that began with his hydrogen sulphide work in 2015 provide important information in the search for an ambient-pressure room-temperature superconductor – which he says will likely be a ternary compound. Eremets also believes that this latest temperature record will not stand for long, pointing out that “there are predictions of [superconductivity] even above 400 K at high pressures”.

The research is described in Nature.

Radiologists and space experts to develop imaging tools for space missions


The French Society of Radiology (SFR) and the country’s national centre for space exploration (CNES) have signed a partnership, details of which were streamed live at the Journées Francophones de Radiologie (JFR) congress on 4 October. The aim is to develop imaging solutions to be sent on space flights and to collaborate on image collection and optimization, teleradiology and training of astronauts.

France has the largest space programme in Europe and the third-oldest institutional space programme in history, after those of Russia and the US. CNES, which has a long track record in space exploration, recognizes the great potential of diagnostic imaging for monitoring astronauts’ health while on missions, according to general director Lionel Suchet.

The plan is to create a “two-way street” in which radiologists and space experts will collaborate on innovative projects to make further progress, JFR delegates heard online at the plenary Antoine Béclère lecture. An SFR–CNES working group will now define the research themes and establish a schedule of tasks by December.

Monitoring astronauts' health

Shared interests

Discussing the deal, SFR president Jean-François Meder noted how the collaboration would highlight the relevance of space exploration for imaging and the role of medicine in space.

“We share the same interests: the support of innovation, and research. Today SFR and its members have needs, the first is to be able to discuss and collaborate with real professionals. Your [astronaut] teams have become true imaging professionals,” he said. “The second is the need to dream. This partnership between the SFR and the CNES will allow our radiologists to dream.”

Suchet pointed to the importance of getting crews back in good health after a mission, noting that the first priority of the partnership is developing operational medicine on board the space station. The second aspect is to harness imaging in the longer term for research.

Medical exams in space

“Microgravity present in space stations makes them not only science laboratories but also tools for medical research to explore its effect on the human body,” he said.

CNES hopes that working with SFR will allow researchers to better explore health issues associated with this microgravity effect, such as cardiovascular problems, muscle loss, osteoporosis and a form of accelerated ageing.

For the SFR, the partnership will be a means to benefit from knowledge gained by astronauts. Discoveries from monitoring accelerated ageing in astronauts, for example, could be applied to patients on Earth, helping radiologists understand the evolution of muscle-mass loss or osteoporosis – markers of patient fragility when they are exposed to cancer treatment.

Beautiful adventure

In a special film broadcast during the JFR session, Claudie Haigneré (rheumatologist, astronaut and counsellor for the European Space Agency) explained how imaging can demonstrate the impact of microgravity – not only on the heart but also on other organs, such as the brain and the eyes. If such transformations can be imaged and followed, they can also be predicted.

Furthermore, after returning to Earth, astronauts undergo a complete reversal of the phenomenon, providing a platform for experimentation to better understand and treat associated symptoms. Such a model of accelerated ageing, and of the correction of these anomalies, would contribute to making predictive models of recuperation.

Imaging tools will be essential elements of future missions, but developing them will demand innovation and creativity, and any progress will also be very relevant to problems on Earth.

“Together we have to work and perfect the imaging tools that will accompany us. Space exploration is an international co-operative venture, whether this is with the SFR or European industrial bodies which will contribute new imaging methodology, Europe must make its voice heard … because it’s a beautiful adventure, the adventure of tomorrow,” Haigneré elaborated.

Interventional training

With the space station 400 km above the planet, real-time communication from the ground with teleultrasound specialists is feasible. However, for longer missions, transferring information may take 20 to 40 minutes, so crews will have to be even more autonomous than they are now in medical matters. The SFR has therefore pledged, as part of the partnership, to contribute to astronaut training to improve diagnostics in real time. Such education will cover the interpretation of radiological images, methods of image compression and training in interventional radiology.

“It seems to us that interventional radiology, and therapy, will become a major aspect, notably on long missions. Should therapy be needed, minimally invasive interventional radiology will be key,” noted JFR 2020 president Alain Luciani at the session.

Generally, astronauts are selected for their underlying good health. Only one case of venous thrombosis has ever been diagnosed via ultrasound and treated on the space station, according to Suchet. Nevertheless, diagnostic capability is paramount should the need arise.

Teleradiology

At present, astronauts use ultrasound for different organs including the heart and brain, and eye tomography for potential ocular problems related to microgravity. Imaging in space would need to move beyond just ultrasound, noted Luciani. Interventional radiology equipment will need to be compatible with a long-distance mission, safe for the astronauts and as small as possible. Telemedicine will be another convergent theme between the SFR and CNES.

Ahead to the future

In a later stage of the partnership, techniques for noise reduction from the research team at CNES could be applied to radiology, the presenters noted.

There will also be a chance for collaboration on image analysis using artificial intelligence (AI). CNES uses AI analysis of satellite images of the Earth to understand phenomena linked to climate change. Techniques based on AI algorithms and data processing to mine useful information from a mass of data are very similar to those needed for advanced medical image analysis, according to Suchet, who pointed to this as another fruitful area of collaboration in which the partners could enrich and help each other.

  • This article was originally published on AuntMinnieEurope.com ©2020 by AuntMinnieEurope.com. Any copying, republication or redistribution of AuntMinnieEurope.com content is expressly prohibited without the prior written consent of AuntMinnieEurope.com.

How capable are today’s quantum computers?

Media coverage of quantum computing often focuses on the long-term potential for these devices to leave classical computing in the dust. But what about the rudimentary quantum systems that are already being developed and tested by technology companies? What are the latest advances in the field? And what might these systems realistically be able to achieve in the short to medium term? Andrew Glester investigates these questions in the latest episode of the Physics World Stories podcast.

The episode previews Quantum 2020, a free online event running 19–22 October hosted by IOP Publishing (which also publishes Physics World). Tim Smith, associate director for journals product development, describes how the conference will cover the latest developments across quantum science and technology, while Claire Webber, associate director for content and engagement marketing, explains how you can participate in the event.

Glester then catches up with one of the speakers at Quantum 2020 – Ryan Babbush, head of quantum algorithms at Google. In 2019 Google made headlines after asserting that its Sycamore quantum processor was the first to achieve “quantum supremacy”, whereby a quantum computer solves a problem in a significantly shorter time than a conventional computer. Although the specifics of that claim have been disputed, it was still celebrated as a key breakthrough in the field.

Babbush describes some of the key goals for Google’s first generation of practical quantum computers. One of them is to realize Richard Feynman’s idea of using quantum devices to simulate physical systems that behave according to the laws of quantum physics. Such a system could be used to solve the fiendishly complex chemistry equations required to predict the properties of new materials. Another key goal is quantum cryptography, which could offer secure communication systems.

Black metal hydrogen emerges at high pressures

At pressures of millions of atmospheres, hydrogen – normally an excellent insulator thanks to the tightly-bound electrons in the H2 molecule – becomes an electrical conductor. Its exact transition point is, however, the subject of much debate, with the results of several recent experiments seemingly contradicting each other. Researchers in Italy, Spain and France now say they have resolved the discrepancy by using an advanced computing technique that takes the quantum fluctuations of protons into account when simulating the behaviour of hydrogen at high pressures. The group’s simulations also reveal that hydrogen in this metal-like phase reflects very little light, and would thus appear pitch-black to anyone who observed it – further proof, says study lead author Lorenzo Monacelli, that metallic hydrogen is a very peculiar substance indeed.

Solid hydrogen at high pressures boasts a rich phase diagram, with five insulating molecular phases labelled I to V. The idea that it might become metallic at high pressures can be traced back to a theoretical proposal made by the Hungarian-born physicist Eugene Wigner and his American colleague Hillard Bell Huntington in 1935. Since then, metallic hydrogen has been predicted to exist in the core of gas-giant planets like Jupiter and Saturn. It has also been suggested that metallic hydrogen could be a room-temperature superconductor.

Difficult to stabilize and challenging to characterize

Such predictions are hard to confirm, however, because metallic hydrogen is extremely difficult to stabilize in the laboratory. It is also challenging to characterize, since it scatters X-rays only very weakly and techniques that rely on neutron scattering cannot be used at high pressures. Scientists must therefore infer its structure indirectly, using vibrational spectroscopy techniques like Raman and infrared spectroscopy or optical approaches that measure light transmittance and reflectivity from a sample.

These intense experimental difficulties may well explain why different research groups have obtained apparently contradictory results about the behaviour of hydrogen at low temperatures and high pressures. Optical reflectivity measurements suggest that metallization takes place in atomic hydrogen at 495 GPa, when the element transforms from a black lower-pressure phase to a shiny higher-pressure one. Electrical conductivity measurements, meanwhile, imply that hydrogen in phase III – a molecular phase in which the strength of the hydrogen-hydrogen bonds is thought to become so weak that protons can jump between different molecules – exhibits semi-metallic behaviour at pressures above 360 GPa. Infrared light transmission experiments, for their part, suggest that phase III hydrogen becomes metallic at 420 GPa.

Not contradictory

According to researchers led by Francesco Mauri of the Sapienza University of Rome, these results are not, in fact, contradictory. The team came to this conclusion by simulating the properties of phase III hydrogen under pressures of between 150 GPa and 450 GPa. In their simulations, they used a technique called the Self-Consistent Harmonic Approximation (SCHA) to account for quantum effects on the hydrogen nuclei. Simply put, this technique simulates the wavefunction of the hydrogen nuclei (protons) by sampling their probability cloud – that is, the collection of locations where nuclei are likely to be found. The researchers then solved the electrons’ quantum equations for each configuration of nuclei.

This approach means that nuclei are not treated as static “balls” frozen in position, as is the case in most simulations. Instead, the nuclei have delocalized wavefunctions, the square of which describes the probability of where they might be located at a given time.
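To make that idea concrete, here is a minimal sketch (our illustration, not the authors’ code) of the sampling step: nuclear configurations are drawn from an assumed Gaussian probability cloud around fixed centroid positions, and an observable is averaged over them. The centroid geometry, the width sigma and the toy observable are all illustrative assumptions; in the real calculation each sampled configuration requires a full electronic-structure solve.

```python
# Illustrative sketch of averaging over nuclear quantum fluctuations by sampling
# a Gaussian probability cloud (in the spirit of the SCHA); values are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Centroid (most likely) positions of two nuclei, e.g. an H2-like dimer (arbitrary units)
centroids = np.array([[0.00, 0.0, 0.0],
                      [0.74, 0.0, 0.0]])

sigma = 0.1          # assumed width of the Gaussian nuclear wavefunction
n_samples = 10_000   # number of sampled nuclear configurations

def observable(positions):
    """Toy observable: the bond length. In the real SCHA workflow this step
    would be a full electronic-structure calculation for the sampled geometry."""
    return np.linalg.norm(positions[1] - positions[0])

# Draw configurations from the probability cloud and average the observable
samples = centroids + sigma * rng.standard_normal((n_samples, *centroids.shape))
quantum_average = np.mean([observable(s) for s in samples])

print(f"bond length at the centroids: {observable(centroids):.3f}")
print(f"bond length averaged over quantum fluctuations: {quantum_average:.3f}")
```

Even in this toy example the averaged bond length comes out slightly longer than the classical one, echoing the bond-lengthening effect the authors report below.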

Based on these simulations, the researchers found that phase III hydrogen starts conducting electricity at 370 GPa – a result that Monacelli says is “in very close agreement with previous experimental findings” – while reflecting very little light even at the highest pressures. They also found that factoring in quantum effects on nuclei has a dramatic impact on hydrogen’s physical properties. This impact was expected, since the nucleus of hydrogen (the lightest element) is subject to large quantum fluctuations, but Monacelli says that the difference was still impressive.

“We found that the bond length of the H2 molecules increases by 6% (which is huge in terms of the energy involved) if quantum effects are considered,” he explains. “The [simulated] Raman and infrared spectra completely change when these effects are taken into account, with signal peaks shifting by more than 25% and their energy profile broadening a lot.” Such simulations of vibrational spectra are important, he says, because they are “the only way we can understand the crystalline structure of high-pressure hydrogen”. Encouragingly, he adds that the team’s calculated results exhibited “a remarkable agreement” with results from experimental studies of infrared absorption.

Very peculiar

Based on these results (which are detailed in Nature Physics), Monacelli says that if the data from the optical reflectivity experiments are correct, then the high measured reflectivities observed in those experiments are not consistent with hydrogen existing in a molecular phase. Instead, they imply that hydrogen is in its atomic form. “We have proven that molecular metallic hydrogen is very peculiar,” he tells Physics World. “It is conductive, black, and transparent in the infrared. This is almost unique for a metal.”

The calculations of Mauri, Monacelli and colleagues also led them to make a new prediction: deuterium – hydrogen’s heavier isotope, with a neutron as well as a proton in its nucleus – should start to conduct electricity at pressures 70 GPa higher than ordinary hydrogen. “This prediction could be confirmed in the coming months to further support or contradict our results,” Monacelli says.

Fluttering polymer ribbons harvest electrical energy

A new low-cost nanogenerator that can efficiently harvest electrical energy from ambient wind has been created by Ya Yang at the Beijing Institute of Nanoenergy and Nanosystems of the Chinese Academy of Sciences and colleagues. The team reports that the device achieves high electrical conversion efficiencies for breezes of 4–8 m/s (14–28 km/h) and say that it could be used to generate electricity in everyday situations, where conventional wind turbines are not practical.

As the drive to develop renewable sources of energy intensifies, there is growing interest in harvesting ambient energy in everyday environments. From breezes along city streets, to the airflows created as we walk, the mechanical energy contained in ambient wind is abundant. The challenge is to harvest this energy in an efficient and practical way. This has proven difficult using existing technologies such as piezoelectric films, which operate at very low power outputs.

Yang’s team based their new design around two well-known phenomena in physics. The first is the Bernoulli effect, which causes the fluttering of two adjacent flags to couple. If separated by a very small gap, the flags will flutter in-phase, while at slightly larger separations, they flap out-of-phase, and symmetrically about a central plane. The second is the triboelectric effect – the familiar phenomenon behind the “static electricity” that is created when different objects are rubbed together and then separated – resulting in opposite electrical charges on the objects and a voltage between the two.

Two polymer ribbons

Using two polymer ribbons – one coated with a silver electrode and the other with the polymer FEP – Yang and colleagues combined these phenomena to create a “Bernoulli effect-dominated triboelectric nanogenerator”, or B-TENG. When subjected to a parallel airflow, the out-of-phase fluttering of the ribbons causes them to touch and separate periodically, resulting in a build-up of charge that can be used to generate an output voltage.

The researchers showed that their 3×8 cm device created usable electrical energy for wind speeds as low as 1.6 m/s, with conversion efficiencies exceeding 3.2% for speeds of 4–8 m/s. The device’s practicality was demonstrated by using it to illuminate 100 LEDs in a series circuit, integrating it into a self-powering thermometer and using it to charge a 100 µF capacitor to 3 V in 3 min. Together, these elements were incorporated into a self-powered pressure sensor for a pipeline, using lightweight and extremely low-cost materials.
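Those last figures correspond to only a tiny amount of stored energy; a quick back-of-the-envelope estimate (ours, using the standard capacitor energy formula) gives

$$E = \tfrac{1}{2}CV^{2} = \tfrac{1}{2}\times 100\ \mu\mathrm{F}\times(3\ \mathrm{V})^{2} \approx 0.45\ \mathrm{mJ},$$

which, delivered over three minutes, corresponds to an average charging power of roughly 2.5 µW.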

Yang’s team now hope to improve the B-TENG’s efficiency and make it even more compact – potentially enabling its integration with everyday devices. At the same time, they also hope to scale up the device to create kilowatt-scale generators, which could compete with traditional wind turbines. If successful, the B-TENG could be used in applications ranging from wearable electronics, which can be charged by the airflow generated by walking, to clean power generation in biodiverse areas, where spinning turbines can be harmful to wildlife.

The B-TENG is described in Cell Reports Physical Science.

Positronium formed during PET scans could detect hypoxic tumours

A variation on positron emission tomography (PET) offers a new way to diagnose hypoxia in tumours. Researchers at the University of Tokyo and Japan’s National Institute of Radiological Sciences demonstrated that positronium, which forms in tissues due to positron emission from a radiopharmaceutical agent, decays differently depending upon its chemical environment, and is especially sensitive to local oxygen saturation. This means that signs of tumour hypoxia could be spotted among the gamma rays collected routinely during PET imaging, providing clinicians with an additional source of information to guide treatment decisions.

Most positrons created during a PET scan are annihilated almost as soon as they are emitted: they lose energy through interactions with nearby molecules, then collide with electrons in those molecules to produce pairs of 511 keV photons. Some positrons hang around a little longer, however, and instead of annihilating the electrons that they encounter, they capture them, forming metastable positronium atoms.

When this happens, the positronium is created in one of two distinct configurations. The least stable is para-positronium (p-Ps), in which the spins of the electron and positron point in opposite directions. p-Ps atoms have a mean lifetime of only 125 ps, after which they decay into a pair of 511 keV photons. This process therefore adds to the near-immediate gamma signal produced by annihilation of those positrons that never form a positronium atom.

The other configuration is ortho-positronium (o-Ps), in which the spins of the electron and positron are parallel. Left alone, o-Ps would decay into three photons (with energies ranging from 0 to 511 keV) after a mean lifetime of 142 ns. This much longer period means that o-Ps atoms have more time to interact with their surroundings before they decay.

One of the routes open to o-Ps atoms is an interaction called spin exchange. In this process, the positronium’s electron switches with an electron of opposite spin in a nearby molecule. This converts the positronium atom into the less stable p-Ps form, hastening its decay.

The likelihood of spin exchange occurring depends on the availability of unpaired electrons in the vicinity of the o-Ps atoms. In tissues, such unpaired electrons are present primarily in oxygen molecules. This means that in oxygen-poor environments such as hypoxic tumours, more o-Ps atoms survive long enough to decay via the three-photon route. Measuring the timing and spectrum of the gamma rays emitted during the PET procedure should, therefore, yield information about the oxygen saturation.
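As a rough illustration of this mechanism (a toy model, not the analysis in the paper), the snippet below assumes that the effective ortho-positronium decay rate is its intrinsic rate plus a quenching term proportional to the local oxygen saturation. The quench constant is a made-up placeholder, and other in-matter processes such as pick-off annihilation, which also shorten the lifetime, are ignored for simplicity; the point is only to show how hypoxic and well-oxygenated environments would separate in effective lifetime.

```python
# Toy model (illustrative only): o-Ps decay rate = intrinsic rate + oxygen quenching.
# The quench constant below is an assumed placeholder, not a measured value, and
# pick-off annihilation in matter is deliberately neglected.
TAU_OPS_VACUUM_NS = 142.0      # intrinsic ortho-positronium mean lifetime (ns)
K_QUENCH_PER_NS = 0.002        # assumed quench rate at full O2 saturation (1/ns)

def effective_ops_lifetime(o2_saturation: float) -> float:
    """Effective o-Ps lifetime (ns) for a relative O2 saturation between 0 and 1."""
    decay_rate = 1.0 / TAU_OPS_VACUUM_NS + K_QUENCH_PER_NS * o2_saturation
    return 1.0 / decay_rate

for label, sat in [("hypoxic tissue", 0.05),
                   ("normoxic tissue", 0.3),
                   ("O2-saturated water", 1.0)]:
    print(f"{label:>20}: {effective_ops_lifetime(sat):6.1f} ns")
```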

As they report in Communications Physics, Kengo Shibuya and colleagues tested this principle by preparing samples of water saturated with either air, nitrogen or oxygen. Each sample also contained the unstable sodium isotope 22Na.

When 22Na undergoes beta decay, it simultaneously emits a high-energy gamma ray at 1.27 MeV. The researchers used this signal as the starter pistol for each measurement. In instances where the positron emission resulted in a positronium atom, the end of the measurement was marked by the detection of sub-511-keV photons announcing the positronium’s final decay.

By comparing the timing and energy of the photons emitted during millions of measurement intervals for the three samples, the team derived a linear relationship between oxygen saturation and positronium decay rate. They calculated that, in a clinical PET scanner, an acquisition time of around 30 min would yield enough measurements to distinguish a hypoxic tumour from normally oxygenated healthy tissue.

In terms of detector hardware, current PET devices are already suited to this task, although the researchers say that the scanners will need new timer systems and software.

“The performance required by the new timers is comparable to those already used in conventional PET,” says Shibuya. “Therefore, I think it will not be difficult for medical device manufacturers to install them.”

Finding the right radiopharmaceutical agent might be more challenging, however. Whereas PET imaging techniques employ pure positron emitters, positronium imaging can only work if, like 22Na, the radioisotope emits a gamma photon and a positron simultaneously. Unfortunately, the 2.6-year half-life of 22Na makes it unsuitable for the clinic, for which sources with half-lives on the order of hours or days are required.

Nanocrystals could drive explosive volcanic eruptions

The formation of nanometre-sized crystals can – even in low concentrations – temporarily change the viscosity of magma and lead to violent volcanic eruptions. That is the conclusion of geophysicists in the UK, Germany and France, who say that their research helps to explain how otherwise calm and predictable volcanoes can turn unexpectedly explosive.

There are essentially two types of volcanic eruption: explosive and effusive. In the former, viscous molten rock with a high silica content tends to trap volcanic gases more readily. This increases the pressure on the magma column such that it can ultimately lead to an explosion. Basaltic magmas, meanwhile, have a low silica content and tend to be runnier. This usually allows gas to escape gently, leading to effusive eruptions that produce lava domes and flows of the kind seen on Hawaii.

This distinction is not clear cut, however, and basaltic magmas can sometimes unexpectedly produce explosive events. This happened at New Zealand’s Mount Tarawera in June 1886, in an eruption that destroyed ten Māori settlements and killed around 120 people.

Explosive fragmentation

A common explanation for this unexpected explosivity is the growth in molten rock of microlites, which are small crystals ranging in size from 1 to 100 microns. In sufficient volumes, these can lock up magma flow, leading to explosive fragmentation. This occurs when magma flow switches from being a continuous liquid body containing gas and crystals to a turbulent flow dominated by gases that contain fragments of molten rock.

There is one problem with this theory, however: many such unexpected explosions (including that of Indonesia’s Mount Tambora in 1815 – the most powerful eruption in human history) occurred even though their microlite concentrations seemingly fell short of the roughly 30% of total volume required.

In a new study, volcanologist Danilo Di Genova of the University of Bristol and colleagues considered the effects of nanolites, which are smaller crystals that are the precursors of microlites. Using scanning transmission electron microscopy and Raman spectroscopy, the team revealed the presence of previously unidentified nanolites (20–50 nm in size) in ash samples from three low-viscosity eruptions: Italy’s Mount Etna (122 BC); the Colli Albani volcano (about 37,000 years ago); and Mount Tambora.

Rapid cooling

The team then created their own nanolites in the lab by melting and then cooling volcanic rock samples. Synchrotron-based X-ray spectroscopy revealed how nanolites grow under rapid cooling regimes of around 10–20 degrees per second. This cooling is typical of that experienced by magma that is ascending quickly at metres-per-second speeds. Further experiments with a synthetic magma, alongside modelling, suggest that the after-effect of so-called undercooling (where a magma, unable to solidify thanks to its dissolved water content, rapidly crystallizes once it degasses) provides the ideal scenario for nanolite growth.

Furthermore, the team found that even in low nanolite concentrations of 5% by volume, the spacing between each nanolite can be very small. “It is already accepted that even a low viscosity melt struggles to flow if it becomes locked by a network of crystals, but this usually requires a fraction of well over 30%,” Di Genova explained. “Nanolites also have a propensity to agglomerate together and form new and larger solid objects. During this process, volcanic liquid remains trapped inside these agglomerates to effectively increase the fraction of solids in the magma.”

Together, he explained, these processes account for how nanolites can increase magma viscosity for a short time as they form – leading to the previously inexplicable “sudden switch in behaviour” that can cause calm volcanoes “to occasionally present us with a deadly surprise”.

“Nanoparticles suspended in a very low viscosity fluid appear to have a great effect increasing the bulk viscosity of the suspension even at low concentrations – a factor which could, if confirmed for magma viscosity, enhance explosivity during eruptions,” comments Francisco Cáceres, an experimental volcanologist from the Ludwig-Maximilians-University of Munich. He adds, “This work opens a window to new research that needs to be performed on natural magma, in order to test this claim”.

University of Cambridge volcanologist Marie Edmonds adds, “This work will have very important implications for our understanding of basaltic explosive eruptions but also more broadly for magma rheology, nucleation and growth of crystals and bubbles; and the strength of magma under shear. It will be important in the future to understand whether wet magmas – those containing lots of water, such as in subduction zones – behave in the same way as those in the experiments, which were dry.”

With their initial study complete, the researchers are moving to model the explosive impact of nanolites on real-life low-silica volcanic settings. When travel restrictions lift after the pandemic, they will be hunting for more examples of nanolites in volcanic rock from across the globe.

The research is described in Science Advances.

Physicists place fresh limits on gravity’s role in wavefunction collapse

Our everyday experience shows that the macroscopic world is different from the quantum one. Unlike quantum particles, the objects in our daily existence do not, for example, exist in a superposition of different states. Traditionally, physicists explain the transition between the two worlds by saying that the quantum superposition principle, which is the building block of quantum theory, breaks down when measurements are performed. The wavefunction of this system is then said to “collapse” with the measurement.

Why such a collapse should happen is still unclear, but one model – developed by the mathematical physicist Roger Penrose, and drawing on earlier work by Lajos Diósi – suggests that gravity might play a role. Researchers in Italy, Germany and Hungary have now set important constraints on this so-called Diósi–Penrose model, in a work that could shed fresh light on a long-standing puzzle in quantum theory: why don’t the inherent properties of microscopic systems carry over to macroscopic ones?

Gravity-related wavefunction collapse

In 1996, Penrose suggested that the collapse of quantum superpositions might be caused by the curvature of space–time – that is, by gravity. The effects of gravitation, he reasoned, are negligible at the level of atoms, but increase dramatically at the level of macroscopic objects. Penrose also provided a formula to compute the decay of the superposition, using methods that were similar to Diósi’s earlier work.

In this Diósi–Penrose (DP) model, the gravity-related wavefunction collapse, which depends on the effective size of the mass density of particles in the superposition, induces a random motion of the particles. When the particles are charged (like protons and electrons), this “jitter” produces a characteristic and very faint emission of electromagnetic radiation.

A team of researchers led by Angelo Bassi of the University of Trieste has now computed the rate at which this radiation is emitted by solving the main equation of the DP model. They did this by calculating the number of photons emitted per unit time and unit frequency, integrated over all spatial directions, in the wavelength range λ = 10⁻⁵ to 10⁻¹ nm, which corresponds to energies of 10–10⁵ keV. In a further development, a team of experimentalists led by Catalina Curceanu and Matthias Laubenstein from the INFN and Kristian Piscicchia from the Enrico Fermi Research Center, both in Italy, went on to measure this calculated radiation rate in an experiment at the INFN-LNGS Gran Sasso underground laboratory in Assergi.
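The end points of that wavelength range follow from the standard photon relation E = hc/λ, with hc ≈ 1.24 keV nm:

$$\lambda = 10^{-1}\ \mathrm{nm}\ \Rightarrow\ E \approx 12\ \mathrm{keV},\qquad \lambda = 10^{-5}\ \mathrm{nm}\ \Rightarrow\ E \approx 1.2\times10^{5}\ \mathrm{keV}.$$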

Dedicated underground experiment

According to study lead author Sandro Donadi of the Frankfurt Institute for Advanced Studies, the experiment was designed to be sensitive to the faint X- and gamma-ray radiation that the DP model predicts. To this end, the researchers used a high-purity germanium detector to measure the radiation spectrum at the point at which theory predicts it should be enhanced. They also constructed their whole setup using materials with very low radioactivity and enveloped it in a complex system of shielding. Finally, they carried out their experiments in an underground facility specially built to have low background radioactivity. “All this effort was intended to minimize background noise sources that can mimic the collapse-related radiation that we are looking for,” Donadi says.

In addition to these measures, the researchers carefully characterized the background spectrum produced by known natural contamination sources that cannot be eliminated. By combining these experimental precautions with refined theoretical and statistical analyses of their data, they were able to set a lower bound on the effective size of the nuclei’s mass density. This lower bound is equivalent to about 1 Å, or approximately three orders of magnitude larger than previously reported bounds.

Not ruled out yet

“Our result suggests that more work needs to be done to relate gravity to wavefunction collapse, since it excludes the most natural (‘parameter-free’) version of the DP model,” Donadi tells Physics World. “However, it would be premature to dismiss gravity’s role at this stage.”

Donadi adds that Diósi and Penrose put forward good reasons for believing that there is a tension between the quantum superposition principle and general relativity. “We now intend to investigate possible solutions to this by developing refined wavefunction collapse models based on our recent findings,” he says.

In the short term, the researchers plan to apply a similar type of analysis to other collapse theories, such as the GRW model put forward by Giancarlo Ghirardi, Alberto Rimini and Tullio Weber, and another model known as Continuous Spontaneous Localization (CSL). These other models are more difficult to falsify, Donadi explains, because of the different mathematical relationships between the models’ parameters and the expected radiation emission rate.

“These studies will ultimately push us in the direction of realizing new, more sensitive experimental setups based on new vanguard radiation detectors, data acquisition systems and analysis methods,” Curceanu adds.

The work is detailed in Nature Physics.

Fundamental constants set upper limit for the speed of sound

The upper limit on the speed of sound in solids and liquids depends on just two dimensionless quantities – the fine structure constant and the proton-to-electron mass ratio. That is the surprising conclusion of physicists in the UK and Russia, who calculate that the speed limit is twice that of the highest speed of sound measured to date.

Sound propagates as a series of compressions and rarefactions in an elastic medium, with its speed varying significantly from one material to another. Typically, sound is slowest in gases, faster in liquids and faster still in solids. In air at ambient conditions sound travels at about 340 m/s, while in water it reaches about 1500 m/s and in iron more than 5000 m/s.

These differences are due to the way that a passing wave disturbs atoms and molecules. Thought of as hard spheres linked to one another with springs, the particles are knocked forward by their neighbours in the direction of sound propagation and in turn go on to nudge other neighbouring particles ahead of them. But this transmission is delayed by inertia, meaning that waves move faster when the particles are less massive.

Stiffer links mean less delay

However, stiffer links also mean less delay – each particle having to move less before it triggers the movement of its neighbour. This is why sound travels faster through iron than it does through water, for example.

Expressed mathematically, sound’s longitudinal speed in a material is equal to the square root of that material’s elastic modulus – which quantifies its resistance to compression – divided by its density.
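In symbols, for a material with elastic modulus M and density ρ, that statement reads

$$v=\sqrt{\frac{M}{\rho}}.$$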

In the latest research, Kostya Trachenko of Queen Mary University of London and colleagues at the University of Cambridge and the Russian Academy of Sciences’ Institute for High Pressure Physics set out to recast this formula in terms of fundamental constants. Their first step was to link a material’s bulk modulus with the energy that binds its atoms together, given that greater stiffness implies a higher binding energy. They then assumed that the latter term could be equated to the Rydberg energy, which is the characteristic binding energy in condensed matter.

Eye-catching formula

The resulting formula proved eye-catching, particularly when expressed in terms of the fine-structure constant – which sets the strength of fundamental electromagnetic interactions – and then written specifically as an upper limit on the speed of sound. The upper limit comes about when the mass of the atoms in question is the lowest mass possible – that of the hydrogen atom. In that case, the velocity of sound can simply be expressed in terms of the proton-to-electron mass ratio (inverted, halved and square-rooted), the fine-structure constant and the speed of light in a vacuum.
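Spelled out, that prescription corresponds to the compact expression

$$\frac{v_u}{c}=\alpha\left(\frac{m_e}{2m_p}\right)^{1/2},$$

where α is the fine-structure constant, m_e and m_p are the electron and proton masses, and c is the speed of light in a vacuum.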

Inserting the relevant numbers into their formula, the researchers worked out that the highest possible speed of sound in solids and liquids (exposed to moderate pressures) comes in at a little over 36,000 m/s. That is still nearly 10,000 times slower than light’s upper limit, but about twice as high as the fastest sound wave ever recorded – around 18,000 m/s, obtained in (very stiff) diamond.
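A minimal numerical check (ours, not the authors’ code), using standard values of the constants in the expression described above, reproduces that figure:

```python
# Quick numerical check of the upper bound described above:
# v_u / c = alpha * sqrt(m_e / (2 * m_p)), evaluated with standard constant values.
ALPHA = 7.2974e-3   # fine-structure constant
M_E = 9.1094e-31    # electron mass (kg)
M_P = 1.6726e-27    # proton mass (kg)
C = 2.9979e8        # speed of light in vacuum (m/s)

v_upper = ALPHA * (M_E / (2 * M_P)) ** 0.5 * C
print(f"Upper bound on the speed of sound: {v_upper:,.0f} m/s")  # roughly 36,000 m/s
```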

To establish whether their equation is broadly in line with measured velocities, the researchers compared its predictions against the experimentally obtained speed of sound in 36 different elemental solids. To do so they used a log-log plot to show how these speeds vary with the solids’ atomic mass and on the same graph drew the straight sloping line generated by their equation – which terminates at the high end with hydrogen. They conclude that the experimental data points more or less follow their sloping line, even if a coefficient they dropped to simplify their equation means that not all are that close.

Metallic hydrogen

As an additional check on their work, Trachenko’s team used density functional theory to calculate the speed of sound through metallic hydrogen from first principles. When exposed to very high pressures, hydrogen becomes a molecular solid, and at pressures above about 400 GPa it is then predicted to become an atomic metal. It is this metallic state that should hold the speed record. Modelling hydrogen in these conditions, they found that sound should propagate at up to 35,000 m/s – faster than in any other material, but still below their upper limit.

This work follows similar research that Trachenko and fellow group member Vadim Brazhkin published earlier this year, which revealed a universal lower limit on viscosity expressed in terms of the proton-to-electron mass ratio and Planck’s constant. By incorporating the fine-structure constant, they ended up with an expression containing two fundamental constants whose fine tuning yields a stable proton and also allows stars to ignite and produce heavier elements, enabling, as they put it, “the existence of solids and liquids where sound can propagate”.

Kamran Behnia of PSL University in Paris describes the work as “simultaneously simple and deep”. He says he had not expected to be able to work out the rough speed of sound in a particular material “by hand-waving arguments” – particularly, he adds, when the resulting formula involves only fundamental constants. As he points out, quantum mechanics is not needed to explain the propagation of sound – even if quantum mechanics is what makes the solid state possible. “This is why the main message of this paper comes as a surprise,” he says.

The research is described in Science Advances.

Searching for microfibres in the snow, differential equations make the world go round, Isaac Newton first edition found while house clearing

I grew up in Canada, which has two things in abundance: electric clothes dryers and snow. So, I was immediately drawn to a paper in PLOS ONE by Kirsten Kapp and Rachael Miller that quantifies the number of microfibres emitted by tumble dryers by extracting fibres from snow near dryer exhaust vents. Their experiments were done in two snowy locations – Idaho and Vermont – using two different models of domestic dryer.

Kapp and Miller began their experiment with two pink polyester fleece blankets that were soaked and then placed in the dryers. The blankets were not washed to avoid introducing extra variables to the experiment. After the blankets were dried, the surface layer of snow was gathered from a dozen or so different locations near the dryer vents. The snow was then melted, and the fibres counted.

The duo found that the greatest concentration of microfibres was within about 1.5 m of the vents. However, several locations about 9 m from the vents also had significant numbers of microfibres.

There is growing concern about microfibre pollution – and it is already well known that wastewater from clothes washing machines is a major source of microfibres in the environment. “The goal of this study was to break the story about dryers and microfibre emissions, rather than answer all the questions,” says Miller. “Our intent is that these data now inspire researchers to investigate exactly what factors concerning dryer design, installation and settings increase or reduce shedding, inspire the white goods industry to include dryers in their discussions about microfibre pollution, and educate consumers about the potential effects that dryer use has on our environment.”

“I find it hard to think of anything that is more relevant to understanding how the world works than differential equations.” So says the physicist Sabine Hossenfelder in the introduction to her latest video “What are differential equations and how do they work?” which you can watch above.

Although Isaac Newton developed calculus – which underpins differential equations – our modern form of calculus is largely absent from his ground-breaking Mathematical Principles of Natural Philosophy. A rare first English-language edition of the book, printed in 1729, has been sold at auction for £22,000. Incredibly, the two-volume edition was discovered during a lockdown clear-out at a house in South Wales.

“It was on their shelves and they were looking for things to sell while they were in lockdown,” says Chris Albury of Dominic Winter Auctioneers in Cirencester, who according to the BBC “almost fell off” his chair when he realized it was “the greatest work of science in the English language”.
