
Micro-CT enables first non-destructive characterization of asthma medicines


Thirty years ago, over 180 million people worldwide were affected by asthma. Five years ago, the figure was approximately 350 million. It’s estimated that around 400 million people will be afflicted by 2025.

Asthma’s prevalence is increasing. Medicines delivered via inhalers effectively manage many people’s symptoms. But we still don’t know much about how an inhaler’s life-altering medicines look and behave at a fundamental level.

A group of UK researchers can now examine the materials in dry powder inhaler medicines more accurately and in more detail than ever before. Using a non-destructive imaging technique called X-ray micro-computed tomography (XCT), they have arrived at the first three-dimensional portrait of materials in dry powder inhaler medicines.

X-ray micro-CT of asthma medicines

Why dig into an asthma medicine’s physical structure? Because a medicine’s behaviour, which is dictated by its physical properties, impacts performance. For example, smaller particles, which can penetrate deep into the lungs, interact strongly with one another within inhalers, making it more difficult for them to aerosolize and reach the lungs to begin with. Other important properties include the particles’ shapes and roughness, and relationships between particles in space.

“Measurements [of these physical properties] are made for all inhaled formulations at some point during [the pharmaceutical] development, characterization and quality control process,” says Darragh Murnane, professor of pharmaceutics at the University of Hertfordshire. “The difference is that with XCT we don’t need to break up a tablet, we don’t need to open a capsule, we don’t need to spray an aerosol. We’re able to look inside a medicine as it’s shipped from the factory.”

Murnane and colleagues used a commercial XCT scanner that operates much like a medical CT system. A radiation source produces a cone of X-rays that hit a sample of the medicine. A shadow image is recorded on a detector, and by rotating the sample, images from different angles are collected and then reconstructed to form a volume image. From this, the researchers can make measurements of particle characteristics on sub-micrometre scales.
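
To get a feel for how such a reconstruction works, here is a minimal two-dimensional analogue in Python using scikit-image’s Radon-transform tools. It only sketches the rotate-project-reconstruct principle; the commercial instrument performs full cone-beam 3D reconstruction:

```python
# Minimal 2D analogue of XCT: simulate "shadow images" (projections) of a
# phantom at many rotation angles, then reconstruct the slice by filtered
# back-projection. The real instrument does cone-beam 3D reconstruction;
# this only illustrates the rotate-project-reconstruct principle.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

phantom = shepp_logan_phantom()          # stand-in for a slice of the sample
angles = np.linspace(0.0, 180.0, 360, endpoint=False)

sinogram = radon(phantom, theta=angles)  # one projection per rotation angle
recon = iradon(sinogram, theta=angles, filter_name="ramp")

error = np.sqrt(np.mean((recon - phantom) ** 2))
print(f"RMS reconstruction error: {error:.4f}")
```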

Validating the imaging technique

Dry powder inhalers contain a mixture of active ingredients and inert carriers that help disperse the active ingredients when a patient inhales. However, selecting which of these materials to use in the first XCT experiments wasn’t straightforward. Materials had to be large enough to allow the researchers to optimize the imaging and analysis procedures, and strong enough to resist damage in the presence of X-radiation.

Enter the so-called tablet-grade carriers, which contain particles slightly bigger than a dust mite, around 250 µm in diameter.

XCT images revealed the tablet-grade carrier particles’ distinct shapes, while differences in greyscale colours, or contrast, in the images reflected the different particles’ atomic weights (with brighter pixels signifying denser regions).

Even though XCT studies materials without altering or destroying them, the researchers found that their tablet-grade carrier results matched those obtained with established characterization techniques. The differences that did appear, such as undisturbed particle orientations and comprehensive views of individual particles, highlighted XCT’s advantages.


Satisfied, the researchers moved on to study a class of carriers that are more representative of those found in dry powder inhaler medicines.

Tinier particles, new challenges

Until recently, XCT systems could not distinguish between tiny, lightweight particles and air. Technological advances, including X-ray optical lenses and shorter distances between scanner components and the medicines, have since helped maximize image contrast.

Still, the inhalation-grade carriers presented new challenges for XCT. The particles in these carriers are more complex than those in tablet-grade carriers. They are smaller in size, hovering around 100 µm, about the width of a human hair. They also have a much wider range of sizes, including a proportion of fine particles that are 10 µm or smaller.

“Consider a pixel size of 1 micrometre. A cubic particle with a 10-micrometre edge length would be spanned by 1000 voxels [equivalent to pixels in three-dimensional space], but a 100-micrometre cubic particle would be spanned by one million voxels,” explains Parmesh Gajjar, postdoctoral researcher at the University of Manchester.

That means the fine particles in inhalation-grade carriers would occupy far fewer voxels per particle in an XCT image than the larger carrier particles, making it difficult for researchers to identify individual particles. Despite these theoretical concerns, the team successfully separated and characterized individual inhalation-grade carrier particles.
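
Gajjar’s arithmetic is easy to reproduce: at a fixed voxel size, the number of voxels spanning a cubic particle grows as the cube of its edge length. A quick sketch (the 250 µm entry corresponds to the tablet-grade carriers described earlier):

```python
# Voxel counts at a fixed 1 um voxel size: the number of voxels spanning a
# cubic particle grows as the cube of its edge length, so fine particles are
# represented by far fewer voxels and are much harder to segment.
voxel_um = 1.0
for edge_um in (10.0, 100.0, 250.0):   # fine particle; inhalation-grade; tablet-grade
    n_voxels = (edge_um / voxel_um) ** 3
    print(f"{edge_um:>5.0f} um cube -> {n_voxels:12,.0f} voxels")
```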

Bolstered by this success, the researchers undertook their greatest challenge yet: imaging a blend of tablet-grade carrier and active ingredient, a mixture intended to mimic those in dry powder inhaler medicines.

Here, the researchers hit a wall. Initially, their XCT images didn’t provide enough contrast for them to separate and measure individual particles. However, by using different algorithms developed by their industry research partners, they were able to separate individual particles in the mixture. They presented these new results at the Digital Respiratory Drug Delivery meeting earlier this year.

“Taking the image, splitting the image, and identifying the different particles from within the image … was the main challenge from my point of view,” says Gajjar. “Because we’re looking at a carrier with a very low atomic weight, it’s hard to get [that] contrast.”

It’s a small world after all: XCT in the future

Asthma inhalers deliver medicine directly to the lungs; however, understanding this process and developing effective medicines is “extremely challenging”, Murnane says.

XCT is the first imaging technology to produce 3D images of these medicines non-destructively, helping researchers understand how dry powder inhaler medicines behave during manufacture and how they aerosolize for inhalation into the lungs.

Even so, it takes hours for well-trained workers to take and analyse XCT images, and equipment is not yet commonplace. Because of these limitations, the researchers believe XCT might initially be used by a select number of laboratories to help improve the results of more traditional particle characterization techniques. XCT could also help characterize medicines that are under development, when active ingredients are often scarce, examine the relationships between different forms of asthma medicines, such as powders and aerosols, or compare medicines’ generic and name-brand forms.

“I don’t think this work is going to change the types of medicines that we formulate,” Murnane says. “But what it will do is allow us to understand the materials that we use much more robustly.”

The study is published in European Journal of Pharmaceutics and Biopharmaceutics.

Make spinning sprinklers and balloon rockets at home, lidar unveils huge Mayan structure, how to win Come Dine with Me

Are you looking for a fun physics activity to do with the kids this weekend? The Institute of Physics’ Melissa Brobby has just the thing – a self-spinning water sprinkler made from a milk carton. In the above video, she shows you how to make a sprinkler and tells you about the physics that makes it spin.

The video is part of the Institute’s Do Try This at Home series, which aims to make it easy for parents and carers to get their children excited about physics. If sprinklers are not your cup of tea, how about a balloon rocket – as described below by the Institute’s Mikey Jarrell.

If I had to name a physics-related technique that is most like magic, I would have to say lidar. This involves firing laser pulses at an object or landscape of interest and using the reflected light to create a 3D map of its topography. The seemingly magic bit is that lidar can often see through vegetation – revealing to a lidar system flying overhead what lies on a forest floor. This has proven very useful for archaeologists, who have used lidar to make some spectacular finds.
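
Underneath the magic, lidar ranging is simple time-of-flight arithmetic: each reflected pulse corresponds to a distance of half the round-trip travel time multiplied by the speed of light. A toy sketch, with pulse timings invented for illustration:

```python
# Time-of-flight ranging: each returned pulse implies a distance of c * t / 2
# (half the round trip). Multiple returns from one pulse are what let an
# airborne lidar record both the canopy top and the forest floor beneath it.
C_M_PER_S = 299_792_458.0

def tof_range_m(round_trip_s: float) -> float:
    return C_M_PER_S * round_trip_s / 2.0

# Hypothetical return times from a single pulse, invented for illustration.
for label, t_s in (("canopy", 6.67e-6), ("ground", 6.87e-6)):
    print(f"{label}: {tof_range_m(t_s):.0f} m")
```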

Now, lidar has revealed the largest and oldest Mayan structure known to archaeologists. The huge rectangular elevated platform was built between 1000–800 BC in Mexico’s Tabasco state and was found by Takeshi Inomata of the University of Arizona and colleagues. The discovery is described in Nature, where the team also explains how it used radiocarbon dating to work out the age of the structure.

Das Perfekte Dinner is a reality TV programme like the UK’s Come Dine with Me, in which people take turns hosting a dinner party for each other throughout a week. Each dinner is scored by the other contestants on the evening it is served.

Physicists Peter Blum and Marc Wenskat at the University of Hamburg have analysed results from the German show and have concluded that a contestant’s chances of winning are boosted if they host their dinner later in the week.

In a preprint uploaded to arXiv, they say that their finding is an example of the “secretary problem” that arises when things are rated consecutively using the same criteria. Apparently, the way those criteria are applied changes with each successive scoring, skewing the results.
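
One way to see how sequential scoring can skew results is a toy simulation: a judge who has seen only a few dinners cannot sensibly award the top of the scale, because a better dinner may still be coming. The model below, which scores each dinner by its rank among those seen so far, is purely illustrative and is not the analysis in Blum and Wenskat’s preprint:

```python
# Toy model of sequential-scoring bias: score each dinner by its rank among
# the dinners seen so far, mapped into 0-10 with headroom left for unseen
# dinners. Later hosts can earn higher scores, so winners skew late.
import random
from collections import Counter

N, TRIALS = 5, 100_000
wins = Counter()
rng = random.Random(1)

for _ in range(TRIALS):
    qualities = [rng.random() for _ in range(N)]
    scores = []
    for i in range(1, N + 1):
        seen = qualities[:i]
        rank = sorted(seen).index(qualities[i - 1]) + 1  # 1 = worst seen so far
        scores.append(10.0 * rank / (i + 1))             # leave headroom for the unseen
    winner = max(range(N), key=scores.__getitem__)       # index of the highest score
    wins[winner + 1] += 1

for day in range(1, N + 1):
    print(f"host on day {day}: wins {100 * wins[day] / TRIALS:.1f}% of games")
```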

Thermogalvanic hydrogel cools down electronic devices

A new thermogalvanic hydrogel can simultaneously cool down electronic devices and convert the waste heat that they produce into electricity. The material, developed by a team of researchers at Wuhan University in China and the University of California Los Angeles (UCLA) in the US, decreases the temperature of a mobile phone battery by 20 °C and retrieves 5 μW of electricity at fast discharging rates. This reduced working temperature ensures that the battery operates safely, while the amount of electricity harvested is enough to power the hydrogel’s cooling system.

Many electronic devices – including solar cells and light-emitting diodes, as well as phone batteries – generate significant amounts of heat during normal operation. Not only is most of this heat wasted, it can also lead to localized overheating, which decreases the devices’ efficiency and lifespan. In some cases, the excess heat can even cause devices to explode or catch fire.

Traditional ways of recovering waste heat, such as thermoelectric modules, involve adding extra thermal resistance. Unfortunately, this additional resistance prevents heat from dissipating, and thus increases the temperature of the electronic device’s core components. Removing heat tends to consume energy, especially if additional equipment like fans or pumps are required. This apparent conflict means that while researchers have previously succeeded in recovering waste heat from electronic devices, and in efficiently removing it, they have never accomplished both at the same time.

Separate thermodynamic cycles

Thermogalvanic cells, which consist of an electrolyte solution sandwiched between two inert electrodes, show promise in reconciling the competing tasks of removing heat and converting it to electricity. In such a cell, electron-transferring (redox) reactions convert heat energy into electricity. Since the solvent in the cell’s electrolyte solution is only present to support ion transport and electron transfer, it can undergo a separate thermodynamic cycle without affecting the heat-to-electricity conversion process. Hence, the water molecules in aqueous electrolytes can be allowed to evaporate and condense, completing a cycle of heat absorption and release that cools down the cell even as the thermal-electric conversion process continues.
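
The electrical side of such a cell can be captured in a back-of-envelope estimate: the open-circuit voltage scales with the temperature difference through the redox couple’s temperature coefficient, and a matched load extracts at most V²/(4R). The sketch below uses a temperature coefficient typical of ferri/ferrocyanide couples and an assumed internal resistance; none of the numbers are taken from the paper:

```python
# Back-of-envelope thermogalvanic output: open-circuit voltage V = alpha * dT,
# where alpha is the redox couple's temperature coefficient, and a matched
# load extracts at most P = V^2 / (4R). All values below are illustrative
# assumptions, not measurements from the paper.
alpha_v_per_k = 1.4e-3    # ~1.4 mV/K, often quoted for ferri/ferrocyanide couples
delta_t_k = 20.0          # temperature difference of the order reported
r_internal_ohm = 40.0     # assumed internal resistance of the cell

v_oc = alpha_v_per_k * delta_t_k
p_max_w = v_oc ** 2 / (4.0 * r_internal_ohm)
print(f"open-circuit voltage: {v_oc * 1e3:.0f} mV, "
      f"matched-load power: {p_max_w * 1e6:.1f} uW")
```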

A team led by Kang Liu of Wuhan University and Jun Chen of UCLA’s Department of Bioengineering has now developed a hydrogel film to accomplish this task. The hydrogel is based on a polyacrylamide framework infused with potassium, lithium and bromide ions, as well as the ferricyanide and ferrocyanide ions Fe(CN)₆³⁻ and Fe(CN)₆⁴⁻.

When heated, the ferricyanide and ferrocyanide ions transfer electrons between the cell’s electrodes, generating electricity. At the same time, confined water in the hydrogel is allowed to evaporate freely, which removes a large amount of heat without affecting the thermal-electric conversion process. The positive lithium and negative bromide ions serve to control the system’s moisture balance, facilitating water absorption from the surrounding air and thus “regenerating” the hydrogel.

Battery cooling

To show that their new hydrogel film could cool a real-world device, the researchers attached it to a mobile phone battery during fast discharging at 2.2 C (where C is a measure of the rate at which a battery is discharged relative to its maximum capacity). They found that some of the waste heat was converted into 5 μW of electricity and that the temperature of the battery decreased by 20 °C.
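
The C-rate translates directly into current and discharge time: at n C, a battery delivers n times its nominal capacity as current and empties in 1/n hours. A quick sketch, assuming a typical 3000 mAh phone battery (the capacity of the battery used in the study is not quoted here):

```python
# C-rate arithmetic: discharging at n C means a current of n times the nominal
# capacity and a full discharge in 1/n hours. The 3000 mAh capacity is an
# assumed, typical phone-battery value, not a figure from the paper.
capacity_mah = 3000.0
c_rate = 2.2

current_a = capacity_mah * c_rate / 1000.0   # ~6.6 A of discharge current
time_to_empty_min = 60.0 / c_rate            # ~27 minutes to full discharge
print(f"current: {current_a:.1f} A, time to empty: {time_to_empty_min:.0f} min")
```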

The researchers found that their film, which measured around 12 x 30 x 3.6 mm, is robust, with a mechanical strength of 0.24 MPa. They also showed that it can be stretched up to 2–3 times its original length without suffering any damage. Full details of the new thermogalvanic hydrogel are reported in Nano Letters.

Exotic radioactive molecules could reveal physics beyond the Standard Model

The first spectroscopic study of radium monofluoride suggests that the radioactive molecule could be used to perform high-precision tests of the Standard Model of particle physics. The study was done by an international team of physicists working in the ISOLDE lab at CERN and could lead to a new upper limit being placed on the electric dipole moment of the electron – which could help explain why there is much more matter than antimatter in the universe.

Atomic and molecular spectroscopy allows physicists to make extremely precise measurements of some fundamental properties of both electrons and nuclei. As a result, spectroscopy offers a way of determining whether a particle like the electron conforms to the Standard Model of particle physics.

Radium monofluoride is a molecule of particular interest to physicists because in certain isotopic versions of the molecule, the radium nucleus is deeply asymmetrical – having a pear-shaped mass distribution. This, and the high mass of radium, means that it is ideal for studying the fundamental properties of the bound electrons – including whether the electron has an appreciable electric dipole moment.

Time reversal symmetry

It is well known that the electron has a magnetic dipole moment, which is a result of the particle’s “spin”, or intrinsic angular momentum. However, time reversal symmetry – a tenet of the simplest version of the Standard Model – forbids the electron from also having an electric dipole moment.

While more complicated versions of the Standard Model allow the electron to have an extremely small electric dipole moment, measuring a substantially higher value could point towards new physics beyond the Standard Model. This would be of great interest to cosmologists because it would reveal a fundamental symmetry breaking in the early universe that could explain why there is much more matter than antimatter in the cosmos.

The short-lived radium monofluoride molecules were created at ISOLDE, which produces beams of exotic radioactive particles that can be ionized and trapped with electromagnetic fields for further study. The team used the Collinear Resonance Ionization Spectroscopy (CRIS) instrument on ISOLDE, which enabled them to study even very small quantities of the molecules with high precision.

Short-lived molecules

The results provide the first spectroscopic information on radium monofluoride, including isotopologues – molecules that differ only in their isotopic composition – composed of radium isotopes with half-lives as short as a few days.

Writing in Nature, nuclear physicist Ronald Garcia Ruiz and colleagues describe how they determined that the molecules have energy levels that should allow them to be laser cooled to temperatures just above absolute zero. This is a necessary condition to allow for the extremely high-precision measurements needed to find deviations from the Standard Model. Garcia Ruiz, who works at the Massachusetts Institute of Technology (MIT) and CERN, is currently setting up a new collaboration between MIT and ISOLDE, to reignite the quest to measure the electron’s electric dipole moment.

“We want to narrow the gap further between our most sensitive measurements, and the theoretically predicted value of the dipole moment,” says Gerda Neyens, a nuclear physicist at the KU Leuven in Belgium and ISOLDE’s head of research. “The value in the Standard Model is extremely small, and way out of the current experimental range. But by closing in we can already constrain certain theories that predict much larger values.”

New technique uses food residues to date prehistoric pottery

Tiny traces of food left in ancient clay pots can now be used to date archaeological objects thanks to a novel combination of NMR spectroscopy and accelerator mass spectrometry (AMS). The new technique, developed by researchers at the University of Bristol, UK, overcomes previous challenges associated with dating pottery from the prehistoric era.

All living organisms absorb radioactive 14C from the atmosphere. Once an organism dies, radioactive decay causes the amount of this isotope present in its body to decrease at a known rate. Measuring the residual levels of 14C in an object made from formerly living materials therefore enables scientists to determine how old it is.
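
That known rate is set by radiocarbon’s half-life of about 5730 years, so an object’s age follows directly from the fraction of 14C it retains. A minimal sketch of the relation:

```python
# Radiocarbon age from the surviving 14C fraction: N(t) = N0 * 2^(-t / T_half),
# so t = -T_half * log2(N / N0), with T_half ~ 5730 years for 14C.
import math

T_HALF_YEARS = 5730.0

def radiocarbon_age(fraction_remaining: float) -> float:
    """Age in years implied by the measured 14C fraction."""
    return -T_HALF_YEARS * math.log2(fraction_remaining)

# A residue retaining about 51% of its original 14C is roughly 5500 years old.
print(f"{radiocarbon_age(0.514):.0f} years")
```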

AMS radiocarbon dating is one of the most widely employed methods for estimating the age of archaeological objects up to 50 000 years old. It is mainly applied to samples of charred plant remains and bone collagen, and its adoption has transformed not only archaeology but also many other research disciplines, including climate science. However, objects primarily made from non-living materials, such as clay pots, are far harder to date using AMS – a distinct disadvantage, since pieces of pottery, or sherds, are among the artefacts most commonly recovered from archaeological sites.

Analysing individual fatty acids in food residues

A team led by Richard Evershed has now overcome this problem by analysing food residues that have been absorbed into (and are thus protected by) the clay matrix in pottery. These residues are typically left behind by cooking meat or milk, and they contain high amounts of palmitic (C16:0) and stearic (C18:0) fatty acids. Indeed, these carbon-bearing compounds are often present in concentrations as high as milligrams per gram of clay.

The researchers isolated the fatty acids by cleaning sherds of cooking pots and then grinding them to a powder, which opens up the surface where the fatty materials (lipids) are preserved. Next, they extracted the lipids with organic solvents and separated the individual compounds in them using preparative capillary gas chromatography.


Evershed and his team checked the purity of their compounds using high-field NMR spectroscopy and mass spectrometry techniques before placing them in a tin capsule and burning them to make a graphite target. They then transferred this target to an accelerator mass spectrometer and used the instrument (which the lab acquired in 2016) to count the number of 14C atoms within the target. “Bringing together the state-of-the-art NMR to check the purity of the isolated compound and the latest AMS instrument was the major step forward that made these measurements possible,” Evershed says.

Neolithic vessels

The first samples Evershed’s team studied came from Neolithic vessels unearthed at archaeological sites in Somerset, UK; the World Heritage site of Çatalhöyük in Anatolia, Turkey; Lower Alsace in France; and Takarkori in the Sahara region of southwest Libya. Previous dating efforts based on pottery styles suggested that these vessels range in age from around 5500 to over 8000 years old, and the team found that their measurements agreed with these earlier estimates – at least to within a few decades.

Spurred on by this result, the team then analysed a collection of pottery recently excavated by workers from the Museum of London Archaeology in advance of building works there. These samples, which include 436 fragments from at least 24 different bowls, cups and pots, were thought to come from the Early Neolithic period, but their exact age was unknown.

Radiocarbon measurements on milk fats in these vessel fragments revealed that the pottery was 5500 years old. “The results indicate that the area around what is now Shoreditch High Street was already being used at this time by established farmers who ate cow, sheep or goat dairy products as a central part of their diet,” Evershed explains. “These people were likely to have been linked to the migrant groups who were the first to introduce farming to Britain from Continental Europe around 4000 BCE – just 400 years earlier.”

First technique to directly date pottery

Pottery was invented in the late Pleistocene epoch (2.6 million years BCE – 9700 BCE) and was an important factor in the development of food processing, Evershed says. Current techniques for dating it often depend on a careful examination of the object’s style and decorative features. However, this “typology” method provides only a crude estimate of a pot’s age – especially for earlier samples, which tend to be rather plain. Another method, known as association, involves carbon-dating organic materials (such as charcoal, bone or seed) found lying next to the pottery and using the results to estimate the age of the pot itself. Again, this method is not very accurate and is prone to error.

The best method, Evershed says, is to date something from the pots themselves. Other researchers have previously done just that, by dating carbonized materials on the artefacts’ surfaces. However, Evershed notes that these residues are more easily contaminated by their environment than residues embedded within the clay.

“Our new technique is the first to date pottery directly,” he tells Physics World. “And since we are looking at fat residues, we can connect the date of these to food procurement practices. For example, dating milk residues from prehistoric sites gives us an insight into early animal management practices.”

Anchoring “floating chronologies”

Members of the Bristol team, who report their work in Nature, now plan to investigate whether they can apply the technique to other fatty residues such as plant oils or beeswax. Another possibility would be to use it to date mummies, which contain lipids in their soft tissues and bandaging. A third application might involve dating archaeological bones, as these also contain lipids.

“The importance of this advance to the archaeological community cannot be overstated,” Evershed says. While pottery typology makes it possible to arrange objects in chronological order according to their decorative characteristics, he points out that these chronologies are hard to pin down to specific calendar dates. “Our technique allows us to anchor these ‘floating chronologies’ in real calendric time and assign a proper date to them,” he says.

US scientific societies condemn racism in the wake of George Floyd death

US scientific societies, universities and technology companies have reacted strongly to the death of African American George Floyd, who was killed on 25 May by a Minneapolis policeman. Responding to the deaths of Floyd and other African Americans at the hands of police, science-based organizations have condemned injustice, systemic racism and the lack of opportunity for minority members in science and the broader community. Yet some have criticized the societies’ statements as too little, too late, and as lacking the positive action needed to counter the abuses.

Floyd was arrested on 25 May on suspicion of using a counterfeit $20 bill. While he lay handcuffed and face down on the street, a police officer pressed his knee into Floyd’s neck for almost nine minutes, killing him. Following the death, protests and demonstrations were held across the US against systemic racism, excessive use of police force and lack of accountability for police officers.


In a letter to the membership of the National Society of Black Physicists (NSBP), president and Brown University physicist Stephon Alexander noted that racism “poisons law enforcement and can poison the scientific enterprise”. Alexander added that the loss of innocent lives “at the hand of those who are supposed to protect lives deepens our fear and isolation” and encouraged donations to the NSBP, which has supported black physics students and professionals for four decades.

Among the scientific organizations to have added their voice in support of the black community is the American Physical Society (APS). It said in a statement that while systemic racism and racial injustice persist around the world, it was “especially concerned for colleagues of colour and their families”.

APS’s current president-elect and former NSBP president James Gates told Physics World that the society felt it important to issue a statement because “the attitudes are as toxic to democracy and science as poison to a body”. He added that the APS values diversity, equity, inclusion and respect, and that it will work “to take actions that support these values”.

Taking action

However, Fermilab and University of Chicago astrophysicist Brian Nord questioned the APS’s response. “On what planet does APS think that it has convinced Black people of any commitment to justice,” he said on Twitter. “Will there be a letter discussing any real action? No suggestions about concrete things people can do right now?” he asked in other tweets.

Some admit that the scientific community must take some responsibility for continuing racial problems. “Racism persists because many of us have refused to see it,” astrophysicist Megan Donahue from Michigan State University, who is also president of the American Astronomical Society, said in a statement. “Dispersed across the world, many of us isolated at home, we can barely comprehend the tragedies unfolding across the country and the world, but we know that the trauma of these tragedies will be felt most acutely by the marginalized.”

Several organizations have, however, begun specific efforts to help remedy the situation. The American Meteorological Society, for example, has created a “culture and inclusion cabinet”. AMS president Mary Glackin, who is also a vice-president of The Weather Company, noted in a statement that the society “acknowledges the pain our Black and African American community members are experiencing and hope our solidarity relieves a part of the weight of that path”. Sudip Parikh, chief executive of the American Association for the Advancement of Science, meanwhile, has called for the association to “recommit to systemic change and such relevant principles as ensuring diversity, equity and inclusion”.

Universities and companies have also joined the calls for change. Cornell University president Martha Pollack says that her university will “address this scourge of racism directly in our educational programmes, in our research and in our engagement and related activities”. Meanwhile, medical-device company Boston Scientific has pledged to “increase talks with employee resource groups around the world and ask employees to act when they experience or witness intolerance, mistreatment or bias”.

Scientists have also started to promote their own ideas. Sarafina Nance, a graduate student in theoretical astrophysics at the University of California, Berkeley, has compiled some anti-racism resources. And in an article in Forbes, astrophysicist and science writer Ethan Siegel suggests steps that academics can take to “play a major role in transforming science and academia into a safer, more inclusive environment.” Those include recognizing that black students face challenges beyond those that other students face; actively creating a welcoming, supportive environment; and learning how to be inclusive oneself rather than putting the onus on black colleagues.

Why the SpaceX/NASA launch is so important, how to be a successful physics YouTuber

On Saturday 30 May space enthusiasts around the world held their breath as the first commercially built rocket to carry people into orbit lifted off from Florida. On board were two American astronauts heading for the International Space Station – and this joint mission involving NASA and SpaceX has thus far been a success.

In this episode of the Physics World Weekly podcast, aeronautical engineer Steve Bullock and space enthusiast Andrew Glester explain why this launch is an important milestone and what it means for the future of space travel.

Video offers a unique opportunity for physicists to present complex ideas in a way that is accessible to a wide range of people. In this episode, the theoretical physicist and YouTuber Sabine Hossenfelder talks about her YouTube channel “Science without the gobbledygook” – and gives some top tips about how to make physics videos for a general audience, including some advice about using a green screen.

Machine learning teases out differences in high-pressure ice phases

Although nearly all ice found on Earth has a hexagonal structure, at least 17 types of ice are known to exist, each with a different molecular arrangement. Most of these novel variants, however, require high pressures and controlled temperature environments to form, making them difficult to study directly. A team of researchers in China has now used a first-principles neural network potential technique to discriminate between several high-pressure water phases. Their findings add to our understanding of the proton transfer mechanisms involved when these phases melt and could prove important for planetary science as well as fundamental physics and chemistry.

Water is unique in forming a wide variety of different crystalline and amorphous ice structures. Its unusual behaviour in the frozen state stems in part from weak intermolecular hydrogen bonds: each molecule’s two hydrogen atoms, bonded to a single oxygen atom, link to the oxygen atoms of neighbouring molecules.

One of the most-studied alternative forms of ice is known as ice VII. This exotic body-centred cubic (bcc) crystal phase is also known as “hot ice”, and it can form at ambient temperatures under pressures above 3 GPa (about 30 000 times atmospheric pressure at sea level). Ice in this form has been theorized to exist in cold subduction zones within the Earth’s crust, and on Saturn’s icy moon Titan.


Another novel ice phase, known as superionic ice or ice XVIII, exists at even higher temperatures and pressures of 1000 kelvin and 40 GPa. This form of frozen water contains liquid-like hydrogen ions – that is, protons – that quickly diffuse through a solid lattice of oxygen atoms. Superionic ice could make up a large fraction of the interiors of the planets Uranus and Neptune, with the fast-diffusing protons helping to generate the strong and complex magnetic fields characteristic of these “ice giant” planets.

Ice VII to superionic ice

In 2005, researchers confirmed experimentally that ice VII can transform into superionic ice at around 47 GPa and 1000 K. They also suggested that the transformation takes place via a “dynamic” form of ice VII in which the protons are more disordered than in conventional ice VII, while still being more localized than they are in superionic ice. This transition, however, proved difficult to simulate using traditional ab initio calculations and force field molecular modelling methods. This is because ab initio methods are normally restricted to short time periods and relatively small supercell simulations, whilst force field-based techniques cannot address the chemical bond breaking associated with proton diffusion in dynamic ice VII.

Xin-Zheng Li of Peking University in Beijing and colleagues believe they have now overcome this problem by applying an alternative modelling technique based on neural network potentials. Their model used a popular deep-learning package (the DeePMD-kit) to create large-scale simulations that are as accurate as density-functional theory (DFT) calculations and require about the same amount of computing time and power.

The results of these simulations enabled the researchers to explore the nature of high-pressure water phases at an atomic level. They found that dynamic features, such as the diffusive motion of hydrogen and oxygen atoms, are indispensable for distinguishing between several subtle and “non-trivial” ice phases.
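
A standard way to quantify that diffusive motion, and hence to tell the phases apart, is to extract a per-species diffusion coefficient from the mean-squared displacement (MSD) of a simulated trajectory. The sketch below illustrates the idea on synthetic random-walk data; it is a generic analysis, not the team’s pipeline:

```python
# Estimating mobility from a trajectory: in 3D, D = MSD(t) / (6t) at long
# times. In superionic ice the hydrogen diffusion coefficient is liquid-like
# while oxygen's stays near zero. Synthetic random walks stand in for a real
# MD trajectory here; this is not the team's actual analysis pipeline.
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_atoms, dt_ps = 2000, 64, 0.01

def msd_diffusion(step_sigma_ang: float) -> float:
    steps = rng.normal(0.0, step_sigma_ang, size=(n_steps, n_atoms, 3))
    traj = np.cumsum(steps, axis=0)                    # unwrapped positions, Angstrom
    msd = np.mean(np.sum(traj ** 2, axis=2), axis=1)   # MSD(t), averaged over atoms
    t_ps = np.arange(1, n_steps + 1) * dt_ps
    slope = np.polyfit(t_ps, msd, 1)[0]                # Angstrom^2 per ps
    return slope / 6.0

print(f"fast, 'liquid-like' species: D ~ {msd_diffusion(0.10):.3f} A^2/ps")
print(f"slow, lattice-bound species: D ~ {msd_diffusion(0.01):.5f} A^2/ps")
```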

Detailed phase diagram

In dynamic ice VII, for example, Li’s team identified two subtle phases, which they dubbed dynamic ice VII T and dynamic ice VII R. The former incorporates local motion and transversal transfer of protons while the latter also accounts for their rotational transfer. They also interpreted the superionic phase as involving the non-trivial melting of ice VII at pressures above 40 GPa.

According to Li, the transition from ice VII to superionic ice can be understood as occurring when oxygen and hydrogen atoms simultaneously escape from the crystal sites in ice VII at moderate pressures just as the solid structure suddenly breaks down. At higher pressures (above 40 GPa), the hydrogen atoms melt first, followed by the oxygen atoms.

Based on these results, the researchers have drawn up a detailed phase diagram, which they say will be useful for understanding how water behaves under high pressures. “The method to detail this diagram could be extended to other materials – even those that are thought to be relatively well understood,” Li tells Physics World.

The research is detailed in Chinese Physics Letters, which is published by IOPP.

Neutron stars may contain free quarks

A long-standing debate about what lies at the heart of neutron stars might soon be cleared up, if a new analysis reconciling observational data and theory is vindicated. The latest research, carried out by physicists in Europe and the US, concludes that massive neutron stars are likely to have free quarks in their core rather than being entirely composed of neutrons and other non-fundamental particles. If such extremely dense cores exist, the researchers say that their presence may leave telltale traces in gravitational-wave data from merging neutron stars.

Quarks are normally confined inside protons and neutrons, but they can exist as individual particles if the energy density is high enough. Scientists know this because researchers working on experiments at the CERN laboratory in Switzerland and the Brookhaven National Laboratory in the US have collided heavy ions to generate what is known as a quark-gluon plasma – a “soup” of free quarks and strong force-carrying gluons that is thought to have existed for a few milliseconds after the Big Bang.

It is possible that quarks could also break free from their confined states within the cores of neutron stars. These extremely dense objects form when giant stars collapse and shed most of their material in a supernova explosion. Physicists are confident that neutron stars contain a variety of elements in their outer layers and individual neutrons further in. But they are not sure what exists in the core, and whether neutrons remain intact or break down into their constituent quarks and gluons.

Although energy densities in the core are comparable to those generated by heavy-ion collisions, the “quark matter” that would be produced is quite different from the quark-gluon plasma recreated in laboratories – cooler, but far more dense. Unfortunately, the computational scheme used to simulate quark-gluon plasma, known as lattice quantum chromodynamics (lattice QCD), is unable to model the cold, matter-heavy interiors of neutron stars.

Model-independent analysis

To get around this problem, Eemeli Annala of the University of Helsinki in Finland and colleagues carried out a model-independent analysis of astrophysical data and theoretical calculations. In a paper published in Nature Physics, they point out that nuclear theory can describe fairly precisely how the pressure experienced by protons, neutrons and other quark-containing hadrons varies with energy density in the relatively low-density environment of a neutron star’s crust. Similarly, they say, QCD can be used to calculate this pressure variation – known as an equation of state – at very high densities.

The real challenge is to work out what goes on between these two extremes, since it is here that neutron-star cores lie. Annala and colleagues took the approach of plotting the variation of a vast ensemble of functions used to represent a neutron star’s equation of state across the full range of energy densities. They tried to be as unbiased as possible in selecting the functions, varying only the maximum speed of sound through the neutron-star matter and anchoring the plots using two empirical constraints: that neutron stars as massive as 1.97 solar masses are known to exist; and that the tidal distortion of a 1.4-solar-mass neutron star matches observed values.
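
To give a flavour of this ensemble approach, the sketch below draws random, causal speed-of-sound profiles (c_s² ≤ 1 in units where c = 1) and integrates dp/dε = c_s² to build candidate equations of state. The grid, anchor pressure and node ranges are illustrative assumptions, not the paper’s inputs:

```python
# Sketch of the ensemble idea: random, causal, piecewise-linear speed-of-sound
# profiles c_s^2(eps) are integrated via dp/d(eps) = c_s^2 to give candidate
# pressure-energy-density relations. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
eps = np.geomspace(150.0, 15000.0, 400)  # energy density grid, MeV/fm^3 (illustrative)
p_low = 3.0                              # assumed pressure at the low-density anchor

def random_eos() -> np.ndarray:
    """Pressure p(eps) from a random, causal, piecewise-linear c_s^2 profile."""
    nodes_eps = np.geomspace(eps[0], eps[-1], 6)
    nodes_cs2 = rng.uniform(0.05, 1.0, size=6)      # causality: c_s^2 <= 1
    cs2 = np.interp(eps, nodes_eps, nodes_cs2)
    dp = 0.5 * (cs2[1:] + cs2[:-1]) * np.diff(eps)  # trapezoidal dp = c_s^2 d(eps)
    return p_low + np.concatenate(([0.0], np.cumsum(dp)))

ensemble = np.array([random_eos() for _ in range(10_000)])
# The real analysis feeds each p(eps) into the stellar-structure equations and
# keeps it only if it supports a 1.97-solar-mass star and matches tidal data.
lo, hi = np.percentile(ensemble, [5, 95], axis=0)
print(f"pressure at the highest density: {lo[-1]:.0f} to {hi[-1]:.0f} MeV/fm^3")
```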

The researchers found that their modelled equations of state agreed fairly well with those of standard hadronic theory for lower-mass neutron stars – confirming, they say, that the gravitational fields in these bodies are not high enough to rip neutrons and protons apart. In the most massive neutron stars, however, they found very little agreement between the two sets of modelled data. As such, they conclude that quark cores in massive neutron stars should be considered “the standard scenario, not an exotic alternative”.

As group member Aleksi Kurkela of the CERN laboratory in Switzerland explains, the few equations of state that do agree would require sound to zip through neutron stars at velocities of at least 90% of the speed of light. But he argues that this is very unlikely, simply because scientists know of no physical system that could support such speeds. “Having matter with such a high speed of sound would be truly remarkable,” he says.

Indeed, Kurkela adds that more typical speeds imply that the quark cores are relatively large. By stipulating that sound waves travel no quicker than about half the speed of light, he and his colleagues predict that a 24-km diameter neutron star would have a quark core some 13 km across.

Confirmation still needed

The researchers reckon that their predictions could be put to the test. Shock waves reflecting off the edge of a very dense quark-matter core could, they argue, leave an imprint in the gravitational waves generated when neutron stars merge.

However, not everyone is convinced. James Lattimer, an astrophysicist at Stony Brook University in the US, thinks it likely that quark matter exists inside massive neutron stars. But he maintains that the study relies on too much interpolation – some two orders of magnitude in energy density – to draw firm conclusions. He also notes that the physical conditions assumed by the authors to yield quark-matter cores have been shown by other researchers to produce cores containing only hadronic matter.

Laura Paulucci, an astrophysicist at the Federal University of ABC in Brazil, largely agrees. She argues that while the latest research indicates that quark cores are likely to exist, it falls short of providing direct evidence. “It is difficult to say how long it will take until we have a clearer picture and a definite answer,” she says. “I hope it is not long.”

AI-reconstructed medical images can’t be trusted


Medical images reconstructed using artificial intelligence (AI) techniques are unreliable, according to recent research by an international team of mathematicians. The team found that deep learning tools that create high-quality images from short scan times produce multiple alterations and artefacts in the data that could affect diagnosis. These issues were found in multiple systems, suggesting the phenomenon will not be easy to fix.

Cutting medical scan time could reduce costs and allow more scans to be performed. To enable this, some researchers have developed AI systems that construct high-quality images from low-resolution scans. The medical imaging equipment samples fewer data points than would normally be required and the AI enhances these data to create a high-resolution image. The AI trains on previous datasets from high-quality images. This is a radical shift from classical reconstruction techniques based on mathematical theory, which do not learn or rely on previous data.

A study published in the Proceedings of the National Academy of Sciences, however, finds that these AI algorithms have serious instability issues. Small structural changes, such as the presence of a small tumour, may not be captured, while tiny, almost undetectable perturbations, like those created by patient movement, can lead to severe artefacts in the final image.

The team, led by Anders Hansen at the University of Cambridge, tested six different neural networks trained to create enhanced images from MRI or CT scans. The researchers fed the networks data designed to replicate three possible issues: tiny perturbations; small structural changes; and changes in the sampling rate compared with the data on which the AI was trained.

Tiny perturbations can be generated by factors such as the patient shifting, white-noise-like interference from the scanner and small anatomical differences between people, the researchers say. Such issues created multiple different artefacts and instabilities in the AI systems.

“What we show is that a tiny perturbation that is so small that you can’t even see it with your eyes can suddenly make a change so that there is now a new thing that appears in the image, or something that is removed,” Hansen explains. “So, you can get false positives and false negatives.”
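
In spirit, such a stability test can be written as a short optimization loop: search, by gradient ascent, for a perturbation within an invisibly small budget that produces the largest change in the network’s output. The sketch below uses PyTorch with a stand-in model; the architecture, data and budget are placeholders, not the six networks the team examined:

```python
# Sketch of a stability test in this spirit: search by gradient ascent for a
# tiny input perturbation r that maximally changes a reconstruction network's
# output, while keeping |r| within an "invisible" budget. The network, sizes
# and data are stand-ins, not the networks examined in the study.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(                      # placeholder "reconstructor"
    torch.nn.Linear(128, 256), torch.nn.ReLU(), torch.nn.Linear(256, 1024)
)
y = torch.randn(1, 128)                         # undersampled measurement vector
baseline = net(y).detach()                      # reconstruction of the clean input

r = 1e-3 * torch.randn_like(y)                  # small random starting perturbation
r.requires_grad_(True)
opt = torch.optim.Adam([r], lr=1e-2)
budget = 0.01 * float(y.norm())                 # perturbation "too small to see"

for _ in range(200):
    opt.zero_grad()
    loss = -(net(y + r) - baseline).norm()      # ascend on the output change
    loss.backward()
    opt.step()
    with torch.no_grad():                       # project r back onto the budget
        r.mul_(min(1.0, budget / (float(r.norm()) + 1e-12)))

change = float((net(y + r) - baseline).norm() / baseline.norm())
print(f"|r|/|y| = {float(r.norm() / y.norm()):.3f}, relative output change = {change:.3f}")
```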

To test the ability of the systems to detect small structural changes the team added letters and symbols from playing cards to the images. One of the networks was able to reconstruct these details, but the other five presented issues ranging from blurring to almost complete removal of the changes.

Only one of the neural networks produced better images as the researchers increased the sampling rate of the scans. Another stagnated, with no improvement in quality, while in three, the reconstructions dropped in quality as the number of samples increased. The sixth AI system does not allow the sampling rate to be changed.

Hansen says that researchers need to start testing the stability of these systems. “What they will see on a large scale is that many of these AI systems are unstable,” he explains. The “big, big problem”, according to Hansen, is that there is no mathematical understanding of how these AI systems work. “They become a black box and if you don’t test these things properly you can have completely disastrous outcomes.”

Similar instabilities have also been highlighted in deep-learning tools that classify images. “You take a tiny little perturbation and the AI system says the image of the cat is suddenly a fire truck,” Hansen explains. He says that you can now imagine a system where an unstable AI classifies a medical image that has been reconstructed by another unstable neural network. “You are now going to decide do you have cancer or not? The question is, would you like to try it?” he asks.

Hansen believes that these reconstruction techniques do have potential, but there are things that machine learning will not be able to figure out. “What is absolutely crucial is to understand the limitations,” he explains.

Such techniques are not yet used clinically. The team say that they created these tests because they do not want such algorithms to be approved by regulatory bodies unless they have been thoroughly tested.
