
New project takes aim at theory-experiment gap in materials data

Condensed-matter physics and materials science have a silo problem. Although researchers in these fields have access to vast amounts of data – from experimental records of crystal structures and conditions for synthesizing specific materials to theoretical calculations of electron band structures and topological properties – these datasets are often fragmented. Integrating experimental and theoretical data is a particularly significant challenge.

Researchers at the Beijing National Laboratory for Condensed Matter Physics and the Institute of Physics (IOP) of the Chinese Academy of Sciences (CAS) recently decided to address this challenge. Their new platform, MaterialsGalaxy, unifies data from experiment, computation and scientific literature, making it easier for scientists to identify previously hidden relationships between a material’s structure and its properties. In the longer term, their goal is to establish a “closed loop” in which experimental results validate theory and theoretical calculations guide experiments, accelerating the discovery of new materials by leveraging modern artificial intelligence (AI) techniques.

Physics World spoke to team co-leader Quansheng Wu to learn more about this new tool and how it can benefit the materials research community.

How does MaterialsGalaxy work?

The platform works by taking the atomic structure of materials and mathematically mapping it into a vast, multidimensional vector space. To do this, every material – regardless of whether its structure is known from experiment, from a theoretical calculation or from simulation – must first be converted into a unique structural vector that acts like a “fingerprint” for the material.

Then, when a MaterialsGalaxy user focuses on a material, the system automatically identifies its nearest neighbours in this vector space. This allows users to align heterogeneous data – for example, linking a synthesized crystal in one database with its calculated topological properties in another – even when different data sources define the material slightly differently.

The vector-based approach also enables the system to recommend “nearest neighbour” materials (analogs) to fill knowledge gaps, effectively guiding researchers from known data into unexplored territories. It does this by performing real-time vector similarity searches to dynamically link relevant experimental records, theoretical calculations and literature information. The result is a comprehensive profile for the material.
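As a rough illustration of the idea – the exact structural descriptor and search index used by MaterialsGalaxy are not described here, so the "fingerprint" below is only a toy combination of composition fractions and lattice parameters – a nearest-neighbour lookup over such vectors can be sketched in a few lines of Python:

```python
# Toy sketch of a fingerprint-based nearest-neighbour search. The real descriptor
# would encode the full atomic geometry; here a material is reduced to composition
# fractions plus scaled lattice lengths, and matches are ranked by cosine similarity.
import numpy as np

ELEMENTS = ["Si", "O", "Bi", "Se", "Te", "Fe"]  # fixed element vocabulary (toy)

def fingerprint(composition, lattice_abc):
    """Map a material to a fixed-length vector: atomic fractions + scaled lattice lengths."""
    total = sum(composition.values())
    frac = np.array([composition.get(el, 0.0) / total for el in ELEMENTS])
    latt = np.array(lattice_abc) / 10.0  # rough normalization so scales are comparable
    return np.concatenate([frac, latt])

def nearest_neighbours(query, library, k=3):
    """Return the k materials whose fingerprints have the highest cosine similarity."""
    names = list(library)
    vecs = np.stack([library[n] for n in names])
    q = query / np.linalg.norm(query)
    sims = vecs @ q / np.linalg.norm(vecs, axis=1)
    order = np.argsort(sims)[::-1][:k]
    return [(names[i], float(sims[i])) for i in order]

# Toy library mixing "experimental" and "computed" entries for the same compounds
library = {
    "Bi2Se3 (experimental)": fingerprint({"Bi": 2, "Se": 3}, (4.14, 4.14, 28.6)),
    "Bi2Se3 (computed)":     fingerprint({"Bi": 2, "Se": 3}, (4.16, 4.16, 28.7)),
    "Bi2Te3 (computed)":     fingerprint({"Bi": 2, "Te": 3}, (4.38, 4.38, 30.5)),
    "SiO2 (experimental)":   fingerprint({"Si": 1, "O": 2}, (4.91, 4.91, 5.41)),
}

query = fingerprint({"Bi": 2, "Se": 3}, (4.15, 4.15, 28.65))
print(nearest_neighbours(query, library))
```

In a production system the brute-force cosine search would be replaced by an approximate nearest-neighbour index, but the matching logic – vectorize every entry once, then compare on demand – is the same.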

Where does data for MaterialsGalaxy come from?

We aggregated data from three primary channels: public databases; our institute’s own high-quality internal experimental records (known as the MatElab platform); and the scientific literature. All data underwent rigorous standardization using tools such as the pymatgen (Python Materials Genomics) materials analysis code and the spglib crystal structure library to ensure consistent definitions for crystal structures and physical properties.
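For readers unfamiliar with these tools, the snippet below sketches the kind of standardization step described: assigning a space group and producing consistent primitive and conventional cells with pymatgen's SpacegroupAnalyzer, which calls spglib under the hood. The tolerances and cell conventions chosen by the MaterialsGalaxy pipeline are not specified here, so treat the settings as placeholders:

```python
# Minimal sketch of crystal-structure standardization with pymatgen (spglib-backed).
# The symprec tolerance and choice of primitive vs conventional cell are assumptions.
from pymatgen.core import Lattice, Structure
from pymatgen.symmetry.analyzer import SpacegroupAnalyzer  # wraps spglib

# Example input: fcc copper described with a conventional cubic cell
structure = Structure(
    Lattice.cubic(3.615),
    ["Cu", "Cu", "Cu", "Cu"],
    [[0, 0, 0], [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5]],
)

sga = SpacegroupAnalyzer(structure, symprec=1e-3)
print("Space group:", sga.get_space_group_symbol())        # Fm-3m for fcc Cu
primitive = sga.get_primitive_standard_structure()         # consistent primitive cell
conventional = sga.get_conventional_standard_structure()   # consistent conventional cell
print("Primitive cell sites:", len(primitive))
print("Conventional cell sites:", len(conventional))
```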

Who were your collaborators on this project?

This project is a multi-disciplinary effort involving a close-knit collaboration among several research groups at the IOP, CAS and other leading institutions. My colleague Hongming Weng and I supervised the core development and design under the strategic guidance of Zhong Fang, while Tiannian Zhu (the lead author of our Chinese Physics B paper about MaterialsGalaxy) led the development of the platform’s architecture and core algorithms, as well as its technical implementation.

We enhanced the platform’s capabilities by integrating several previously published AI-driven tools developed by other team members. For example, Caiyuan Ye contributed the Con-CDVAE model for advanced crystal structure generation, while Jiaxuan Liu contributed VASPilot, which automates and streamlines first-principles calculations. Meanwhile, Qi Li contributed PXRDGen, a tool for simulating and generating powder X-ray diffraction patterns.

Finally, much of the richness of MaterialsGalaxy stems from the high-quality data it contains. This came from numerous collaborators, including Weng (who contributed the comprehensive topological materials database, Materiae), Youguo Shi (single-crystal growth), Shifeng Jin (crystal structure and diffraction), Jinbo Pan (layered materials), Qingbo Yan (2D ferroelectric materials), Yong Xu (nonlinear optical materials), and Xingqiu Chen (topological phonons). My own contribution was a library of AI-generated crystal structures produced by the Con-CDVAE model.

What does MaterialsGalaxy enable scientists to do that they couldn’t do before?

One major benefit is that it prevents researchers from becoming stalled when data for a specific material is missing. By leveraging the tool’s “structural analogs” feature, they can look to the properties or growth paths of similar materials for insights – a capability not available in traditional, isolated databases.

We also hope that MaterialsGalaxy will offer a bridge between theory and experiment. Traditionally, experimentalists tend to consult the Inorganic Crystal Structure Database while theorists check the Materials Project. Now, they can view the entire lifecycle of a material – from how to grow a single crystal (experiment) to its topological invariants (theory) – on a single platform.

Beyond querying known materials, MaterialsGalaxy also allows researchers to use integrated generative AI models to create new structures. These can be immediately compared against the known database, through the platform’s “vertical comparison” workflow, to assess their synthesis feasibility and potential performance.

What do you plan to do next?

We’re focusing on enhancing the depth and breadth of the tool’s data fusion. For example, we plan to develop representations based on graph neural networks (GNNs) to better handle experimental data that may contain defects or disorder, thereby improving matching accuracy.

We’re also interested in moving beyond crystal structure by introducing multi-modal anchors such as electronic band structures, X-ray diffraction (XRD) patterns and spectroscopic data. To do this, we plan to use techniques derived from contrastive language–image pre-training (CLIP) models to enable cross-modal retrieval – for example, searching for theoretical band data by uploading an experimental XRD pattern.
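In practice, a CLIP-style approach trains a separate encoder for each modality so that matching pairs land close together in a shared embedding space; retrieval is then a similarity search across modalities. The snippet below is a conceptual sketch only – the encoders are random placeholders standing in for trained models, and nothing here reflects MaterialsGalaxy's actual implementation:

```python
# Conceptual sketch of CLIP-style cross-modal retrieval: both modalities are mapped
# into a shared embedding space and matched by cosine similarity. The two "encoders"
# below are hypothetical linear placeholders, not trained models.
import numpy as np

rng = np.random.default_rng(0)
W_xrd = rng.normal(size=(64, 500))    # stands in for a trained XRD-pattern encoder
W_band = rng.normal(size=(64, 200))   # stands in for a trained band-structure encoder

def embed(x, W):
    """Project a raw signal into the shared space and normalize to unit length."""
    z = W @ x
    return z / np.linalg.norm(z)

# Library of theoretical band-structure embeddings, keyed by material ID
band_library = {f"mat-{i}": embed(rng.normal(size=200), W_band) for i in range(100)}

def retrieve(xrd_pattern, k=5):
    """Rank band-structure entries by cosine similarity to an uploaded XRD pattern."""
    q = embed(xrd_pattern, W_xrd)
    scores = {mid: float(q @ z) for mid, z in band_library.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(retrieve(rng.normal(size=500)))
```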

Separately, we want to continue to expand our experimental data coverage, specifically targeting synthesis recipes and “failed” experimental records, which are crucial for training the next generation of “AI-enabled” scientists. Ultimately, we plan to connect an even wider array of databases, establishing robust links between them to realize a true Materials Galaxy of interconnected knowledge.

The pros and cons of patenting

For any company or business, it’s important to recognize and protect intellectual property (IP). In the case of novel inventions, which can include machines, processes and even medicines, a patent offers IP protection and lets firms control how those inventions are used. Patents, which in most countries can be granted for up to 20 years, give the owner exclusive rights so that others can’t directly copy the creation. A patent essentially prevents others from making, using or selling your invention.

But there are more reasons for holding a patent than IP protection alone. In particular, patents go some way to protecting the investment that may have been necessary to generate the IP in the first place, such as the cost of R&D facilities, materials, labour and expertise. Those factors need to be considered when you’re deciding if patenting is the right approach or not.

Patents are tangible assets that can be sold to other businesses or licensed for royalties to provide your company with regular income

Patents are in effect a form of currency. Counting as tangible assets that add to the overall value of a company, they can be sold to other businesses or licensed for royalties to provide regular income. Some companies, in fact, build up or acquire significant patent portfolios, which can be used for bargaining with competitors, potentially leading to cross-licensing agreements where both parties agree to use each other’s technology.

Patents also say something about the competitive edge of a company, by demonstrating technical expertise and market position through the control of a specific technology. Essentially, patents give credibility to a company’s claims of technical know-how: a patent shows investors that a firm has a unique, protected asset, making the business more attractive to further investment.

However, it’s not all one-way traffic and there are obligations on the part of the patentee. Firstly, a patent holder has to reveal to the world exactly how their invention works. Governments favour this kind of public disclosure as it encourages broader participation in innovation. The downside is that whilst your competitors cannot directly copy you, they can enhance and improve upon your invention, provided those changes aren’t covered by the original patent.

It’s also worth bearing in mind that a patent holder is responsible for patent enforcement and any ensuing litigation; a patent office will not do this for you. So you’ll have to monitor what your competitors are up to and decide on what course of action to take if you suspect your patent’s been infringed. Trouble is, it can sometimes be hard to prove or disprove an infringement – and getting the lawyers in can be expensive, even if you win.

Money talks

Probably the biggest consideration of all is the cost and time involved in making a patent application. Filing a patent requires a rigorous understanding of “prior art” – the existing body of relevant knowledge on which novelty is judged. You’ll therefore need to do a lot of work finding out about relevant established patents, any published research and journal articles, along with products or processes publicly disclosed before the patent’s filing date.

Before it can be filed with a patent office, a patent needs to be written as a legal description, which includes an abstract, background, detailed specifications, drawings and claims of the invention. Once filed, an expert in the relevant technical field will be assigned to assess the worth of the claim; this examiner must be satisfied that the application is both novel and “non-obvious” before it’s granted.

Even when the invention is judged to be technically novel, in order to be non-obvious, it must also involve an “inventive step” that would not be obvious to a person with “ordinary skill” in that technical field at the time of filing. The assessment phase can result in significant to-ing and fro-ing between the examiner and the applicant to determine exactly what is patentable. If insufficient evidence is found, the patent application will be refused.

Patents are only ever granted in a particular country or region, such as Europe, and the application process has to be repeated for each new place (although the information required is usually pretty similar). Translations may be required for some countries, there are fees for each application and, even if a patent is granted, you have to pay an additional annual bill to maintain the patent (which in the UK rises year on year).

Patents can take years to process, which is why many companies pay specialized firms to support their applications

Patent applications, in other words, can be expensive and can take years to process. That’s why many companies pay specialized firms to support their patent applications. Those firms employ patent attorneys – legal experts with a technical background who help inventors and companies manage their IP rights by drafting patent applications, navigating patent office procedures and advising on IP strategy. Attorneys can also represent their clients in disputes or licensing deals, thereby acting as a crucial bridge between science/engineering and law.

Perspiration and aspiration

It’s impossible to write about patents without mentioning the impact that Thomas Edison had as an inventor. He became the world’s most prolific inventor, with a staggering 1093 US patents granted in his lifetime. This monumental achievement remained unsurpassed until 2003, when it was overtaken by the Japanese inventor Shunpei Yamazaki and, more recently, by the Australian “patent titan” Kia Silverbrook in 2008.

Edison clearly saw there was a lot of value in patents, but how did he achieve so much? His approach was grounded in systematic problem solving, which he accomplished through his Menlo Park lab in New Jersey. Dedicated to technological development and invention, it was effectively the world’s first corporate R&D lab. And whilst Edison’s name appeared on all the patents, they were often primarily the work of his staff; he was effectively being credited for inventions made by his employees.

I have a love–hate relationship with patents or at least the process of obtaining them

I will be honest; I have a love–hate relationship with patents or at least the process of obtaining them. As a scientist or engineer, it’s easy to think all the hard work is getting an invention over the line, slogging your guts out in the lab. But applying for a patent can be just as expensive and time-consuming, which is why you need to be clear on what and when to patent. Even Edison grew tired of being hailed a genius, stating that his success was “1% inspiration and 99% perspiration”.

Still, without the sweat of patents, your success might be all but 99% aspiration.

Practical impurity analysis for biogas producers

Biogas is a renewable energy source formed when bacteria break down organic materials such as food waste, plant matter, and landfill waste in an oxygen‑free (anaerobic) process. It contains methane and carbon dioxide, along with trace amounts of impurities. Because of its high methane content, biogas can be used to generate electricity and heat, or to power vehicles. It can also be upgraded to almost pure methane, known as biomethane, which can directly replace natural fossil gas.

Strict rules apply to the amount of impurities allowed in biogas and biomethane, as these contaminants can damage engines, turbines, and catalysts during upgrading or combustion. EN 16723 is the European standard that sets maximum allowable levels of siloxanes and sulfur‑containing compounds for biomethane injected into the natural gas grid or used as vehicle fuel. These limits are extremely low, meaning highly sensitive analytical techniques are required. However, most biogas plants do not have the advanced equipment needed to measure these impurities accurately.

Researchers from the Paul Scherrer Institute, Switzerland: Julian Indlekofer (left) and Ayush Agarwal (right), with the Liquid Quench Sampling System

The researchers developed a new, simpler method to sample and analyse biogas using GC‑ICP‑MS. Gas chromatography (GC) separates chemical compounds in a gas mixture based on how quickly they travel through a column. Inductively Coupled Plasma Mass Spectrometry (ICP‑MS) then detects the elements within those compounds at very low concentrations. Crucially, this combined method can measure both siloxanes and sulfur compounds simultaneously. It avoids matrix effects that can limit other detectors and cause biased or ambiguous results. It also achieves the very low detection limits required by EN 16723.

The sampling approach and centralized measurement enable biogas plants to meet regulatory standards using an efficient, less complex and more cost-effective method with fewer errors. Overall, this research provides a practical, high-accuracy tool that makes reliable biogas impurity monitoring accessible to plants of all sizes, strengthening biomethane quality, protecting infrastructure and accelerating the transition to cleaner energy systems.

Read the full article

Sampling to analysis: simultaneous quantification of siloxanes and sulfur compounds in biogas for cleaner energy

Ayush Agarwal et al 2026 Prog. Energy 8 015001


Cavity-based X-ray laser delivers high-quality pulses

Physicists in Germany have created a new type of X-ray laser that uses a resonator cavity to improve the output of a conventional X-ray free electron laser (XFEL). Their proof-of-concept design delivers X-ray pulses that are more monochromatic and coherent than those from existing XFELs.

In recent decades, XFELs have delivered pulses of monochromatic and coherent X-rays for a wide range of science including physics, chemistry, biology and materials science.

Despite their name, XFELs do not work like conventional lasers. In particular, there is no gain medium or resonator cavity. Instead, XFELs rely on the fact that when a free electron is accelerated, it will emit electromagnetic radiation. In an XFEL, pulses of high-energy electrons are sent through an undulator, which deflects the electrons back and forth. These wiggling electrons radiate X-rays at a specific energy. As the X-rays and electrons travel along the undulator, they interact in such a way that the emitted X-ray pulse has a high degree of coherence.

While these XFELs have proven very useful, they do not deliver radiation that is as monochromatic or as coherent as radiation from conventional lasers. One reason why conventional lasers perform better is that the radiation is reflected back and forth many times in a mirrored cavity that is tuned to resonate at a specific frequency – whereas XFEL radiation only makes one pass through an undulator.

Practical X-ray cavities, however, are difficult to create. This is because X-rays penetrate deep into materials, where they are usually absorbed – making reflection with conventional mirrors impossible.

Crucial overlap

Now, researchers working at the European XFEL at DESY in Germany have created a proof-of-concept hybrid system that places an undulator within a mirrored resonator cavity. X-ray pulses that are created in the undulator are directed at a downstream mirror and reflected back to a mirror upstream of the undulator. The X-ray pulses are then reflected back downstream through the undulator. Crucially, a returning X-ray pulse overlaps with a subsequent electron pulse in the undulator, amplifying the X-ray pulse. As a result, the X-ray pulses circulating within the cavity quickly become more monochromatic and more coherent than pulses created by an undulator alone.

The team solved the mirror challenge by using diamond crystals, which Bragg-reflect X-rays at a specific frequency. These are used at either end of the cavity in conjunction with Kirkpatrick–Baez mirrors, which help focus the reflected X-rays back into the cavity.

Some of the X-ray radiation circulating in the cavity is allowed to escape downstream, providing a beam of monochromatic and coherent X-ray pulses. The team calls its system the X-ray Free-Electron Laser Oscillator (XFELO), and the cavity is about 66 m long.

Narrow frequency range

DESY accelerator scientist Patrick Rauer explains, “With every round trip, the noise in the X-ray pulse gets less and the concentrated light more defined”. Rauer pioneered the design of the cavity in his PhD work and is now the DESY lead on its implementation. “It gets more stable and you start to see this single, clear frequency – this spike.” Indeed, the frequency width of XFELO X-ray pulses is about 1% that of pulses created by the undulators alone.

Ensuring the overlap of electron and X-ray pulses within the cavity was also a significant challenge. This required a high degree of stability within the accelerator that provides electron pulses to the XFELO. “It took years to bring the accelerator to that state, which is now unique in the world of high-repetition-rate accelerators”, explains Rauer.

Team member Harald Sinn says, “The successful demonstration shows that the resonator principle is practical to implement”. Sinn is head of European XFEL’s instrumentation department and he adds, “In comparison with methods used up to now, it delivers X-ray pulses with a very narrow wavelength as well as a much higher stability and coherence.”

The team will now work towards improving the stability of XFELO so that in the future it can be used to do experiments by European XFEL’s research community.

XFELO is described in Nature.

The physics of an unethical daycare model that uses illness to maximize profits

When I had two kids going through daycare, or nursery as we call it in the UK, every day seemed like a constant fight with germs and illness. After all, at such a young age kids still have a developing immune system and are not exactly hot on personal hygiene.

That same dilemma faced mathematician Lauren Smith from the University of Auckland. She has two children at a “wonderful daycare centre” who often fall ill. As many parents juggling work and parenting will understand, Smith is frequently faced with the issue of whether her kids are well enough to attend daycare.

Smith then thought about how an unethical daycare centre might take advantage of this to maximize its profits – under the assumption that if not enough children attend (while their parents still pay), staff are sent home without pay, and that the staff receive no sick pay themselves.

“It occurred to me that a sick kid attending daycare could actually be financially beneficial to the centre, while clearly being a detriment to the wellbeing of the other children as well as the staff and the broader community,” Smith told Physics World.

For a hypothetical daycare centre that is solely focused on making as much money as possible, Smith realized that full attendance of sick children is not optimal financially as this requires maximal staffing at all times, whereas zero attendance of sick children does not give an opportunity for the disease to spread such that other children are then sent home.

But in between these two extremes, Smith thought there should be an optimal attendance rate so that the disease is still able to spread and some children – and staff – are sent home. “As a mathematician I knew I had the tools to find it,” adds Smith.

Model behaviour

Using the so-called susceptible–infected–recovered (SIR) model for 100 children, a teacher-to-child ratio of 1:6 and a recovery time of 10 days, Smith found that the more infectious the disease, the lower the optimal attendance rate for sick children is, and so the more savings the unethical daycare centre can make.

In other words, the more infectious a disease, the fewer ill children need to attend in order to spread it around, so the centre can keep more of them – and, importantly, more staff – at home while still ensuring the disease reaches non-infected kids.

For a measles outbreak with a basic reproduction number of 12–18, for example, the model resulted in a potential staff saving of 90 working days, whereas for seasonal flu, with a basic reproduction number of 1.2 to 1.3, the potential saving is 4.4 days.
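A minimal sketch of this kind of calculation is given below. It is not Smith's exact model – her cost function and parameter choices are more detailed – but it uses the figures quoted above (100 children, a 1:6 staffing ratio, a 10-day recovery time) in a simple discrete-time SIR simulation and scans the sick-child attendance fraction to find the one that maximizes unpaid staff-days:

```python
# Toy version of the "unethical daycare" calculation (not Smith's published model).
# Infected children attend with probability `attendance`; staff needed scales with
# the number of children present, and "savings" counts staff-days below full staffing.
import math

def staff_days_saved(R0, attendance, n_children=100, ratio=6, recovery_days=10,
                     days=120, initially_infected=1):
    gamma = 1.0 / recovery_days          # recovery rate per day
    beta = R0 * gamma                    # transmission rate implied by R0
    full_staff = math.ceil(n_children / ratio)
    s, i, r = n_children - initially_infected, float(initially_infected), 0.0
    saved = 0.0
    for _ in range(days):
        present = s + r + attendance * i                     # sick kids attend only a fraction
        saved += full_staff - math.ceil(present / ratio)     # unpaid staff-days that day
        new_inf = min(beta * s * (attendance * i) / n_children, s)  # only attendees transmit
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return saved

for R0 in (1.3, 3.0, 15.0):              # roughly flu, a mid-range disease, measles
    best = max((staff_days_saved(R0, a / 20), a / 20) for a in range(21))
    print(f"R0={R0}: best sick-attendance fraction {best[1]:.2f}, "
          f"~{best[0]:.0f} staff-days saved")
```

The scan illustrates the trade-off Smith describes; her published figures come from a more carefully parametrized model.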

Smith writes in the paper that the work is “not intended as a recipe” for an unethical daycare centre, but rather illustrates the financial incentive that exists for daycare centres to propagate diseases among children, which would lead to more infections of at-risk populations in the wider community.

“I hope that as well as being an interesting topic, it can show that mathematics itself is interesting and is useful for describing the real world,” adds Smith.

Saving the Titanic: the science of icebergs and unsinkable ships

When the Titanic was built, her owners famously described her as “unsinkable”. A few days into her maiden voyage, she struck an iceberg in the North Atlantic that proved them wrong. But what if we could make ships that really are unsinkable? And what if we could predict exactly how long a hazardous iceberg will last before it melts?

These are the premises of two separate papers published independently this week by Chunlei Guo and colleagues at the University of Rochester, and by Daisuke Noto and Hugo N Ulloa of the University of Pennsylvania, both in the US. The Rochester group’s paper, which appears in Advanced Functional Materials, describes how applying a superhydrophobic coating to an open-ended metallic tube can make it literally unsinkable – a claim supported by extensive tests in a water tank. Noto and Ulloa’s research, which they describe in Science Advances, likewise involved a water tank. Theirs, however, was equipped with cameras, lasers and thermochromic liquid crystals that enabled them to track a freely floating miniature iceberg as it melted.

Imagine a spherical iceberg

Each study is surprising in its own way. For the iceberg paper, arguably the biggest surprise is that no-one had ever done such experiments before. After all, water and ice are readily available. Fancy tanks, lasers, cameras and temperature-sensitive crystals are less so, yet surely someone, somewhere, must have stuck some ice in a tank and monitored what happened to it?

Noto and Ulloa’s answer is, in effect, no. “Despite the relevance of melting of floating ice in calm and energetic environments…most experimental and numerical efforts to examine this process, even to date, have either fixed or tightly constrained the position and posture of ice,” they write. “Consequently, the relationships between ice dissolution rate and background fluid flow conditions inferred from these studies are meaningful only when a one-way interaction, from the liquid to the solid phase, dominates the melting dynamics.”

The problem, they continue, is that eliminating these approximations “introduces a significant technical challenge for both laboratory experiments and numerical simulations” thanks to a slew of interactions that would otherwise get swept under the rug. These interactions, in turn, lead to complex dynamics such as drifting, spinning and even flipping that must be incorporated into the model. Consequently, they write, “fundamental questions persist: ‘How long does an ice body last?’”

Tracking a melting iceberg: a side view of the experiment shows fluid motions as moving particles and temperature distributions as the colours of thermochromic liquid crystal particles. The melt plume (dark) that forms beneath the floating ice plunges down, penetrating the thermally stratified layer (red: cold, blue: warm). (Courtesy: Noto and Ulloa, Science Advances 12 5 DOI: 10.1126/sciadv.ady352)

To answer this question, Noto and Ulloa used their water-tank observations (see video) to develop a model that incorporates the thermodynamics of ice melting and mass balance conservation. Based on this model, they correctly predict both the melting rate and the lifespan of freely floating ice under self-driven convective flows that arise from interactions between the ice and the calm, fresh water surrounding it. Though the behaviour of ice in tempestuous salty seas is, they write, “beyond our scope”, their model nevertheless provides a useful upper bound on iceberg longevity, with applications for climate modelling as well as (presumably) shipping forecasts for otherwise-doomed ocean liners.

The tube that would not sink

In the unsinkable tube study, the big surprise is that a metal tube, divided in the middle but open at both ends, can continue to float after being submerged, corroded with salt, tossed about on a turbulent sea and peppered with holes. How is that even possible?

“The inside of the tube is superhydrophobic, so water can’t enter and wet the walls,” Guo explains. “As a result, air remains trapped inside, providing buoyancy.”

Importantly, this buoyancy persists even if the tube is damaged. “When the tube is punctured, you can think of it as becoming two, three, or more smaller sections,” Guo tells Physics World. “Each section will work in the same way of preventing water from entering inside, so no matter how many holes you punch into it, the tube will remain afloat.”

So, is there anything that could make these superhydrophobic structures sink? “I can’t think of any realistic real-world challenges more severe than what we have put them through experimentally,” he says.

We aren’t in unsinkable ship territory yet: the largest structure in the Rochester study was a decidedly un-Titanic-like raft a few centimetres across. But Guo doesn’t discount the possibility. He points out that the tubes are made from ordinary aluminium, with a simple fabrication process. “If suitable applications call for it, I believe [human-scale versions] could become a reality within a decade,” he concludes.

Scientists quantify behaviour of micro- and nanoplastics in city environments

Abundance and composition of atmospheric plastics

Plastic has become a global pollutant concern over the last couple of decades: it is widespread in society, not often disposed of effectively, and generates both microplastics (1 µm to 5 mm in size) and nanoplastics (smaller than 1 µm) that have infiltrated many ecosystems – including being found inside humans and animals.

Over time, bulk plastics break down into micro- and nanoplastics through fragmentation mechanisms that create much smaller particles with a range of shapes and sizes. Their small size has become a problem because they are increasingly finding their way into waterways, cities and other urban environments, polluting as they go, and are now even being transported to remote polar and high-altitude regions.

This poses potential health risks around the world. While the behaviour of micro- and nanoplastics in the atmosphere is poorly understood, it’s thought that they are transported by transcontinental and transoceanic winds, which causes the spread of plastic in the global carbon cycle.

However, the lack of data on the emission, distribution and deposition of atmospheric micro- and nanoplastic particles makes it difficult to definitively say how they are transported around the world. It is also challenging to quantify their behaviour, because plastic particles can have a range of densities, sizes and shapes that undergo physical changes in clouds, all of which affect how they travel.

A global team of researchers has developed a new semi-automated microanalytical method that can quantify atmospheric plastic particles present in air dustfall, rain, snow and dust resuspension. The research was performed across two Chinese megacities, Guangzhou and Xi’an.

“As atmospheric scientists, we noticed that microplastics in the atmosphere have been the least reported among all environmental compartments in the Earth system due to limitations in detection methods, because atmospheric particles are smaller and more complex to analyse,” explains Yu Huang, from the Institute of Earth Environment of the Chinese Academy of Sciences (IEECAS) and one of the paper’s lead authors. “We therefore set out to develop a reliable detection technique to determine whether microplastics are present in the atmosphere, and if so, in what quantities.”

Quantitative detection

For this new approach, the researchers employed a computer-controlled scanning electron microscopy (CCSEM) system equipped with energy-dispersive X-ray spectroscopy to reduce human bias in the measurements (which is an issue in manual inspections). They located and measured individual micro- and nanoplastic particles – enabling their concentration and physicochemical characteristics to be determined – in aerosols, dry and wet depositions, and resuspended road dust.

“We believe the key contribution of this work lies in the development of a semi‑automated method that identifies the atmosphere as a significant reservoir of microplastics. By avoiding the human bias inherent in visual inspection, our approach provides robust quantitative data,” says Huang. “Importantly, we found that these microplastics often coexist with other atmospheric particles, such as mineral dust and soot – a mixing state that could enhance their potential impacts on climate and the environment.”

The method could detect and quantify plastic particles as small as 200 nm, and revealed airborne concentrations of 1.8 × 10⁵ microplastics/m³ and 4.2 × 10⁴ nanoplastics/m³ in Guangzhou, and 1.4 × 10⁵ microplastics/m³ and 3.0 × 10⁴ nanoplastics/m³ in Xi’an. These microplastic and nanoplastic fluxes are two to six orders of magnitude higher than those reported previously via visual methods.

The team also found that the deposition samples were more heterogeneously mixed with other particle types (such as dust and other pollution particles) than aerosols and resuspension samples, which showed that particles tend to aggregate in the atmosphere before being removed during atmospheric transport.

The study revealed transport insights that could be beneficial for investigating the climate, ecosystem and human health impacts of plastic particles at all levels. The researchers are now advancing their method in two key directions.

“First, we are refining sampling and CCSEM‑based analytical strategies to detect mixed states between microplastics and biological or water‑soluble components, which remain invisible with current techniques. Understanding these interactions is essential for accurately assessing microplastics’ climate and health effects,” Huang tells Physics World. “Second, we are integrating CCSEM with Raman analysis to not only quantify abundance but also identify polymer types. This dual approach will generate vital evidence to support environmental policy decisions.”

The research was published in Science Advances.

Michele Dougherty steps aside as president of the Institute of Physics

The space physicist Michele Dougherty has stepped aside as president of the Institute of Physics, which publishes Physics World. The move was taken to avoid any conflicts of interest given her position as executive chair of the Science and Technology Facilities Council (STFC) – one of the main funders of physics research in the UK.

Dougherty, who is based at Imperial College London, spent two years as IOP president-elect from October 2023 before becoming president in October 2025. Dougherty was appointed executive chair of the STFC in January 2025 and in July that year was also announced as the next Astronomer Royal – the first woman to hold the position.

The changes at the IOP come in the wake of UK Research and Innovation (UKRI) stating last month that it will be adjusting how it allocates government funding for scientific research and infrastructure. Spending on curiosity-driven research will remain flat from 2026 to 2030, with UKRI prioritising funding in three key areas or “buckets”.

The three buckets are: curiosity-driven research, which will be the largest; strategic government and societal priorities; and supporting innovative companies. There will also be a fourth “cross-cutting” bucket with funding for infrastructure, facilities and talent. In the four years to 2030, UKRI’s budget will be £38.6bn.

While the detailed implications of the funding changes are still to be worked out, the IOP says its “top priority” is understanding and responding to them. With the STFC being one of nine research councils within UKRI, Dougherty is stepping aside as IOP president to ensure the IOP can play what it says is “a leadership role in advocating for physics without any conflict of interest”.

In her role as STFC executive chair, Dougherty yesterday wrote to the UK’s particle physics, astronomy and nuclear physics community, asking researchers to identify by March how their projects would respond to flat cash as well as reductions of 20%, 40% and 60% – and to “identify the funding point at which the project becomes non-viable”. The letter says that a “similar process” will happen for facilities and labs.

In her letter, Dougherty says that the UK’s science minister Lord Vallance and UKRI chief executive Ian Chapman want to protect curiosity-driven research, which they say is vital, and grow it “as the economy allows”. However, she adds, “the STFC will need to focus our efforts on a more concentrated set of priorities, funded at a level that can be maintained over time”.

Tom Grinyer, chief executive officer of the IOP, says that the IOP is “fully focused on ensuring physics is heard clearly as these serious decisions are shaped”. He says the IOP is “gathering insight from across the physics community and engaging closely with government, UKRI and the research councils so that we can represent the sector with authority and evidence”.

Grinyer warns, however, that UKRI’s shift in funding priorities and the subsequent STFC funding cuts will have “severe consequences” for physics. “The promised investment in quantum, AI, semiconductors and green technologies is welcome but these strengths depend on a stable research ecosystem,” he says.

“I want to thank Michele for her leadership, and we look forward to working constructively with her in her capacity at STFC as this important period for physics unfolds,” adds Grinyer.

Next steps

The nuclear physicist Paul Howarth, who has been IOP president-elect since September, will now take on Dougherty’s responsibilities – as prescribed by the IOP’s charter – with immediate effect, with the IOP Council discussing its next steps at its February 2026 meeting.

With a PhD in nuclear physics, Howarth has had a long career in the nuclear sector working on the European Fusion Programme and at British Nuclear Fuels, as well as co-founding the Dalton Nuclear Institute at the University of Manchester.

He was a non-executive board director of the National Physical Laboratory and until his retirement earlier this year was chief executive officer of the National Nuclear Laboratory.

In response to the STFC letter, Howarth says that the projected cuts “are a devastating blow for the foundations of UK physics”.

“Physics isn’t a luxury we can afford to throw away through confusion,” says Howarth. “We urge the government to rethink these cuts, listen to the physics community, and deliver a 10-year strategy to secure physics for the future.”

AI-based tool improves the quality of radiation therapy plans for cancer treatment

This episode of the Physics World Weekly podcast features Todd McNutt, who is a medical physicist at Johns Hopkins University and the founder of Oncospace. In a conversation with Physics World’s Tami Freeman, McNutt explains how an artificial intelligence-based tool called Plan AI can help improve the quality of radiation therapy plans for cancer treatments.

As well as discussing the benefits that Plan AI brings to radiotherapy patients and cancer treatment centres, they examine its evolution from an idea developed by an academic collaboration to a clinical product offered today by Sun Nuclear, a US manufacturer of radiation equipment and software.

This podcast is sponsored by Sun Nuclear.

The Future Circular Collider is unduly risky – CERN needs a ‘Plan B’

Last November I visited the CERN particle-physics lab near Geneva to attend the 4th International Symposium on the History of Particle Physics, which focused on advances in particle physics during the 1980s and 1990s. As usual, it was a refreshing, intellectually invigorating visit. I’m always inspired by the great diversity of scientists at CERN – complemented this time by historians, philosophers and other scholars of science.

As noted by historian John Krige in his opening keynote address, “CERN is a European laboratory with a global footprint. Yet for all its success it now faces a turning point.” During the period under examination at the symposium, CERN essentially achieved the “world laboratory” status that various leaders of particle physics had dreamt of for decades.

By building the Large Electron Positron (LEP) collider and then the Large Hadron Collider (LHC), the latter with contributions from Canada, China, India, Japan, Russia, the US and other non-European nations, CERN has attracted researchers from six continents. And as the Cold War ended in 1989–1991, two prescient CERN staff members developed the World Wide Web, helping knit this sprawling international scientific community together and enable extensive global collaboration.

The LHC was funded and built during a unique period of growing globalization and democratization that emerged in the wake of the Cold War’s end. After the US terminated the Superconducting Super Collider in 1993, CERN was the only game in town if one wanted to pursue particle physics at the multi-TeV energy frontier. And many particle physicists wanted to be involved in the search for the Higgs boson, which by the mid-1990s looked as if it should show up at accessible LHC energies.

Having discovered this long-sought particle at the LHC in 2012, CERN is now contemplating an ambitious construction project, the Future Circular Collider (FCC). Over three times larger than the LHC, it would study this all-important, mass-generating boson in greater detail using an electron–positron collider dubbed FCC-ee, estimated to cost $18bn and start operations by 2050.

Later in the century, the FCC-hh, a proton–proton collider, would go in the same tunnel to see what, if anything, may lie at much higher energies. That collider, the cost of which is currently educated guesswork, would not come online until the mid 2070s.

But the steadily worsening geopolitics of a fragmenting world order could make funding and building these colliders dicey affairs. After Russia’s expulsion from CERN, little in the way of its contributions can be expected. Chinese physicists had hoped to build an equivalent collider, but those plans seem to have been put on the backburner for now.

And the “America First” political stance of the current US administration is hardly conducive to the multibillion-dollar contribution likely required from what is today the world’s richest (albeit debt-laden) nation. The ongoing collapse of the rules-based world order was recently put into stark relief by the US invasion of Venezuela and abduction of its president Nicolás Maduro, followed by Donald Trump’s menacing rhetoric over Greenland.

While these shocking events have immediate significance for international relations, they also suggest how difficult it may become to fund gargantuan international scientific projects such as the FCC. Under such circumstances, it is very difficult to imagine non-European nations being able to contribute a hoped-for third of the FCC’s total costs.

But the mounting European populist right-wing parties are no great friends of physics either, nor of international scientific endeavours. And Europeans face the not-insignificant costs of military rearmament in the face of Russian aggression and likely US withdrawal from Europe.

So the other two thirds of the FCC’s many billions in costs cannot be taken for granted – especially not during the decades needed to construct its 91 km tunnel, 350 GeV electron–positron collider, the subsequent 100 TeV proton collider, and the massive detectors both machines require.

According to former CERN director-general Chris Llewellyn Smith in his symposium lecture, “The political history of the LHC“, just under 12% of the material project costs of the LHC eventually came from non-member nations. It therefore warps the imagination to believe that a third of the much greater costs of the FCC can come from non-member nations in the current “Wild West” geopolitical climate.

But particle physics desperately needs a Higgs factory. After the 1983 Z boson discovery at the CERN SPS Collider, it took just six years before we had not one but two Z factories – LEP and the Stanford Linear Collider – which proved very productive machines. It’s now been more than 13 years since the Higgs boson discovery. Must we wait another 20 years?

Other options

CERN therefore needs a more modest, realistic, productive new scientific facility – a “Plan B” – to cope with the geopolitical uncertainties of an imperfect, unpredictable world. And I was encouraged to learn that several possible ideas are under consideration, according to outgoing CERN director-general Fabiola Gianotti in her symposium lecture, “CERN today and tomorrow“.

Three of these ideas reflect the European Strategy for Particle Physics, which states that “an electron–positron Higgs factory is the highest-priority next CERN collider”. Two linear electron–positron colliders would require just 11–34 km of tunnelling and could begin construction in the mid-2030s, but would involve a fair amount of technical risk and cost roughly €10bn.

The least costly and risky option, dubbed LEP3, involves installing superconducting radio-frequency cavities in the existing LHC tunnel once the high-luminosity proton run ends. Essentially an upgrade of the 200 GeV LEP2, this approach is based on well-understood technologies and would cost less than €5bn but can reach at most 240 GeV. The linear colliders could attain over twice that energy, enabling research on Higgs-boson decays into top quarks and the triple-Higgs self-interaction.

Other proposed projects involving the LHC tunnel can produce large numbers of Higgs bosons with relatively minor backgrounds, but they can hardly be called “Higgs factories”. One of these, dubbed the LHeC, could only produce a few thousand Higgs bosons annually and would allow other important research on proton structure functions. Another idea is the proposed Gamma Factory, in which laser beams would be backscattered from LHC beams of partially stripped ions. If sufficient photon energies and intensity can be achieved, it will allow research on the γγ → H interaction. These alternatives would cost at most a few billion euros.

As Krige stressed in his keynote address, CERN was meant to be more than a scientific laboratory at which European physicists could compete with their US and Soviet counterparts. As many of its founders intended, he said, it was “a cultural weapon against all forms of bigoted nationalism and anti-science populism that defied Enlightenment values of critical reasoning”. The same logic holds true today.

In planning the next phase in CERN’s estimable history, it is crucial to preserve this cultural vitality, while of course providing unparalleled opportunities to do world-class science – lacking which, the best scientists will turn elsewhere.

I therefore urge CERN planners to be daring but cognizant of financial and political reality in the fracturing world order. Don’t for a nanosecond assume that the future will be a smooth extrapolation from the past. Be fairly certain that whatever new facility you decide to build, there is a solid financial pathway to achieving it in a reasonable time frame.

The future of CERN – and the bracing spirit of CERN – rests in your hands.
