
Fuel cell catalyst requirements for heavy-duty vehicle applications


Heavy-duty vehicles (HDVs) powered by hydrogen-based proton-exchange membrane (PEM) fuel cells offer a cleaner alternative to diesel-powered internal combustion engines for decarbonizing the long-haul transportation sector. The development path of sub-components for HDV fuel-cell applications is guided by a total cost of ownership (TCO) analysis of the truck.

TCO analysis suggests that, because trucks typically operate over very high mileages (~a million miles), the cost of the hydrogen fuel consumed over the lifetime of the HDV dominates over the fuel-cell stack capital expense (CapEx). Commercial HDV applications consume more hydrogen and demand higher durability, meaning that TCO is largely determined by fuel-cell efficiency and catalyst durability.
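
As a rough illustration of why fuel cost tends to dominate, here is a back-of-envelope comparison (a minimal Python sketch; the lifetime mileage is the figure quoted above, while the fuel economy, hydrogen price and stack cost are hypothetical placeholders, not values from the article):

    # Illustrative TCO comparison. Lifetime mileage follows the article's ~1 million miles;
    # the remaining numbers are hypothetical placeholders used only for this estimate.
    lifetime_miles = 1_000_000        # long-haul truck lifetime mileage (article figure)
    miles_per_kg_h2 = 8.0             # hypothetical fuel economy (miles per kg of hydrogen)
    h2_price_per_kg = 5.0             # hypothetical delivered hydrogen price (USD per kg)
    stack_capex = 30_000.0            # hypothetical fuel-cell stack cost (USD)

    lifetime_fuel_cost = lifetime_miles / miles_per_kg_h2 * h2_price_per_kg
    print(f"Lifetime fuel cost: ${lifetime_fuel_cost:,.0f}")          # ~$625,000
    print(f"Stack CapEx:        ${stack_capex:,.0f}")
    print(f"Fuel/CapEx ratio:   {lifetime_fuel_cost / stack_capex:.0f}x")

With numbers of this order, a small gain in stack efficiency (which cuts the hydrogen bill) moves the TCO far more than the same fractional reduction in stack cost.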

This article is written to bridge the gap between the industrial requirements and academic activity for advanced cathode catalysts with an emphasis on durability. From a materials perspective, the underlying nature of the carbon support, Pt-alloy crystal structure, stability of the alloying element, cathode ionomer volume fraction, and catalyst–ionomer interface play a critical role in improving performance and durability.

We provide our perspective on four major approaches currently being pursued – mesoporous carbon supports, ordered PtCo intermetallic alloys, thrifting of the ionomer volume fraction, and shell-protection strategies. While each approach has its merits and demerits, the key developmental needs for the future are highlighted.


Nagappan Ramaswamy joined the Department of Chemical Engineering at IIT Bombay as a faculty member in January 2025. He earned his PhD in 2011 from Northeastern University, Boston, specialising in fuel cell electrocatalysis.

He then spent 13 years working in industrial R&D – two years at Nissan North America in Michigan, USA, focusing on lithium-ion batteries, followed by 11 years at General Motors in Michigan focusing on low-temperature fuel cells and electrolyser technologies. While at GM, he led two multi-million-dollar research projects funded by the US Department of Energy focused on the development of proton-exchange membrane fuel cells for automotive applications.

At IIT Bombay, his primary research interests include low-temperature electrochemical energy-conversion and storage devices such as fuel cells, electrolysers and redox-flow batteries involving materials development, stack design and diagnostics.

Ask me anything: Mažena Mackoit-Sinkevičienė – ‘Above all, curiosity drives everything’

What skills do you use every day in your job?

Much of my time is spent trying to build and refine models in quantum optics, usually with just a pencil, paper and a computer. This requires an ability to sit with difficult concepts for a long time, sometimes far longer than is comfortable, until they finally reveal their structure.

Good communication is equally essential – I teach students; collaborate with colleagues from different subfields; and translate complex ideas into accessible language for the broader public. Modern physics connects with many different fields, so being flexible and open-minded matters as much as knowing the technical details. Above all, curiosity drives everything. When I don’t understand something, that uncertainty becomes my strongest motivation to keep going.

What do you like best and least about your job?

What I like the best is the sense of discovery – the moment when a problem that has evaded understanding for weeks suddenly becomes clear. Those flashes of insight feel like hearing the quiet whisper of nature itself. They are rare, but they bring along a joy that is hard to find elsewhere.

I also value the opportunity to guide the next generation of physicists, whether in the university classroom or through public science communication. Teaching brings a different kind of fulfilment: witnessing students develop confidence, curiosity and a genuine love for physics.

What I like the least is the inherent uncertainty of research. Questions do not promise favourable answers, and progress is rarely linear. Fortunately, I have come to see this unpredictability not as a weakness but as a source of power that forces growth, new perspectives, and ultimately deeper understanding.

What do you know today that you wish you knew when you were starting out in your career?

I wish I had known that feeling lost is not a sign of inadequacy but a natural part of doing physics at a high level. Not understanding something can be the greatest motivator, provided one is willing to invest time and effort. Passion and curiosity matter far more than innate brilliance. If I had realized earlier that steady dedication can carry you farther than talent alone, I would have embraced uncertainty with much more confidence.

Modelling wavefunction collapse as a continuous flow yields insights on the nature of measurement

“God does not play dice.”

With this famous remark at the 1927 Solvay Conference, Albert Einstein set the tone for one of physics’ most enduring debates. At the heart of his dispute with Niels Bohr lay a question that continues to shape the foundations of physics: does the apparently probabilistic nature of quantum mechanics reflect something fundamental, or is it simply due to lack of information about some “hidden variables” of the system that we cannot access?

Physicists at University College London, UK (UCL) have now addressed this question via the concept of quantum state diffusion (QSD). In QSD, the wavefunction does not collapse abruptly. Instead, wavefunction collapse is modelled as a continuous interaction with the environment that causes the system to evolve gradually toward a definite state, restoring some degree of intuition to the counterintuitive quantum world.

A quantum coin toss

To appreciate the distinction (and the advantages it might bring), imagine tossing a coin. While the coin is spinning in midair, it is neither fully heads nor fully tails – its state represents a blend of both possibilities. This mirrors a quantum system in superposition.

When the coin eventually lands, the uncertainty disappears and we obtain a definite outcome. In quantum terms, this corresponds to wavefunction collapse: the superposition resolves into a single state upon measurement.

In the standard interpretation of quantum mechanics, wavefunction collapse is considered instantaneous. However, this abrupt transition is challenging from a thermodynamic perspective because uncertainty is closely tied to entropy. Before measurement, a system in superposition carries maximal uncertainty, and thus maximum entropy. After collapse, the outcome is definite and our uncertainty about the system is reduced, thereby reducing the entropy.
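
In the coin picture, this bookkeeping can be made concrete with the Shannon entropy of the outcome probabilities (a minimal Python sketch illustrating the analogy, not the full quantum entropy accounting discussed in the paper):

    import numpy as np

    def shannon_entropy(probs):
        """Shannon entropy, in bits, of a discrete set of outcome probabilities."""
        p = np.asarray(probs, dtype=float)
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    print(shannon_entropy([0.5, 0.5]))  # spinning coin / equal superposition: 1 bit of uncertainty
    print(shannon_entropy([1.0, 0.0]))  # after a definite outcome: 0 bits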

This apparent reduction in entropy immediately raises a deeper question. If the system suddenly becomes more ordered at the moment of measurement, where does the “missing” entropy go?

From instant jumps to continuous flows

Returning to the coin analogy, imagine that instead of landing cleanly and instantly revealing heads or tails, the coin wobbles, leans, slows and gradually settles onto one face. The outcome is the same, but the transition is continuous rather than abrupt.

This gradual settling captures the essence of QSD. Instead of an instantaneous “collapse”, the quantum state unfolds continuously over time. This makes it possible to track various parameters of thermodynamic change, including a quantity called environmental stochastic entropy production that measures how irreversible the process is.

Another benefit is that whereas standard projective measurements describe an abrupt “yes/no” outcome, QSD models a broader class of generalized or “weak” measurements, revealing the subtle ways quantum systems evolve. It also allows physicists to follow individual trajectories rather than just average outcomes, uncovering details that the standard framework smooths over.
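
To get a feel for what such a trajectory looks like, here is a minimal sketch of a diffusive, QSD-type unravelling for a single qubit monitored along σz (the operator choice, parameters and simple Euler–Maruyama integration are illustrative assumptions, not the UCL team’s actual calculation):

    import numpy as np

    rng = np.random.default_rng(0)

    sz = np.array([[1, 0], [0, -1]], dtype=complex)      # monitored observable
    psi = np.array([1, 1], dtype=complex) / np.sqrt(2)   # start in an equal superposition

    k = 1.0                      # measurement (coupling) strength, arbitrary units
    dt = 1e-3                    # integration time step
    L = np.sqrt(k) * sz          # measurement (Lindblad) operator

    z = []
    for _ in range(5000):
        dW = rng.normal(0.0, np.sqrt(dt))             # Wiener increment (environmental noise)
        expL = np.real(np.conj(psi) @ (L @ psi))      # <L> in the current state
        drift = (expL * (L @ psi) - 0.5 * (L @ (L @ psi)) - 0.5 * expL**2 * psi) * dt
        diffusion = (L @ psi - expL * psi) * dW
        psi = psi + drift + diffusion
        psi /= np.linalg.norm(psi)                    # keep the state normalized
        z.append(np.real(np.conj(psi) @ (sz @ psi)))

    print(z[-1])   # ends close to +1 or -1: the superposition has gradually "settled"

Running this many times gives +1 or −1 with roughly equal frequency – the Born-rule statistics emerge from the ensemble of continuous trajectories rather than from an instantaneous jump.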

“The QSD framework helps us understand how unpredictable environmental influences affect quantum systems,” explains Sophia Walls, a PhD student at UCL and the first author of a paper in Physical Review A on the research. Environmental noise, Walls adds, is particularly important for quantum technologies, making the study’s insights valuable for quantum error correction, control protocols and feedback mechanisms.

Bridging determinism and probability

At first glance, QSD might seem to resemble decoherence, which also arises from system–environment interactions such as noise. But the two differ in scope. “Decoherence explains how a system becomes a classical mixed state,” Walls clarifies, “but not how it ultimately purifies into a single eigenstate.” QSD, with its stochastic term, describes this final purification – the point where the coin’s faint shimmer sharpens into heads or tails.

In this view, measurement is not a single act but a continuous, entropy-producing flow of information between system and environment – a process that gradually results in the manifestation of one of the possible quantum states, rather than an abrupt “collapse”.

“Standard quantum mechanics separates two kinds of dynamics – the deterministic Schrödinger evolution and the probabilistic, instantaneous collapse,” Walls notes. “QSD connects both in a single dynamical equation, offering a more unified description of measurement.”

This continuous evolution makes otherwise intractable quantities, such as entropy production, measurable and meaningful. It also breathes life into the wavefunction itself. By simulating individual realizations, QSD distinguishes between two seemingly identical mixed states: one genuinely entangled with its environment, and another that simply represents our ignorance. Only in the first case does the system dynamically evolve – a distinction invisible in the orthodox picture.

A window on quantum gravity?

Could this diffusion-based framework also illuminate other fundamental questions beyond the nature of measurement? Walls thinks it’s possible. Recent work suggests that stochastic processes could provide experimental clues about how gravity behaves at the quantum scale. QSD may one day offer a way to formalize or test such ideas. “If the nature of quantum gravity can be studied through a diffusive or stochastic process, then QSD would be a relevant framework to explore it,” Walls says.

NPL unveils miniature atomic fountain clock  

A miniature version of an atomic fountain clock has been unveiled by researchers at the UK’s National Physical Laboratory (NPL). Their timekeeper occupies just 5% of the volume of a conventional atomic fountain clock while delivering a time signal with a stability that is on par with a full-sized system. The team is now honing its design to create compact fountain clocks that could be used in portable systems and remote locations.

The ticking of an atomic clock is defined by the frequency of the electromagnetic radiation that is absorbed and emitted by a specific transition between atomic energy levels. Today, the second is defined using a transition in caesium atoms that involves microwave radiation. Caesium atoms are placed in a microwave cavity and a measurement-and-feedback mechanism is used to tune the frequency of the cavity radiation to the atomic transition – creating a source of microwaves with a very narrow frequency range centred at the clock frequency.

The first atomic clocks sent a fast-moving beam of atoms through a microwave cavity. The precision of such a beam clock is limited by the relatively short time that individual atoms spend in the cavity. Also, the speed of the atoms means that the measured frequency peak is shifted and broadened by the Doppler effect.

Launching atoms

These problems were addressed by the development of the fountain clock, in which the atoms are cooled (slowed down) by laser light, which also launches them upwards. The atoms pass through a microwave cavity on the way up, and again as they fall back down. Because the atoms travel much more slowly than in a beam clock, they spend much more time in the cavity, so the time signal from a fountain clock is much more precise than that from a beam clock. However, longer flight times result in greater thermal spread of the atomic cloud – which degrades clock performance. Trading off measurement time against thermal spread means that the caesium fountain clocks that currently define the second have drops of about 30 cm.
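
A quick kinematic estimate shows how much interrogation time the fountain geometry buys (a sketch using the 30 cm drop quoted here; the beam-clock numbers are purely illustrative):

    import math

    g = 9.81    # gravitational acceleration, m/s^2
    h = 0.30    # fountain toss/drop height in metres (figure quoted in the article)

    # Free-evolution time between the upward and downward passes through the cavity
    t_fountain = 2 * math.sqrt(2 * h / g)
    print(f"Fountain interrogation time: {t_fountain:.2f} s")   # ~0.5 s

    # For comparison: atoms in a thermal beam at ~200 m/s crossing ~1 m of apparatus
    # (illustrative numbers, not a specific instrument)
    print(f"Beam-clock transit time:     {1.0 / 200.0 * 1e3:.0f} ms")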

Other components are also needed to operate fountain clocks – including a vacuum system and laser and microwave instrumentation. This pushes the height of a typical clock to about 2 m, and makes it a complex and expensive instrument that cannot be easily transported.

Now, Sam Walby and colleagues at NPL have shrunk the overall height of a rubidium-based fountain clock to 80 cm, while retaining the 30 cm drop. The result is an instrument that is 5% the volume of one of NPL’s conventional caesium atomic fountain clocks.

Precise yet portable

“That’s taking it from barely being able to fit through a doorway, to something one could pick up and carry with one arm,” says Walby.

Despite the miniaturization, the mini-fountain achieved a stability of one part in 10¹⁵ after several days of operation – which NPL says is comparable to full-sized clocks.

Walby told Physics World that the NPL team achieved miniaturization by eliminating two conventional components from their clock design. One is a dedicated chamber used to measure the quantum states of the atoms. Instead, this measurement is made within the clock’s cooling chamber. Also eliminated is a dedicated state-selection microwave cavity, which puts the atoms into the quantum state from which the clock transition occurs.

“The mini-fountain also does this [state] selection,” explains Walby, “but instead of using a dedicated cavity, we use a coax-to-waveguide adapter that is directed into the cooling chamber, which creates a travelling wave of microwaves at the correct frequency.”

The NPL team also reduced the amount of magnetic shielding used, which meant that the edge-effects of the magnetic field had to be more carefully considered. The optics system of the clock was greatly simplified and the use of commercial components means that the clock is low maintenance and easy to operate – according to NPL.

Radical simplification

“By radically simplifying and shrinking the atomic fountain, we’re making ultra-precise timing technology available beyond national labs,” said Walby. “This opens new possibilities for resilient infrastructure and next-generation navigation.”

According to Walby, one potential use of a miniature atomic fountain clock is as a holdover clock. These are devices that produce a very stable time signal when not synchronized with other atomic clocks. This is important for creating resilience in infrastructure that relies on precision timing – such as communications networks, global navigation satellite systems (including GPS) and power grids. Synchronization is usually done using GNSS signals but these can be jammed or spoofed to disrupt timing systems.

Holdover clocks require time errors of just a few nanoseconds over a month, which the new NPL clock can deliver. The miniature atomic clock could also be used as a secondary frequency standard for the SI second.
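
A back-of-envelope check connects these two figures: if the fractional frequency error stays at the 10⁻¹⁵ level over a whole month (an assumption made purely for illustration), the accumulated time error is indeed a few nanoseconds:

    seconds_per_month = 30 * 24 * 3600       # ~2.6 million seconds
    fractional_stability = 1e-15             # stability figure quoted for the mini-fountain
    time_error_ns = fractional_stability * seconds_per_month * 1e9
    print(f"{time_error_ns:.1f} ns accumulated per month")   # ~2.6 ns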

The small size of the clock also lends itself to portable and even mobile applications, according to Walby: “The adaptation of the mini-fountain technology to mobile platforms will be the subject of further developments”.

However, the mini-clock is large when compared to more compact or chip-based clocks – which do not perform as well. Therefore, he believes that the technology is more likely to be implemented on ships or ground vehicles than aircraft.

“At a minimum, it should be easily transportable compared to the current solutions of similar performance,” he says.

“Highly innovative”

Atomic-clock expert Elizabeth Donley tells Physics World, “NPL has been highly innovative in recent years in standardizing fountain clock designs and even supplying caesium fountains to other national standards labs and organizations around the world for timekeeping purposes. This new compact rubidium fountain is a continuation of this work and can provide a smaller frequency standard with comparable performance to the larger fountains based on caesium.”

Donley spent more than two decades developing atomic clocks at the US National Institute of Standards and Technology (NIST) and now works as a consultant in the field. She agrees that miniature fountain clocks would be useful for holding-over timing information when time signals are interrupted.

She adds, “Once the international community decides to redefine the second to be based on an optical transition, it won’t matter if you use rubidium or caesium. So I see this work as more of a practical achievement than a ground-breaking one. Practical achievements are what drives progress most of the time.”

The new clock is described in Applied Physics Letters.

Shining laser light on a material produces subtle changes in its magnetic properties

Researchers in Switzerland have found an unexpected new use for an optical technique commonly used in silicon chip manufacturing. By shining a focused laser beam onto a sample of material, a team at the Paul Scherrer Institute (PSI) and ETH Zürich showed that it was possible to change the material’s magnetic properties on a scale of nanometres – essentially “writing” these magnetic properties into the sample in the same way as photolithography etches patterns onto wafers. The discovery could have applications for novel forms of computer memory as well as fundamental research.

In standard photolithography – the workhorse of the modern chip manufacturing industry – a light beam passes through a transmission mask and projects an image of the mask’s light-absorption pattern onto a (usually silicon) wafer. The wafer itself is covered with a photosensitive polymer called a resist. Changing the intensity of the light leads to different exposure levels in the resist-covered material, making it possible to create finely detailed structures.

In the new work, Laura Heyderman and colleagues in PSI-ETH Zürich’s joint Mesoscopic System group began by placing a thin film of a magnetic material in a standard photolithography machine, but without a photoresist. They then scanned a focused laser beam over the surface of the sample while modulating the intensity of the 405 nm beam to deliver varying doses of light. This process is known as direct write laser annealing (DWLA), and it makes it possible to heat areas of the sample that measure just 150 nm across.

In each heated area, thermal energy from the laser is deposited at the surface and partially absorbed by the film down to a depth of around 100 nm. The remainder dissipates through a silicon substrate coated in 300-nm-thick silicon oxide. However, the thermal conductivity of this substrate is low, which maximizes the temperature increase in the film for a given laser fluence. The researchers also sought to keep the temperature increase as uniform as possible by using thin-film heterostructures with a total thickness of less than 20 nm.

Crystallization and interdiffusion effects

Members of the PSI-ETH Zürich team applied this technique to several technologically important magnetic thin-film systems, including ferromagnetic CoFeB/MgO, ferrimagnetic CoGd and synthetic antiferromagnets composed of Co/Cr, Co/Ta or CoFeB/Pt/Ru. They found that DWLA induces both crystallization and interdiffusion effects in these materials. During crystallization, the orientation of the sample’s magnetic moments gradually changes, while interdiffusion alters the magnetic exchange coupling between the layers of the structures.

The researchers say that both phenomena could have interesting applications. The magnetized regions in the structures could be used in data storage, for example, with the direction of the magnetization (“up” or “down”) corresponding to the “1” or “0” of a bit of data. In conventional data-storage systems, these bits are switched with a magnetic field, but team member Jeffrey Brock explains that the new technique allows electric currents to be used instead. This is advantageous because electric currents are easier to produce than magnetic fields, while data storage devices switched with electricity are both faster and capable of packing more data into a given space.

Team member Lauren Riddiford says the new work builds on previous studies by members of the same group, which showed it was possible to make devices suitable for computer memory by locally patterning magnetic properties. “The trick we used here was to locally oxidize the topmost layer in a magnetic multilayer,” she explains. “However, we found that this works only in a few systems and only produces abrupt changes in the material properties. We were therefore brainstorming possible alternative methods to create gradual, smooth gradients in material properties, which would open possibilities to even more exciting applications and realized that we could perform local annealing with a laser originally made for patterning polymer resist layers for photolithography.”

Riddiford adds that the method proved so fast and simple to implement that the team’s main challenge was to investigate all the material changes it produced. Physical characterization methods for ultrathin films can be slow and difficult, she tells Physics World.

The researchers, who describe their technique in Nature Communications, now hope to use it to develop structures that are compatible with current chip-manufacturing technology. “Beyond magnetism, our approach can be used to locally modify the properties of any material that undergoes changes when heated, so we hope researchers using thin films for many different devices – electronic, superconducting, optical, microfluidic and so on – could use this technique to design desired functionalities,” Riddiford says. “We are looking forward to seeing where this method will be implemented next, whether in magnetic or non-magnetic materials, and what kind of applications it might bring.”

The obscure physics theory that helped Chinese science emerge from the shadows

“The Straton Model of elementary particles had very limited influence in the West,” said Jinyan Liu as she sat with me in a quiet corner of the CERN cafeteria. Liu, who I caught up with during a break in a recent conference on the history of particle physics, was referring to a particular model of elementary particle physics first put together in China in the mid-1960s. The Straton Model was, and still largely is, unknown outside that country. “But it was an essential step forward,” Liu added, “for Chinese physicists in joining the international community.”

Liu was at CERN to give a talk on how Chinese theorists redirected their research efforts in the years after the Cultural Revolution, which ended in 1976. They switched from the Straton Model, which was a politically infused theory of matter favoured by Mao Zedong, the founder of the People’s Republic of China, to mainstream particle physics as practised by the rest of the world. It’s easy to portray the move as the long-overdue moment when Chinese scientists resumed their “real” physics research. But, Liu told me, “actually it was much more complicated”.

A physicist by training, Liu received her PhD on contemporary theories of spontaneous charge-parity (CP) violation from the Institute of Theoretical Physics at the Chinese Academy of Sciences (CAS) in 2013. She then switched to the CAS Institute for the History of Natural Sciences, where she was its first member with a physics PhD. Her initial research topic was the history and development of the Straton Model.

The model is essentially a theory of the structure of hadrons – either baryons (such as protons and neutrons) or mesons (such as pions and kaons). But the model’s origins are as improbable as they are labyrinthine. Mao, who had a keen interest in natural science, was convinced that matter was infinitely divisible, and in 1963 he came across an article by the Marxist-inspired Japanese physicist Shoichi Sakata (1911–1970).

First published in Japanese in 1961 and later translated into Russian, Sakata’s paper was entitled “Dialogues concerning a new view of elementary particles”. It restated Sakata’s belief, which he had been working on since the 1950s, that hadrons are made of smaller constituents – “elementary particles are not the ultimate elements of matter” as he put it. With some Chinese scholars back then still paying close attention to publications from the Soviet Union, their former political and ideological ally, that paper was then translated into Chinese.


This version appeared in the Bulletin of the Studies of Dialectics of Nature in 1963. Mao, who received an issue of that bulletin from his son-in-law, was engrossed in Sakata’s paper, for it seemed to offer scientific support for his own views. Sakata’s article – both in the original Japanese and now in Chinese – cited Friedrich Engels’ view that matter has numerous stages of discrete but qualitatively different parts. In addition, it quoted Lenin’s remark that “even the electron is inexhaustible”.

A wider dimension

“International politics now also entered,” Liu told me, as we discussed the issue further at CERN. A split between China and the Soviet Union had begun to open up in the late 1950s, with Mao breaking off relations with the Soviet Union and starting to establish non-governmental science and technology exchanges between China and Japan. Indeed, when China hosted the Peking Symposium of foreign scientists in 1964, Japan brought the biggest delegation, with Sakata as its leader.

At the event, Mao personally congratulated Sakata on his theory. It was, Sakata later recalled, “the most unforgettable moment of my journey to China”. In 1965, Sakata’s paper was retranslated from the Japanese original, with an annotated version published in Red Flag and the newspaper Renmin ribao, or “People’s Daily”, both official organs of the Chinese Communist Party.


Chinese physicists, who had been assigned to work on the atomic bomb and other research deemed important by the Communist Party, now started to take note. Uninterested in philosophy, they realized that they could capitalize on Mao’s enthusiasm to make elementary particle physics a legitimate research direction.

As a result, 39 members of CAS, Peking University and the University of Science and Technology of China formed the Beijing Elementary Particle Group. Between 1965 and 1966, they wrote dozens of papers on a model of hadrons inspired by both Sakata’s work and quark theory based on the available experimental data. It was dubbed the Straton Model because it involved layers or “strata” of particles nested in each other.

Liu has interviewed most surviving members of the group and studied details of the model. It differed from the model being developed at the time by the US theorist Murray Gell-Mann, which regarded quarks not as physical particles but as mathematical elements. As Liu discovered, Chinese particle physicists were now given resources they’d never had before. In particular, they could use computers, which until then had been devoted to urgent national defence work. “To be honest,” Liu chuckled, “the elementary particle physicists didn’t use computers much, but at least they were made available.”

The high-water mark for the Straton Model occurred in July 1966 when members of the Beijing Elementary Particle Group presented it at a summer physics colloquium organized by the China Association for Science and Technology. The opening ceremony was held in Tiananmen Square, in what was then China’s biggest conference centre, with attendees including Abdus Salam from Imperial College London. The only high-profile figure to be invited from the West, Salam was deemed acceptable because he was science advisor to the president of Pakistan, a country considered outside the western orbit.

The proceedings of the colloquium were later published as “Research on the theory of elementary particles carried out under the brilliant illumination of Mao Tse-Tung’s thought”. Its introduction was what Liu calls a “militant document” – designed to reinforce the idea that the authors were carrying Mao’s thought into scientific research to repudiate “decadent feudal, bourgeois and revisionist ideologies”.

Participants in Beijing had expected to make their advances known internationally by publishing the proceedings in English. But the Cultural Revolution had begun just two months before, and publications in English were forbidden. “As a result,” Liu told me, “the model had very limited influence outside China.” Sakata, however, had an important influence on Japanese theorists, having co-authored the key paper on neutrino flavour oscillation (Prog. Theor. Phys. 28 870).

A resurfaced effort

In recent years, Liu has shed new light on the Straton Model, writing a paper in the journal Chinese Annals of History of Science and Technology (2 85). In 2022, she also published a Chinese-language book entitled Constructing a Theory of Hadron Structure: Chinese Physicists’ Straton Model, which describes the downfall of the model after 1966. None of its predicted material particles appeared, though a candidate event once occurred in a cosmic ray observatory in the south of China.

By 1976, quantum chromodynamics (QCD) had convincingly emerged as the established model of hadrons. The effective end of the Straton Model took place at a conference in January 1980 in Conghua, near Hong Kong. Hung-Yuan Tzu, one of the key leaders of the Beijing Group, gave a paper entitled “Reminiscences of the Straton Model”, signalling that physics had moved on.

During our meeting at CERN, Liu showed me photos of the 1980 event. “It was a very important conference in the history of Chinese physics,” she said, “the first opening to Chinese physicists in the West”. Visits by Chinese expatriates were organized by Tsung-Dao Lee and Chen-Ning Yang, who shared the 1957 Nobel Prize for Physics for their work on parity violation.

The critical point

It is easy for westerners to mock the Straton Model; Sheldon Glashow once quipped that it was about “Maons”. But Liu sees it as significant research that had many unexpected consequences, such as helping to advance physics research in China. “It gave physicists a way to pursue quantum field theory without having to do national defence work”.

The model also trained young researchers in particle physics and honed their research competence. After the post-Cultural Revolution reform and its opening to the West, these physicists could then integrate into the international community. “The story,” Liu said, “shows how ingeniously the Chinese physicists adapted to the political situation.”

A surprising critical state emerges in active nematic materials

Nematics are materials made of rod-like particles that tend to align in the same direction. In active nematics, this alignment is constantly disrupted and renewed because the particles are driven by internal biological or chemical energy. As the orientation field twists and reorganises, it creates topological defects – points where the alignment breaks down. These defects are central to the collective behaviour of active matter, shaping flows, patterns, and self-organisation.

In this work, the researchers identify an active topological phase transition that separates two distinct regimes of defect organisation. As the system approaches this transition from below, the dynamics slow dramatically: the relaxation of defect density becomes sluggish, fluctuations in the number of defects grow in amplitude and lifetime, and the system becomes increasingly sensitive to small changes in activity. At the critical point, defects begin to interact over long distances, with correlation lengths that grow with system size. This behaviour produces a striking dual-scaling pattern: defect fluctuations appear uniform at small scales but become anti-hyperuniform at larger scales, meaning that the number of defects varies far more than expected from a random distribution.

A key finding is that this anti‑hyperuniformity originates from defect clustering. Rather than forming ordered structures or undergoing phase separation, defects tend to appear near existing defects, creating multiscale clusters. This distinguishes the transition from well‑known defect‑unbinding processes such as the Berezinskii-Kosterlitz-Thouless transition in passive nematics or the nematic-isotropic transition in screened active systems. Above the critical activity, the system enters a defect‑laden turbulent state where defects are more uniformly distributed and correlations become short‑ranged and negative.
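
One way to picture this clustering-driven behaviour is to compare number fluctuations in a purely random point pattern with those in a clustered one (a toy Python sketch; the cluster model and all parameters are illustrative and are not taken from the authors’ simulations):

    import numpy as np

    rng = np.random.default_rng(1)

    def number_variance(points, box, window, trials=2000):
        """Variance of the number of points inside randomly placed square windows."""
        counts = []
        for _ in range(trials):
            x0, y0 = rng.uniform(0, box - window, size=2)
            inside = ((points[:, 0] >= x0) & (points[:, 0] < x0 + window) &
                      (points[:, 1] >= y0) & (points[:, 1] < y0 + window))
            counts.append(inside.sum())
        return np.var(counts)

    box, n = 100.0, 4000

    # Uncorrelated ("Poisson") defects: the number variance grows like the window area
    poisson = rng.uniform(0, box, size=(n, 2))

    # Clustered defects: new points appear near existing ones, mimicking defect clustering
    parents = rng.uniform(0, box, size=(n // 20, 2))
    clustered = (parents[rng.integers(0, len(parents), size=n)]
                 + rng.normal(0, 1.0, size=(n, 2))) % box

    for w in (2.0, 5.0, 10.0):
        print(w, number_variance(poisson, box, w), number_variance(clustered, box, w))

At large windows the clustered pattern shows far bigger (super-Poissonian) number fluctuations than the random one – the same qualitative signature described as anti-hyperuniform above.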

The researchers confirm these behaviours experimentally using large-field-of-view measurements of endothelial cell monolayers – the cells that line blood vessels. The same dual-scaling behaviour, long-range correlations, and clustering appear in these living tissues, demonstrating that the transition is robust across system sizes, parameter variations, frictional damping, and boundary conditions.

Read the full article

Anti-hyperuniform critical states of active topological defects

Simon Guldager Andersen et al 2025 Rep. Prog. Phys. 88 108101

Do you want to learn more about this topic?

Active phase separation: new phenomenology from non-equilibrium physics M E Cates and C Nardini (2025)

Non-Abelian anyons: anything but easy

Topological quantum computing is a proposed approach to building quantum computers that aims to solve one of the biggest challenges in quantum technology: error correction.

In conventional quantum systems, qubits are extremely sensitive to their environment and even tiny disturbances can cause errors. Topological quantum computing addresses this by encoding information in the global properties of a system: the topology of certain quantum states.

These systems rely on the use of non-Abelian anyons, exotic quasiparticles that can exist in two-dimensional materials under special conditions.

The main challenge faced by this approach to quantum computing is the creation and control of these quasiparticles.

One possible source of non-Abelian anyons is the fractional quantum Hall state (FQH): an exotic state of matter which can exist at very low temperatures and high magnetic fields.

These states come in two forms: even-denominator and odd-denominator. Here, we’re interested in the even-denominator states – the more interesting but less well understood of the two.

In this latest work, researchers have observed this exotic state in gallium arsenide (GaAs) two-dimensional hole systems.

Typically, FQH states are isotropic, showing no preferred direction. Here, however, the team found states that are strongly anisotropic, suggesting that the system spontaneously breaks rotational symmetry.

This means that it forms a nematic phase – similar to liquid crystals – where molecules align along a direction without forming a rigid structure.

This spontaneous symmetry breaking adds complexity to the state and can influence how quasiparticles behave, interact, and move.

This is the first observation of spontaneous nematicity in an even-denominator fractional quantum Hall state.

Although there are many questions left to be answered, the properties of this system could be hugely important for topological quantum computers as well as other novel quantum technologies.

Read the full article

Even-denominator fractional quantum Hall states with spontaneously broken rotational symmetry

C. Wang et al 2025 Rep. Prog. Phys. 88 100501

Physicist Norbert Holtkamp takes over as head of Fermilab


Particle physicist Norbert Holtkamp has been appointed the new director of Fermi National Accelerator Laboratory. He took up the position on 12 January, replacing Young-Kee Kim from the University of Chicago, who held the job on an interim basis following the resignation of Lia Merminga last year.

With a PhD in physics from the Technical University in Darmstadt, Germany, Holtkamp has managed large scientific projects throughout his career.

Holtkamp is the former deputy director of the SLAC National Accelerator Laboratory at Stanford University, where he managed the construction of the Linac Coherent Light Source upgrade, the world’s most powerful X-ray laser, along with more than $2bn of onsite construction projects.

Holtkamp also previously served as the principal deputy director general for the international fusion project ITER, which is currently under construction in Cadarache, France.

Holtkamp worked at Fermilab between 1998 and 2001, where he helped commission the Main Injector and led a study on the feasibility of an intense neutrino source based on a muon storage ring.

One of Holtkamp’s main aims as Fermilab boss will be to oversee the completion of the $5bn Long-Baseline Neutrino Facility-Deep Underground Neutrino Experiment (LBNF-DUNE) at Fermilab, which is expected to come online towards the end of the decade.

LBNF-DUNE will study the properties of neutrinos in unprecedented detail, as well as the differences in behaviour between neutrinos and antineutrinos. The DUNE detector, which lies about 1300 km from Fermilab, will measure the neutrinos that are generated by Fermilab’s accelerator complex, which is just outside Chicago.

In a statement, Holtkamp said he is “deeply honoured” to lead the lab. “Fermilab has done so much to advance our collective understanding of the fundamentals of our universe,” he says. “I am committed to ensuring the laboratory remains the neutrino capital of the world, and the safe and successful completion of LBNF-DUNE is key to that goal. I’m excited to rejoin Fermilab at this pivotal moment to guide this project and our other important modernization efforts to prepare the lab for a bright future.”

Managerial experience

Fermilab has experienced a difficult few years, with questions raised about its internal management and external oversight. In August 2024 a group of anonymous self-styled whistleblowers published a 113-page “white paper” on the arXiv preprint server, asserting that the lab was “doomed without a management overhaul”.

Then in October that year, a new organization – Fermi Forward Discovery Group – was announced to manage the lab for the US Department of Energy. That move came under scrutiny given it is dominated by the University of Chicago and Universities Research Association (URA), a consortium of research universities, which had already been part of the management since 2007. Then a month later, almost 2.5% of Fermilab’s employees were laid off.

“We’re excited to welcome Norbert, who brings a wealth of scientific and managerial experience to Fermilab,” noted University of Chicago president Paul Alivisatos, who is also chair of the board of directors of Fermi Forward Discovery Group.

Alivisatos thanked Kim for her “tireless service” as director. “[Kim] played a critical role in strengthening relationships with Fermilab’s leading stakeholders, driving the lab’s modernization efforts, and positioning Fermilab to amplify DOE’s broader goals in areas like quantum science and AI,” added Alivisatos.

CERN accepts $1bn in private cash towards Future Circular Collider

The CERN particle-physics lab near Geneva has received $1bn from private donors towards the construction of the Future Circular Collider (FCC). The cash marks the first time in the lab’s 72-year history that individuals and philanthropic foundations have agreed to support a major CERN project. If built, the FCC would be the successor to the Large Hadron Collider (LHC), where the Higgs boson was discovered.

CERN originally released a four-volume conceptual design report for the FCC in early 2019, with more detail included in a three-volume feasibility study that came out last year. It calls for a giant tunnel some 90.7 km in circumference – roughly three times as long as the LHC – that would be built about 200 m underground on average.

The FCC has been recommended as the preferred option for the next flagship collider at CERN in the ongoing process to update the European Strategy for Particle Physics, which will be passed over to the CERN Council in May 2026. If the plans are given the green light by the CERN Council in 2028, construction on the FCC electron-positron machine, dubbed FCC-ee, would begin in 2030. It would start operations in 2047, a few years after the High Luminosity LHC (HL-LHC) closes down, and run for about 15 years until the early 2060s.

The FCC-ee would focus on creating a million Higgs particles in total to allow physicists to study the boson’s properties with an accuracy an order of magnitude better than is possible with the LHC. The FCC feasibility study then calls for a hadron machine, dubbed FCC-hh, to replace the FCC-ee in the same tunnel. It would be a “discovery machine”, smashing together protons at high energy – about 85 TeV – with the aim of creating new particles. If built, the FCC-hh will begin operation in 2073 and run to the end of the century.

The funding model for the FCC-ee, which is expected to have a price tag of about $18bn, is still a work in progress. But it is estimated that at least two-thirds of the construction costs will come from CERN’s 24 member states with the rest needing to be found elsewhere. One option to plug that gap is private donations and in late December CERN received a significant boost from several organizations including the Breakthrough Prize Foundation, the Eric and Wendy Schmidt Fund for Strategic Innovation, and the entrepreneurs John Elkann and Xavier Niel. Together, they pledged a total of $1bn towards the FCC-ee.

Costas Fountas, president of the CERN Council, says CERN is “extremely grateful” for the interest. “This once again demonstrates CERN’s relevance and positive impact on society, and the strong interest in CERN’s future that exists well beyond our own particle physics community,” he notes.

Eric Schmidt, the former chief executive of Google, says that he and Wendy Schmidt were “inspired by the ambition of this project and by what it could mean for the future of humanity”. The FCC, he believes, is an instrument that “could push the boundaries of human knowledge and deepen our understanding of the fundamental laws of the Universe” and could lead to technologies that could benefit society “in profound ways” from medicine to computing to sustainable energy.

The cash promised has been welcomed by outgoing CERN director-general Fabiola Gianotti. “It’s the first time in history that private donors wish to partner with CERN to build an extraordinary research instrument that will allow humanity to take major steps forward in our understanding of fundamental physics and the universe,” she said. “I am profoundly grateful to them for their generosity, vision, and unwavering commitment to knowledge and exploration.”

Further boost

The cash comes a few months after the Circular Electron–Positron Collider (CEPC) – a rival collider to the FCC-ee that also involves building a huge 100 km tunnel to study the Higgs in unprecedented detail – was not considered for inclusion in China’s next five-year plan, which runs from 2026 to 2030. There has been much discussion in China about whether the CEPC is the right project for the country, with the collider facing criticism from particle physicist and Nobel laureate Chen-Ning Yang before his death last year.

Wang Yifang of the Institute of High Energy Physics (IHEP) in Beijing says they will submit the CEPC for consideration again in 2030 unless the FCC is officially approved before then. But for particle theorist John Ellis from King’s College London, China’s decision to effectively put the CEPC on the back burner “certainly simplifies the FCC discussion”. “However, an opportunity for growing the world particle physics community has been lost, or at least deferred [by the decision],” Ellis told Physics World.

Ellis adds, however, that he would welcome China’s participation in the FCC. “Their accelerator and detector [technical design reviews] show that they could bring a lot to the table, if the political obstacles can be overcome,” he says.

However, if the FCC-ee goes ahead, China could perhaps make significant “in-kind” contributions rather like those that occur with the ITER experimental fusion reactor, which is currently being built in France. In this case, instead of cash payments, the countries provide components, equipment and other materials.

Those considerations and more will now fall to the British physicist Mark Thomson, who took over from Gianotti as CERN director-general on 1 January for a five-year term. As well as working on funding requirements for the FCC-ee, top of his in-tray will be shutting down the LHC in June to make way for further work on the HL-LHC, which involves installing powerful new superconducting magnets and improving the detectors.

About 90% of the 27 km LHC accelerator will be affected by the upgrade with a major part being to replace the magnets in the final focus systems of the two large experiments, ATLAS and CMS. These magnets will take the incoming beams and then focus them down to less than 10 µm in cross section. The upgrade includes the installation of brand new state-of-the-art niobium-tin (Nb3Sn) superconducting focusing magnets.

The HL-LHC will probably not turn on until 2030, at which time Thomson’s term will nearly be over, but that doesn’t deter him from leading the world’s foremost particle-physics lab. “It’s an incredibly exciting project,” Thomson told the Guardian. “It’s more interesting than just sitting here with the machine hammering away.”

Copyright © 2026 by IOP Publishing Ltd and individual contributors