Revealing hidden orbital topology in light-element materials

Topological insulators are insulators in the bulk and conductors on the surface. This behaviour is caused by spin-orbit coupling, a property that is stronger in heavier elements. Therefore, most topological insulators are made using heavy elements, such as bismuth selenide (Bi₂Se₃) and antimony telluride (Sb₂Te₃). In this research, the authors introduce orbital Chern insulators, a topological phase in which the orbital angular momentum of electrons, rather than their spin, drives the nontrivial topology. This allows topological behaviour to emerge in materials composed of much lighter elements, demonstrated using monolayer blue phosphorus, which was previously regarded as a trivial insulator.

The authors introduce a feature‑spectrum topology framework, a systematic method for identifying and characterizing materials with orbital‑driven topology. Using this approach, they show that blue phosphorene hosts the first pure orbital Chern insulator, in which the orbital topology is fully disentangled from the spin and valley degrees of freedom. As a result, the material exhibits a pure orbital Hall effect that can be experimentally distinguished from spin and valley Hall responses, unlike in transition‑metal dichalcogenides, where spin-orbit coupling and valley physics are intertwined.
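
For readers who want to see how a Chern number – the integer that labels these topological phases – is computed in practice, the following minimal Python sketch applies the standard Fukui–Hatsugai lattice method to the generic two-band Qi–Wu–Zhang model. It illustrates the general diagnostic only; it is not the feature-spectrum framework or the phosphorene calculation reported in the paper.

```python
import numpy as np

# Qi-Wu-Zhang two-band toy model, H(k) = d(k) . sigma.
# A generic Chern insulator, not the orbital model from the paper.
def hamiltonian(kx, ky, m=1.0):
    dx, dy, dz = np.sin(kx), np.sin(ky), m - np.cos(kx) - np.cos(ky)
    return np.array([[dz, dx - 1j*dy],
                     [dx + 1j*dy, -dz]])

def lower_band_state(kx, ky):
    _, vecs = np.linalg.eigh(hamiltonian(kx, ky))
    return vecs[:, 0]                     # eigenvector of the lower band

def chern_number(n=60):
    ks = np.linspace(0, 2*np.pi, n, endpoint=False)
    u = np.array([[lower_band_state(kx, ky) for ky in ks] for kx in ks])
    total = 0.0
    for i in range(n):
        for j in range(n):
            # Product of link variables around one plaquette of the k-grid
            # (Fukui-Hatsugai method); its phase is the Berry flux there.
            u1, u2 = u[i, j], u[(i+1) % n, j]
            u3, u4 = u[(i+1) % n, (j+1) % n], u[i, (j+1) % n]
            link = (np.vdot(u1, u2) * np.vdot(u2, u3) *
                    np.vdot(u3, u4) * np.vdot(u4, u1))
            total += np.angle(link)
    return total / (2*np.pi)

print(round(chern_number()))   # expect |C| = 1 in the topological phase (m = 1)
```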

Because orbital Chern insulators do not rely on spin-orbit coupling, they are not constrained by the small band gaps typical of spin-orbit-coupling-driven topological insulators, and can potentially support larger band gaps in light‑element systems. The authors also show that orbital nontriviality is expected more broadly in Group 5A monolayers with buckled or puckered structures, expanding the landscape of candidate materials. This research opens a path for orbitronics, in which currents of orbital angular momentum, rather than the spin currents used in spintronics, can be generated, controlled and applied in future quantum and electronic devices.

Read the full article

Orbital topology induced orbital Hall effect in two-dimensional insulators

Yueh-Ting Yao et al 2026 Rep. Prog. Phys. 89 018001

Do you want to learn more about this topic?

Interacting topological insulators: a review by Stephan Rachel (2018)

Decoding the impact of sudden shocks: A new predictive framework for climate and complex systems

Linear Response Theory (LRT) is a cornerstone of statistical physics. It predicts how a system at (or near) equilibrium responds to small external perturbations—an idea tied to the fluctuation-dissipation relation. Essentially, if you understand a system’s natural fluctuations, you can infer how it will react to weak forcing without running a full, computationally heavy simulation.

Traditionally, LRT was developed for systems with Gaussian noise—smooth, continuous fluctuations. While this works well for phenomena like thermal fluctuations, many real-world systems also experience sudden jumps or shocks, modeled mathematically as Lévy processes. Think volcanic eruptions, market crashes, or sudden disease outbreaks.

Incorporating these sudden shocks into LRT has been a long-standing goal for statistical physicists. A recent paper published in ROPP has made a major step forward by establishing linear response theory for a broad and fundamental class of systems: mixed jump-diffusion models, which include Lévy processes.

By generalizing the fluctuation-dissipation theorem for this class of models, their response formulas allow scientists to assess how these systems respond to structural perturbations. Crucially, this works even with respect to changes in the underlying noise law itself, allowing for much tighter uncertainty quantification.
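
As a rough illustration of why such noise-law-independent response formulas are useful (a toy example, not one of the paper's models), the following sketch simulates a one-dimensional linear relaxation process driven by both Gaussian noise and Poisson jumps. For this linear drift the stationary mean responds to a weak constant forcing with susceptibility 1/θ whatever the jump statistics – the kind of statement the generalized fluctuation-dissipation theorem makes rigorous for far richer systems.

```python
import numpy as np

rng = np.random.default_rng(0)

theta, sigma = 1.0, 0.5         # relaxation rate, diffusion amplitude
lam, jump_scale = 0.3, 1.0      # jump rate and jump-size scale (Laplace jumps)
dt, n_steps, n_paths = 0.01, 20000, 200
eps = 0.2                       # weak constant forcing

def simulate(forcing=0.0):
    """Euler scheme for dx = (-theta*x + forcing) dt + sigma dW + jumps."""
    x = np.zeros(n_paths)
    means = np.empty(n_steps)
    for t in range(n_steps):
        n_jumps = rng.poisson(lam*dt, n_paths)          # compound-Poisson kicks
        kicks = n_jumps * rng.laplace(0.0, jump_scale, n_paths)
        x += (-theta*x + forcing)*dt + sigma*np.sqrt(dt)*rng.normal(size=n_paths) + kicks
        means[t] = x.mean()
    return means

# Directly measured response of the stationary mean to the forcing...
measured = simulate(eps)[-5000:].mean() - simulate(0.0)[-5000:].mean()
# ...versus the linear-response prediction, independent of the jump statistics.
print(f"measured shift {measured:.3f}  vs  predicted {eps/theta:.3f}")
```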

The authors—a team of researchers from Israel, UK, USA and Sweden—note that this framework provides foundational support for “optimal fingerprinting”—a statistical methodology used to confidently associate observed changes with specific causal mechanisms. By proving this approach works even under complex stochastic forcings, their findings strengthen a key aspect of the science behind climate change, grounding and expanding Hasselmann’s seminal work on detection and attribution. Importantly, this pathway for causally linking signals with acting forcings extends well beyond climate to a massive class of complex systems.

To demonstrate the theory’s predictive power, the team applied it to complex climate scenarios, including the El Niño-Southern Oscillation (ENSO)—a large-scale climate pattern in the tropical Pacific Ocean. In a more challenging application, they used their LRT to perform accurate climate change projections in the spatially extended Ghil–Sellers energy balance climate model subject to random, abrupt perturbations. They showed that despite strong nonlinearities in model formulations—such as the complex “if-then” decision-making structures often used to parameterize ocean and atmospheric convection—LRT can still be robustly applied. This strengthens the argument for using this approach to perform accurate climate change projections and to rigorously assess a system’s proximity to tipping points.

Ultimately, this work doesn’t just improve predictions of how climate models respond to perturbations; it provides a new blueprint for understanding how any complex system reacts to sudden shocks, paving the way for better predictions in biology, finance, and quantitative social sciences.

Read the full article

Kolmogorov modes and linear response of jump-diffusion models

Mickaël D Chekroun et al 2025 Rep. Prog. Phys. 88 127601

Love, Tito’s: vodka maker funds physics research

As a freelance writer, I’m not usually one to go down rabbit holes when it comes to research funding. That changed when Physics World spotted an intriguing source of support in a paper I was covering on exotic phases in quantum materials. The study, published in Nature Materials and led by Edoardo Baldini at the University of Texas at Austin, was partially funded by Love, Tito’s – the philanthropic arm of the Texas-based distillery, Tito’s Handmade Vodka.

The link between vodka and quantum materials was a story I was eager to explore, but a quick look at the company’s website was enough to shed some light on how this connection came about. Since its beginnings in 2015, Love, Tito’s has donated tens of millions of dollars to multidisciplinary research, with no direct ties to the company’s business operations.

Much of this funding has supported projects ranging from cancer therapies to ocean cleanups – efforts whose vital importance will be immediately clear to the public. Yet the team at Love, Tito’s also demonstrates a clear appreciation for the broader, often understated relevance of physics.

New passion

As I read more about Bert “Tito” Beveridge, the roots of this appreciation became even clearer. In 1992, between jobs in oil rigging and mortgage lending, Beveridge developed a new passion: infusing affordable, high-quality vodka using fresh ingredients.

Faced with limited funding and disinterested investors, his operation encountered hurdles at every turn. But with an engaged and analytical mindset, he tackled these challenges systematically – even studying prohibition-era photographs of distillery setups, and curating advice from colleagues in the oil industry. Working with the resources available to him, Beveridge refined his process, and within a decade, his vodka was winning national awards.

“Tito Beveridge has always been a scientist at heart,” says Sarah Everett, director of global impact and research at Tito’s Handmade Vodka.

Even before achieving success, the company had committed itself to philanthropic work – finding homes for stray dogs that wandered onto the distillery grounds, and connecting local communities with volunteer opportunities. Alongside these efforts, “we have a special focus on scientific research, through our CHEERS initiative: Creating Hope and Elevating Emerging Research and Science,” Everett says.

Invaluable grant

Today, CHEERS provides grants across a wide range of disciplines, including physics. For Edoardo Baldini’s team, a $1.4 million donation from the programme has proved invaluable in advancing their discovery of exotic quantum clock states.

“The gift helped us establish a state-of-the-art sample preparation and handling station for atomically thin materials, which directly supports experiments like those in this study by improving sample quality and reproducibility. It also supported a laser system for a time-resolved momentum microscope,” Baldini says.

These resources are already enabling the next phase of the team’s research: probing ultrafast phenomena in atomically thin systems and tracking how their electronic structure evolves following photoexcitation.

“The experimental work in Baldini’s group provides the basis for developing advanced materials for a wide range of applications, with implications that will be far reaching beyond the walls of his lab,” Everett adds.

With the foresight to recognize the societal relevance of physics – beyond the fields that typically dominate headlines – a company that once defied the odds to build a vodka brand is now helping to support research that could lead to technological solutions for some of the world’s most urgent challenges.

“By supporting fundamental research on quantum materials, this gift reflects Love, Tito’s broader interest in advancing scientific discovery that can ultimately contribute to addressing major societal challenges, including the development of future energy and information technologies,” Baldini says.

A rubbish challenge: how do we dump space junk?

Among the working satellites and telescopes orbiting our planet is a lot of rubbish. From full satellites that no longer work to tiny bolts shed as spacecraft release spent rockets, there are millions of human-made pieces of debris in the space around Earth.

The problem is a hot topic within the space community. The presence of space junk has implications for both ground and space-based astronomy; there is an impact to atmospheric science that we’re only just beginning to understand; and it also presents a threat to our highly space-reliant society.

To highlight what is being – and needs to be – done to tackle the issue of space junk, experts Katherine Courtney and Alice Gorman talked to Physics World online editor Margaret Harris as part of a Physics World Live panel discussion in November 2025.

Courtney started her career developing products and services for the telecoms industry before moving to the public sector and working in the UK government. While she was the chief executive of the UK Space Agency she came to realize the impact of space debris.

Courtney is now chair of the Global Network on Sustainability in Space (GNOSIS), which has about 1000 members from research and industry across more than 45 countries. GNOSIS aims to accelerate research and development efforts to tackle problems like space debris. Courtney also mentors start-up companies that are trying to solve these problems and does outreach with young people to educate them on the topic.

Gorman studied archaeology and for several years worked on terrestrial projects before becoming a space archaeologist. Now at Flinders University, Australia, she is known as Dr Space Junk, and focuses not just on debris in Earth orbit, but also planetary landing sites, deep space probes, terrestrial rocket launch sites and tracking stations.

Gorman’s research into space junk involves looking at objects in an environmental context, examining their cultural value and what it means to retain these objects. Along with Justin Walsh, she trained crew on the International Space Station to do what was effectively the first archaeological field survey outside Earth.

What is space junk and how much is there in orbit around Earth?

Alice Gorman: Space junk is commonly defined as any object in space that does not now or in the foreseeable future serve a useful purpose. The biggest contributors to the space debris population are the US, Russia and China.

The latest figures estimate that there are 54,000 human-made objects in orbit that are larger than 10 cm, including over 14,000 operating satellites and spacecraft. Envisat is one of the largest in that category, being 26 m long. There are also medium-size objects, which can be anything from 1–10 cm. Current statistical models estimate there are about 1.2 million objects of this size. At an even smaller scale, there’s an estimated 140 million objects 1 mm to 1 cm in size.

Not all these objects are tracked and catalogued – the number regularly tracked by Space Surveillance Networks is only about 44,870. But that doesn’t mean that’s everything there is – that’s just the things we can see and know are there.

Taking up space


The evolution in the number of different types of human-made debris in geocentric orbit, as recorded by the European Space Agency (ESA):

  • Payload – an object designed to perform a specific function in space (excluding launch functionality). This includes operational satellites as well as calibration objects.
  • Payload fragmentation debris – an object that has fragmented or been unintentionally released from a payload as space debris, with origins that can be traced back to a unique event. This class includes objects created when a payload explodes or when it collides with another object.
  • Payload debris – an object that has fragmented or been unintentionally released from a payload as space debris for an unknown reason, but whose orbital or physical properties allow it to be traced to a source.
  • Payload mission related object – an object that served a purpose for the payload and has intentionally been released as space debris. Common examples include covers for optical instruments or astronaut tools.
  • Rocket body – an object designed to perform launch-related functionality. This includes the various orbital stages of launch vehicles, but not payloads that release smaller payloads themselves.
  • Rocket fragmentation debris – an object that has fragmented or been unintentionally released from a rocket body as space debris, with origins that can be traced back to a unique event. This class includes objects created when a launch vehicle explodes.
  • Rocket debris – an object that has fragmented or been unintentionally released from a rocket body as space debris for an unknown reason, but whose orbital or physical properties allow it to be traced to a source.
  • Rocket mission related object – an object intentionally released as space debris that served a purpose for the function of a rocket body. Common examples include shrouds and engines.
  • Unidentified – an object that has not been traced back to a launch event.

What sorts of objects make up space junk?

Alice Gorman: First, there are whole satellites that no longer work. There are the upper stage rocket bodies that are left in orbit after they’ve delivered their payload – and in some cases are still attached. There are bolts, lens caps, fuel tanks – all kinds of debris that are released into orbit as part of a spacecraft’s mission or satellite launch.

Then you have the hundreds of thousands of fragments from exploded spacecraft. There have also been a number of anti-satellite tests that have added to the debris population. One notorious example was when China destroyed its own Fengyun-1C satellite using a missile in 2007. The event created around 3500 trackable objects and many more smaller pieces of debris, a lot of which are still in orbit.

There are also all the tiny fragments resulting from debris being continually bombarded by micrometeoroids and other bits of space junk. Plus, materials decay and erode when they’re in space.

Where is all this space debris?

Alice Gorman: The most congested area is low-Earth orbit – about 200 to 2000 km above sea level. Among the working satellites in this orbit are around 9000 that are part of SpaceX’s Starlink network.

Medium-Earth orbit (between roughly 2000 and 35,000 km) has a lot of stuff in it but also contains the Van Allen radiation belts so tends to be avoided. Then we get to geosynchronous and geostationary orbit at 35,786 km, where a lot of telecoms satellites are. Finally, beyond that is the graveyard orbit, where geostationary satellites that no longer work are sometimes boosted up to.

What hazards do these human-made objects pose to the space environment?

Katherine Courtney: First you have to consider just how dependent we are on the infrastructure that is orbiting the planet. The Internet, mobile telephones, banking networks, utility grids, emergency services, food distribution, climate change monitoring, stock markets – all of these things and so many more depend on space.

In 1978 American astrophysicist Donald Kessler proposed that if certain orbits get too congested with debris and active satellites there could be a collision that triggers a chain reaction of further collisions, making those areas of space unusable for generations. It’s what’s known as the Kessler Syndrome.

Kessler and UK astronautics engineer Hugh Lewis recently released an update to that original paper. Using European Space Agency (ESA) data on space debris, they determined that Kessler Syndrome is actually already happening at some orbits, and there are a whole range of other orbits that are now considered unstable and potentially at risk.

We don’t know for sure that we’re at that catastrophe scenario where the orbits become too congested with objects that can’t be controlled by humans. But the modelling suggests we are well on our way to that situation.
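
The runaway logic behind the Kessler syndrome can be captured in a deliberately crude toy model (with purely illustrative parameters, not ESA's data or Kessler and Lewis's model): debris is added by launches, removed by atmospheric decay, and multiplied by debris-debris collisions at a rate proportional to the square of the population. Below a critical population the losses win; above it, collisions feed on themselves.

```python
import numpy as np

# Toy debris cascade:  dN/dt = launches - N/tau + k*N**2
# (all parameters purely illustrative; the quadratic term models fragments
# generated by debris-debris collisions)
def evolve(n0, launches=500.0, tau=25.0, k=2e-7, years=200.0, dt=0.1):
    n, history = n0, []
    for _ in range(int(years/dt)):
        n += (launches - n/tau + k*n**2) * dt
        history.append(n)
        if n > 1e7:          # cascade has run away; stop integrating
            break
    return np.array(history)

for n0 in (2e4, 4e5):        # below and above the unstable critical population
    traj = evolve(n0)
    print(f"start {n0:.0e} objects: after {len(traj)*0.1:.0f} yr -> {traj[-1]:.2e}")
```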

Even tiny debris can make a satellite inoperable. Satellites often just stop working, and nobody knows if that’s because they’ve had a debris strike, an electrical malfunction or some other fault. In ESA’s latest annual report on the debris population, they say that even if no further launches occur, the debris population will continue to expand because of the decay and fragmentation of those legacy rocket bodies and big defunct objects that we have no way of retrieving, reusing or controlling.

Debris isn’t the only hazard. There’s quite a complex system up there where hazards are impacting each other. Some orbits are now getting so congested that it’s getting very difficult for operators to avoid collisions, and they are having to manoeuvre satellites daily to avoid them. Starlink publishes their collision manoeuvre statistics and – when you plot it – you can see how it’s going up and up as they increase the size of their constellation.

But debris doesn’t advertise where it is. As Alice described, we only have a certain number of trackable objects – the other million plus are not trackable. So there’s an interplay between how crowded inoperable things are and how crowded manoeuvrable things are.

Distribution of space debris

Debris distribution An ESA animation released in 2019 showing the distribution of debris in orbit around Earth. The colours represent different object types – functional and dysfunctional satellites (red), rocket bodies (yellow), mission-related objects (green), and fragments (blue).

What impact does space weather have on debris?

Katherine Courtney: Every time we see an aurora in the night sky, it might look pretty but it means that the satellites in orbit around the Earth are being washed with energetic charged particles from the Sun. Along with the risk that a massive solar storm could knock out satellites if it was blasted in Earth’s direction, the influx of these particles increases the atmospheric drag and moves the debris in unpredictable ways. Space weather interacts with both active satellites and debris in a way that increases the uncertainties about just how many things we can safely operate up there.

This is also becoming more hazardous because constellation operators in low Earth orbit have started to introduce artificial intelligence and automated manoeuvring systems. They’ve done that because if you have 9000 satellites, you can’t employ (or don’t want to employ) 9000 people to operate them from the ground. So they have all developed automated station keeping, which works well if the goal is simply to keep each satellite in place.

But there isn’t really a system in place whereby operators announce in advance these automated manoeuvres. Yes, they will try and contact other operators on a sort of “best efforts” basis if they are going to do planned manoeuvres, but unplanned ones are a whole new hazard.

What impact can space junk have on astronomy?

Katherine Courtney: Space junk is quite a challenge for astronomers. They have facilities that have taken 10 years to build and cost billions, but they are getting streaks in their imagery and they are losing data points. It’s a real challenge to deal with that because when these telescopes were designed, we didn’t have 13,000 satellites flying around and more than 10,000 of them moving fast in low Earth orbit.

Radio astronomy is also being interfered with because satellites are transmitting signals all the time. There is some evidence they are also leaking unintentional emissions from their electrical systems, which – again – interferes with astronomy.

And what impact does space debris have on the environment?

Katherine Courtney: There is emerging evidence that when debris re-enters the Earth’s atmosphere, it deposits particulate matter into the atmosphere that we have never experienced before. Naturally occurring matter from meteorites and micrometeorites doesn’t carry the metals we’ve extracted from Earth and launched into space, which are now burning up on their way back down.

And not all objects burn up. You can find some quite scary pictures of very large things that have landed on Earth – thankfully not on anybody’s head as far as we know. They usually land in places like Australia, a long way from inhabited areas, or in the middle of the Pacific Ocean – but they’re not being controlled as they descend.

A couple of years ago, a Chinese Long March rocket body re-entered the atmosphere uncontrolled. If it had arrived 15 minutes earlier, it would have landed on New York City. All you can do is cross your fingers and hope that when objects come down, they’re not landing on a bunch of people somewhere.


What is being done – or could be done in the future – to reduce the hazards of space junk?

Alice Gorman: This is an urgent problem that we need action on. At the moment, there are many proposals and missions in testing or development to actively remove debris from orbit, but none are actually working. For new missions, however, there has been a really interesting shift.

We used to look on the atmosphere as a natural incinerator, and all the plans to get rid of stuff in orbit involved tipping it back into the atmosphere to mostly burn up. It was considered to be the logical and most harmless way to dispose of space junk. But objects don’t always burn up, and stuff still makes it to the ground.

We also now know that these aluminium and soot particulates [created by objects burning up] in the upper atmosphere are affecting the ozone layer. We thought we had solved that problem with the 1987 Montreal Protocol, when the world came together to stop the ozone layer being destroyed.

People now realize you can’t just let satellites burn up. In fact, there are now proposals, like ESA’s Zero Debris Charter, for new missions to not create any new debris – to be “debris neutral”. That’s great for current and future missions but are people actually going to do it?

There used to be a rule – don’t leave anything in orbit for 25 years and have an end of life strategy to get rid of it. That’s now down to five years, which is good. But apparently only 40 to 60% of satellite operators followed that protocol – the rest would simply do nothing to prevent their spacecraft from contributing to the debris problem.

We rely on satellite operators and launch operators complying with these international standards and norms. And when profit is at stake, I don’t think we can have any guarantee that they will actually do that.

Katherine Courtney: When I first began focusing on space debris, it sometimes felt like all we had was the United Nations (UN) long-term sustainability guidelines. They were voluntary, but countries are now bringing them into their national space law. There is increasing awareness of the issue and satellite operators are beginning to engage in those conversations differently.

ESA’s Zero Debris Charter is a great initiative because it sets timed targets and detailed technical specifications for how not to create additional debris with your missions. Unfortunately, it still calls for five-year design-for-demise as best practice, which maybe isn’t the answer. Missions should be designed for reuse and recycling. Or we need to not only not create debris, but use new materials that have less impact when they re-enter the atmosphere.

The International Telecommunication Union (ITU) [the UN agency for digital technologies] are really the only multilateral body that have any sort of binding powers. They allocate global radio spectrum and satellite orbits to ensure telecommunication operations run smoothly. They have started holding an annual sustainability conference where they get ITU delegates together to talk about how to fix the problem of space debris.

In 2025, the UN’s Committee on the Peaceful Uses of Outer Space (COPUOS) also set up the Expert Group on Space Situational Awareness – under the Working Group on the Long-term Sustainability of Outer Space Activities (LTS) – because one of the real problems is that we don’t have a clear enough picture of what is going on in orbit.

As Alice described, we can’t see the vast majority of the debris, but we also collect the data about debris through lots of non-standardized observations that are not interoperable and are made by different space agencies around the world. There are competing models that give different estimates and different forecasts.

We need to come up with a standardized way of monitoring the space environment and modelling what impact increasing numbers of spacecraft are having, so it’s great to hear that COPUOS has decided to encourage that.

There is also hope from the UN’s Summit of the Future in 2024. Action 56 in the resulting Pact for the Future proposes a fourth UN Conference on the Exploration and Peaceful Uses of Outer Space (UNISPACE IV) in 2027. It will focus on debris, debris mitigation and management, space traffic management, and how the world can cooperate more effectively in this area.

So what do we need to make these initiatives work?

Katherine Courtney: We currently don’t have an international treaty with binding rules. Different countries require different things of their licensed operators, don’t necessarily keep other countries informed of their activities, and some space objects don’t even go into the intended orbits at all. We need something like the ITU – a non-military, cross-border independent authority – that could monitor and enforce standards internationally.

A little ray of light for me is that NASA recently received an e-mail from the Chinese National Space Administration to warn them of a potential collision between a Chinese object and a NASA mission. It was the first time that had happened. Communicating to avoid collisions should be the bare minimum to ensure a more sustainable space environment.


What missions and tests have happened or are in the pipeline to deal with individual pieces of debris?

Katherine Courtney: The most advanced to date has been Japan’s Astroscale ADRAS-J mission, which has demonstrated its ability to safely approach a target object and examine it closely. Meanwhile, ESA is launching its ClearSpace-1 mission in 2029 to clear an old PROBA-1 satellite from low-Earth orbit.

These missions are tricky because the first rule is don’t make any more debris – but you have an object that is tumbling, maybe fragmenting and could be carrying fuel. You have to be very careful to prove that you have the technology that can safely capture that object. For ClearSpace-1, they are going to use a sort of robotic grappling mechanism, while Astroscale will use a magnetic solution to grab things.

The UK government has also announced further funding to not just remove one UK licensed object from orbit, but go back and remove a second. This is quite a technical feat – you have to safely capture an uncooperative debris object, lower it to a point where atmospheric drag will cause it to deorbit, and then go back and get another one, all without bumping into anything on the way.
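
For a sense of the orbital mechanics involved, the back-of-the-envelope sketch below uses the textbook vis-viva equation to estimate the delta-v a removal craft must impart to drop a debris object's perigee low enough for atmospheric drag to take over. The altitudes and the 80 km target perigee are illustrative choices, not figures from any of the missions mentioned.

```python
import numpy as np

MU = 3.986e14        # Earth's gravitational parameter (m^3/s^2)
R_EARTH = 6.371e6    # mean Earth radius (m)

def vis_viva(r, a):
    """Orbital speed at radius r on an orbit with semi-major axis a."""
    return np.sqrt(MU * (2.0/r - 1.0/a))

def deorbit_dv(alt_km, target_perigee_km=80.0):
    """Delta-v (m/s) of a single retrograde burn from a circular orbit at
    alt_km that lowers the perigee to target_perigee_km, where drag takes over."""
    r1 = R_EARTH + alt_km*1e3
    r2 = R_EARTH + target_perigee_km*1e3
    a_transfer = (r1 + r2) / 2.0
    return vis_viva(r1, r1) - vis_viva(r1, a_transfer)

for alt in (400, 800, 1200):
    print(f"{alt:>4} km circular orbit: ~{deorbit_dv(alt):.0f} m/s to deorbit")
```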

China and Russia have also demonstrated their ability to safely approach objects but they haven’t published the outcomes of those missions. China’s efforts have been defence-focused but they have also started to look at commercial operations in that area. In fact, there are quite a few companies now really interested in being involved in this space.

Do some craft have heritage value?

Vanguard 1, the world’s first solar-powered satellite

Alice Gorman: There have been proposals to test some debris removal technologies on older spacecraft on the basis that they might one day be a risk and they’re old so nobody cares. But to me many such craft have incredible heritage value.

People sometimes say to me, “But Alice, we can’t leave them there, they’re junk”. However, if they’re not currently collision risks, we don’t have to do anything about them. Instead we can assess their cultural heritage value. We can rank the objects so we can say, if something has to be removed, this object has a lower value than this one.

So, from my perspective, every nation needs to have a look at its heritage assets in orbit, assess their significance, and from that point decide what needs to be done. And in heritage terms you don’t do anything until you need to – the place where something is, is an important part of its cultural significance.

You could argue that the definition of space junk is something which has no use, but these objects actually do have a purpose. Their purpose is to connect people to their history in space and to space as a place. I want to see all proposals for active debris removal incorporate cultural heritage management.

How long might it take before the orbits get so crowded that we really just can’t put anything else into orbit?

Katherine Courtney: I’ve not seen a forecast on how long this will take, but currently people are launching satellites weekly and in batches. According to ITU filings, there are over a million permissions to operate in certain spectrum on file now. So, over the next 10 years, a million more satellites could theoretically be launched, which could be problematic.

Imagine a motorway where everybody can drive at whatever speed they want with no indicator lights. If your car broke down and you just left it in the middle of the road, that would soon become an unusable environment. Space orbits would soon be like that.

But we can use orbital capacity more efficiently. It’s just that it requires a great force of global collaboration to solve that problem because space, by definition, is a place without national borders.

My view is that 90% of space activity today is commercial. Businesses have to manage these hazards and risks or else they will close down. In fact, I see a day where something happens that makes everybody sit up. I call it the Exxon Valdez moment, a disaster that is small enough to hurt some operators financially, but not big enough that we have a Kessler syndrome and we all have to wait 200 years before we can use that space again. I think that’s when the economic incentives will be there for people to actually start collaborating.

Five years ago you never heard an operator say regulation was a good thing – I now regularly attend events where operators ask for regulation. So I think we can solve these problems.

Are there any alternative approaches to avoid more space debris?

Alice Gorman: Although we depend on space, we are in fact neglecting terrestrial infrastructure. The Starlink satellites, for example, have been strongly promoted because they promise to provide communication to remote places – but only because there has been no investment in terrestrial infrastructure. We can choose to pull some functions back from space. We’re not completely committed to space for all these functions, and we shouldn’t be so dependent on it.

  • This article is based on the 10 November 2025 Physics World Live event, which you can watch on demand here

International scientists head into the fast-lane of Denmark’s burgeoning quantum ecosystem

Shared purpose Maria Cerdà Sevilla and her colleagues at Quantum DTU in Lyngby are shaping the trajectory of technology translation and commercial innovation in quantum science. (Courtesy: Bax Lindhardt/DTU)

Denmark, it seems, is increasingly walking the walk, not just talking the talk, when it comes to quantum science and innovation. Structurally, the country’s “quantum ecosystem” is on a roll, with more than 75 organizations now actively engaged around a shared national mission via the Danish Quantum Community, a network of start-ups, scale-ups, incumbent technology companies, investors, research institutions and government agencies.

Money is greasing the wheels. In October last year, Denmark launched 55North, the world’s largest venture-capital fund dedicated exclusively to quantum technologies and applications. Headquartered in Copenhagen and backed by the Novo Nordisk Foundation and the Export and Investment Fund of Denmark (EIFO), the fund opened with a capital injection of €134 million (and a target base of €300 million) to back high-growth companies in the nascent quantum supply chain – within Denmark and beyond.

Workforce development is also mandatory – a strategic acknowledgement that Denmark must scale the “quantum talent pipeline” if it is to translate advances in fundamental science and applied R&D into next-generation quantum technologies. Capacity-building is well under way as Danish universities work with industry and government partners to train a skilled and diverse quantum workforce of “all the talents”, with recruitment of international scientists and engineers seen as fundamental to Denmark’s long-term quantum ambitions.

Joined-up thinking in quantum

A case study in this regard is Maria Cerdà Sevilla, head of Quantum DTU, the Center for Quantum Technologies at the Technical University of Denmark (DTU). Located in Lyngby, just north of Copenhagen, Quantum DTU coordinates the research activities of around 300 quantum scientists, working across 12 departments at DTU and focused around five main research themes: quantum computing, quantum communications, quantum sensing, advanced materials, and cross-cutting initiatives in nanofabrication and next-generation quantum chips.

“The goal is to ensure that DTU is not merely participating in quantum science but also shaping the trajectory of technology translation and commercial innovation in the field,” explains Cerdà Sevilla. Put another way: Quantum DTU is all about outcomes versus three broad-scope metrics: scientific depth (world-class research in quantum physics and engineering); building the quantum ecosystem (integrating diverse research disciplines, developing infrastructure, plus education and training); and, finally, readiness for market deployment (meaning responsible and scalable implementation of quantum technologies).

“Our success will be defined not only by high-impact publications and prototypes, but whether DTU – and, by extension, Denmark – has established ‘durable capacity’ in quantum technologies and applications,” says Cerdà Sevilla.

It’s better to travel

For her part, Cerdà Sevilla is the quintessential pan-European scientist, albeit taking the “road less-travelled” to her role at Quantum DTU. After completing a PhD in particle physics at the University of Liverpool, UK, she moved on to postdoctoral research positions in Germany – at Humboldt University of Berlin and the Technical University of Munich – before a mid-career pivot into research strategy and innovation management.

Copenhagen calling Talented quantum scientists and engineers from across Europe are eyeing career opportunities in Copenhagen, one of the world’s most liveable, sustainable cities. (Courtesy: Daniel Rasmussen/A State of Denmark)

“While I no longer do research myself, I work with quantum scientists every day at DTU,” explains Cerdà Sevilla. That engagement extends to other stakeholders, including policy-makers, funding agencies, manufacturers in the quantum supply chain, as well as industrial end-users looking to deploy quantum technologies. “My role is essentially about leadership and strategic alignment,” she adds. “That means defining research priorities, understanding what we’re doing at a granular level, and ensuring Quantum DTU’s scientific efforts translate into a joined-up action plan across diverse specialisms.”

One of the most powerful aspects of Quantum DTU – indeed the wider quantum sector in Denmark – is this sense of shared purpose. “The quantum community here is internationally connected and recognized as well as being locally cohesive,” notes Cerdà Sevilla. “As an international scientist, it’s a given that you will get to conduct leading-edge research here; at the same time, you will also have a voice in shaping priorities at the departmental, institutional and even national level.”

By extension, institutional and interpersonal trust are defining features of Denmark’s research culture, enabling scientific collaborations and long-term initiatives to take shape organically without undue friction or hierarchical blockers. That same mindset informs life outside the laboratory and the workplace.

“The work-life balance in Denmark is great, though productivity is mandatory,” says Cerdà Sevilla. “Danish people work very hard, but they also understand the need for downtime with family and friends to ensure creativity and clarity of thinking. Overall, there’s a culture of psychological safety in the research community – an implicit acknowledgement that teams function best when individuals feel secure with their colleagues and management.”

Heading north

Another international scientist making an out-sized impact in the Danish quantum community is Francesco Borsoi, an assistant professor of physics and spin qubit pilot-line lead within the Novo Nordisk Foundation Quantum Computing Programme (NQCP), part of the renowned Niels Bohr Institute (NBI) at the University of Copenhagen (KU).

The NQCP is a 12-year collaborative research effort, backed with €200 million of funding through to 2035, to develop fault-tolerant quantum computing hardware and quantum algorithms for chemical and biological challenges in the life sciences. Underpinning the programme is a technology-agnostic approach to hardware development and the infrastructure required to support it (currently implemented across four qubit pilot lines).

“Right now, my research at NBI explores the development, control and scaling aspects of solid-state quantum devices and investigation of the properties that may enable universal quantum computing,” explains Borsoi. While his focus, in large part, is on quantum-confined spins in semiconductor quantum dots, Borsoi works closely with the other three NQCP pilot-line teams developing platforms based on superconducting, photonic and neutral-atom technologies.

As an assistant professor, Borsoi also plays a proactive role in training the next generation of quantum scientists and engineers. Notably, he is the lead creator of a hands-on experimental course on advanced qubit technologies – part of a joint KU/DTU Masters programme in quantum information science that’s helping Denmark to scale its quantum workforce.

Here for the long term

Like Cerdà Sevilla at Quantum DTU, Borsoi’s back-story reflects the pan-European mobility of scientific talent. He received his MSc in condensed-matter physics from the University of Pisa, Italy, in 2016, before moving on to QuTech at the Delft University of Technology, The Netherlands, where he completed a PhD in applied physics (on semiconductor/superconductor quantum heterostructures) followed by three years of postdoctoral research (and a shift in direction to focus exclusively on semiconductor quantum-dot qubits).

After six years at QuTech, Borsoi wasn’t actively seeking a move to another institution – let alone another country – but was attracted by the NQCP opportunity and, as he puts it, “the chance to build from the ground up and be part of something this ambitious”.

He’s been in Copenhagen for 18 months and has settled well, both within NBI and outside. “Day to day,” he says, “I get to work with talented colleagues and students across NBI and KU, plus I get to develop my career in one of the world’s most liveable, sustainable cities. Five-star food scene, amazing architecture, lots of green space and excellent public transport – what’s not to like?”

Pointing the way Francesco Borsoi moved to Copenhagen for “the chance to build from the ground up and be part of something this ambitious” within the Novo Nordisk Foundation Quantum Computing Programme at the Niels Bohr Institute. (Courtesy: Daniel Rasmussen/A State of Denmark)

Back in the laboratory, meanwhile, Borsoi also engages extensively with domain experts working on the three other qubit pilot lines – a systematic and collaborative research model that underpins NQCP’s approach to quantum science. “The core enabling technologies may differ,” notes Borsoi, “but many of the design, engineering and scalability challenges are common to all the pilot lines. I guess we all talk the same language when it comes to the NQCP mission.”

For Borsoi, the transition to NBI and Denmark’s quantum community could hardly have gone better and already feels like a long-term commitment. “Government, private equity and philanthropic foundations are all making big investments in quantum,” he concludes, “so there’s no shortage of opportunities in Denmark for talented quantum scientists and engineers seeking to develop their careers in a university or industry setting.”

  • Visit Science Hub Denmark for more information on quantum job opportunities in Denmark. This nationally coordinated initiative aims to enhance the global visibility of Danish research and career opportunities in the natural sciences, engineering and life sciences. Delegates attending the APS Global Physics Summit (15-20 March) in Denver, Colorado, can find out more by visiting the Danish Quantum Pavilion at the co-located industry exhibition.


At low exciton density, a superfluid suddenly stops flowing

Physicists at Columbia University in the US say they may have found evidence for a phenomenon in which a superfluid suddenly stops flowing inside a solid-state material. If confirmed, the finding – made in experiments using two atom-thin layers of graphene – could be the first superfluid-to-insulator phase transition ever observed in a naturally occurring material.

“For the first time, we’ve seen a superfluid undergo a phase transition to become what appears to be a supersolid,” says Cory Dean, who led the new study. “It’s like water freezing to ice, but at the quantum level.”

Supersolids are a hypothetical state of matter that can be both liquid- and solid-like at the same time – that is, they have a crystal structure and superfluid properties. In this description, first put forward by physicists in the 1970s, the crystal lattice and superfluidity are all part of the same phase coherent ground state and are not two separate systems, explains Dean.

In the new work, the researchers studied graphene, which is a sheet of carbon just one atom thick. When two sheets of graphene are placed atop each other, they can be manipulated so that one layer contains extra electrons and the other extra holes.

The electrons and holes can combine to form quasiparticles known as excitons, which can then travel through the graphene bilayer as a superfluid when a strong magnetic field is applied.

Graphene, sometimes called the “wonder material”, is ideal for such fundamental physics studies because its properties can be fine-tuned by adjusting parameters like temperature, the applied electromagnetic fields and even the distance between the layers.

Controlling the density of excitons

In their experiments, Dean and co-workers were able to move the excitons in their bilayer samples by applying oppositely directed electric fields to the two layers. This, explains Dean, causes the positive and negative parts of each exciton to be pulled in the same direction, allowing the team to drive and detect exciton flow indirectly – normally a difficult task, because excitons are electrically neutral and do not respond directly to ordinary electrical measurements. The same ability to control the layer imbalance allowed the team to tune the exciton density.

Thanks to their technique, which they detail in Nature, the researchers found that at high densities, the excitons behaved like a superfluid. At lower densities, however, these excitons “froze” and the superfluid became insulating. Even more striking, says Dean, is that warming the system restored the superfluid flow. “This result suggests that a supersolid-like phase emerges spontaneously, driven solely by particle interactions.”

The Dean lab has been studying the superfluid exciton phase for many years, though most of its work to date has focused almost exclusively on the “layer balanced” condition that occurs when there is an equal density of electrons and holes in the two graphene layers. More recently, the team began to study the layer-imbalanced regime, which has been much less explored in experiments.

“To our surprise we found that under very large imbalance, the exciton transitions to an insulating state beyond some critical imbalance,” says Dean. “This observation alone could have many trivial explanations, but the real shock came when we found that upon heating the system, the superfluid is recovered.”

This behaviour, which has been discussed in some theoretical literature, has no precedent in any existing experiments on superfluidity, he explains, making it something researchers should try to understand better.

“To view the situation in the opposite sense: when cooling a fluid and it transitions to a superfluid, the superfluid is already in a thermodynamic ground state. So why upon further cooling, should it undergo a transition to any other phase?” asks team member Jia Li. “We eventually realized that in our experiment, the role of layer imbalance is really a tuning of the exciton density, and the insulating phase onsets when the exciton density crosses a critical value,” he tells Physics World. “Once we had adopted this view, understanding the observed phase transition, and how it fits in with existing theoretical predictions, fell into place.”

A true supersolid or not?

While the researchers say they have firmly established the existence of an insulating state within the superfluid phase diagram, whether this state is truly a supersolid or some other as-yet unknown quantum ground state remains less clear. The challenge with understanding an insulating material is that it becomes more difficult to probe its behaviour, says Dean. “This is made even more difficult by the experimental requirements to stabilize the insulating phase: we need ultraclean samples, low temperatures and high magnetic fields.”

And the difficulties do not end there: “having to work with strong magnetic fields also limits what experimental probes we can use,” he adds. “To progress further, we need to develop new tools to probe the insulating state – for example, we are developing a scan probe technique that we hope can directly image and spatially map the exciton condensate.”

“We have also been working on realizing this condensate in material systems with strong interactions that do not require magnetic fields,” he reveals.

Wanted: an electrical grid that runs on 100% renewable energy

With the conflict in Iran and the resulting closure of the Strait of Hormuz pushing oil and gas prices upwards, the prospect of a world that runs on 100% renewable energy seems even more attractive than usual. Before we can get there, though, experts in a range of fields say we’re going to need to solve a few physics problems – including one that goes straight back to Maxwell’s equations.

Unlike energy that comes from processes such as burning fossil fuels, sending water downhill through turbines, or harnessing the heat from nuclear reactions, the supply of wind and solar energy varies in ways we cannot control. To complicate matters further, consumer demand also varies, and the two variations “do not necessarily match in time or in space” observes Michael Jack, a physicist at the University of Otago in New Zealand.

Speaking on Monday at the American Physical Society’s Global Physics Summit in Denver, Colorado, Jack explained that there are two ways of making sure demand matches supply in an all-renewable grid. The first is to smooth out the mismatch over time, for example by storing energy in batteries and using it when the wind isn’t blowing or the Sun isn’t shining. The second is to smooth out demand over space, for example by creating a grid that connects large numbers of consumers. “It’s very unlikely that all consumers’ demand will peak at the same time,” Jack noted.

To understand how peak demand scales with the number of consumers, Jack and his colleagues are using tools from an area of mathematics called extreme value theory. As its name implies, the goal of extreme value theory is to understand the probability of events that are either extremely large or extremely small compared to the norm. Once we can do that, Jack told the APS audience, we’ll be able to build renewable energy systems that deal efficiently with periods of peak demand.
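
A toy simulation (with an arbitrary demand distribution, not real consumption data) shows the pooling effect Jack described: the ratio of peak to average aggregate demand shrinks as more independent consumers are connected, and extreme value theory is what describes the statistics of that peak.

```python
import numpy as np

rng = np.random.default_rng(1)

def peak_to_mean(n_consumers, n_periods=10000, n_trials=10):
    """Ratio of peak to mean aggregate demand over n_periods time slots,
    with each consumer's demand drawn independently (illustrative gamma law)."""
    ratios = []
    for _ in range(n_trials):
        demand = rng.gamma(shape=2.0, scale=0.5, size=(n_periods, n_consumers))
        total = demand.sum(axis=1)           # aggregate demand per period
        ratios.append(total.max() / total.mean())
    return float(np.mean(ratios))

for n in (1, 10, 100, 1000):
    print(f"{n:>5} consumers: peak/mean demand ~ {peak_to_mean(n):.2f}")
```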

“The opposite of quantum mechanics”

Another speaker in the same session, Charles Meneveau, is working on the supply side of the variability problem. As a fluid dynamics expert at Johns Hopkins University in Maryland, US, his goal is to understand how turbulent gusts of wind lead to fluctuations in the power output of wind farms – a problem he described as “the opposite of quantum mechanics” because “it’s intuitive and we feel like we understand it, but we can’t compute it”.

Meneveau and his collaborators began by building a micro-scale wind farm, sticking it in a wind tunnel and monitoring how it behaved. More recently, they’ve added computer simulations to the mix, generating around a petabyte of simulated turbulence data.

As expected, these studies showed that the power output of an array of turbines fluctuates much less than the output of a single turbine. However, an array’s output does spike at intervals set by the rotation frequency of the turbine blades, and also when gusts of wind propagate from one turbine to the next. Meneveau has developed a model that can predict this second type of spike, and he’s now working to extend it to floating offshore wind farms, which experience watery turbulence as well as the windy kind.
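
The supply-side analogue can be sketched in the same spirit (a statistical toy, not Meneveau's wind-tunnel or simulation data): when gusts are partially correlated across a farm, the total output fluctuates less than a single turbine's, but not as little as fully independent turbines would suggest.

```python
import numpy as np

rng = np.random.default_rng(2)

n_samples, n_turbines, rho = 100000, 25, 0.3   # rho: illustrative gust correlation

common = rng.normal(size=n_samples)            # farm-wide gust component
local = rng.normal(size=(n_samples, n_turbines))
# Each turbine: mean output 1.0 plus 20% turbulent fluctuations, partially shared
power = 1.0 + 0.2*(np.sqrt(rho)*common[:, None] + np.sqrt(1.0 - rho)*local)

single_cv = power[:, 0].std() / power[:, 0].mean()
farm = power.sum(axis=1)
farm_cv = farm.std() / farm.mean()

print(f"single turbine relative fluctuation: {single_cv:.3f}")
print(f"{n_turbines}-turbine farm:                     {farm_cv:.3f}")
print(f"uncorrelated limit would be:          {single_cv/np.sqrt(n_turbines):.3f}")
```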

Everything under control

The third speaker in the session, Bri-Mathias Hodge, is an energy systems engineer at the University of Colorado, Boulder. He’s interested in ways of ensuring that renewable energy systems remain stable in the face of disturbances that could otherwise send the grid into a tailspin, leading to blackouts like the one that struck the Iberian Peninsula in 2025.

Hodge explained that in traditional grids dominated by thermal energy sources, one of the main ways of maintaining stability is to use devices called synchronous machine generators. These are essentially large rotating masses that all spin at the same rate: the frequency of the grid, which in the US is 60 Hz. When coupled to an AC power system, they give the system a degree of inertia, enabling it to resist potentially damaging fluctuations in the supply of electricity.
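
The value of that inertia can be put into numbers with the textbook aggregated swing equation (with illustrative figures, not ones from Hodge's talk): immediately after a grid loses a fraction ΔP of its generation, the frequency falls at an initial rate of f₀ΔP/(2H), where H is the system inertia constant in seconds.

```python
# Initial rate of change of frequency (RoCoF) after a sudden generation deficit,
# from the aggregated swing equation: df/dt = -f0 * dP / (2*H).
f0, dP = 60.0, 0.05        # 60 Hz grid, sudden 5% generation deficit (illustrative)

for H in (6.0, 2.0):       # inertia constant: thermal-heavy vs low-inertia grid
    rocof = f0 * dP / (2.0 * H)
    print(f"H = {H:.0f} s: frequency falls at {rocof:.2f} Hz/s "
          f"({0.5/rocof:.1f} s to drop 0.5 Hz if nothing reacts)")
```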

These devices have existed for 100 years, and Hodge says our current power system is designed around them. But because renewable energy generation is primarily DC rather than AC, an all-renewable grid will require a fundamentally different approach. “We have to reimagine what the system looks like when we have 100% renewable energy,” Hodge told the APS audience.

The solution, Hodge explained, is to replace synchronous machine generators with electronic inverters. These devices have the advantage of reacting much faster to system fluctuations. However, they also come with a big disadvantage. Unlike massive spinning objects that follow ponderous Newtonian physics, they don’t react automatically. They have to be told, and Hodge says that will require completely different control systems than the ones used in today’s electrical grids.

Return of Maxwell’s equations

While studying this problem, Hodge realized that the engineers who designed electrical grids back in the 1960s made an important simplifying assumption. Because they were working with a system composed entirely of thermal, synchronous generators (and because they were doing all their calculations with slide rules), they treated voltage as being separate from frequency, even though the two are inherently coupled. In other words, they treated the grid as an electromechanical network rather than an electromagnetic one.

To understand how this simplification plays out in a renewable-dominated grid, Hodge and colleagues went back to Maxwell’s equations. Specifically, they focused on what these equations have to say about the momentum associated with a mass that is moving around in an electromagnetic field. In an electrical grid controlled by large inertias from thermal generators, this momentum isn’t important. But in a renewable-dominated grid, Hodge says it can’t be ignored.

He and his colleagues have therefore developed a new model of electric power networks that highlights the significance of this electromagnetic momentum and restores the link between frequency and voltage dynamics. Ultimately, though, Hodge says that avoiding blackouts in an all-renewable energy system will require advances in simulation technologies. “We need to improve our decision-making processes on a whole range of timescales, from seconds to years,” he concluded.

Ask me anything: Giannis Zacharakis – ‘The ability to pursue questions that genuinely interest you is a privilege’

Giannis Zacharakis is a research director at the Institute of Electronic Structure and Laser (FORTH) in Greece, where he leads the Laboratory for Biophotonics and Molecular Imaging. Zacharakis has served as president and vice-president of the European Society for Molecular Imaging. His main focus is on developing key enabling technologies for imaging biological processes in living systems.

Zacharakis is also the CEO of the precision photonics spin-off Kymatonics. The company recently secured a highly competitive €2.1m European Innovation Council (EIC) Transition Open grant to advance the development and commercialization of their innovative wavefront-shaping objective lens.

What skills do you use every day in your job?

My everyday work involves both hard and soft skills, which are equally important for a successful career.

At its core, my work is about asking questions and defining the path to discovery, through scientific knowledge and rigour. This requires being able to break down complex physical and biological problems into manageable and measurable components under certain hypotheses. Much of my day therefore involves analytical thinking and judgement: evaluating whether an observed effect is physically meaningful or an artefact of instrumentation or data processing. That defines the path forward.

Problem solving constantly requires creativity and thinking out of the box, because experiments rarely behave exactly as planned. You need patience, persistence and the ability to stay calm when instruments misbehave or data contradict expectations.

Communication is another central skill. I regularly explain technical concepts to students, collaborators from other disciplines, and biologists or clinicians who may not share the same vocabulary. Translating physics into accessible language, without oversimplifying the science, is something I consciously practise and it takes time and effort to achieve.

Project management also plays a surprisingly large role. Co-ordinating experiments, supervising students, meeting deadlines for proposals or manuscripts, and balancing long-term research goals with short-term deliverables all require structured planning.

Finally, mentoring is an important part of my routine. Guiding students and young scientists through experimental design, encouraging independent thinking, and helping them develop scientific confidence is both a responsibility and an integral component of academic work.

Essentially, while physics provides the foundation, my job relies on a blend of analytical rigour, practical problem-solving, communication and leadership.

What do you like best and least about your job?

What I value most is intellectual freedom: the ability to pursue questions that genuinely interest you is a privilege. There is something deeply satisfying about seeing a concept move from hypothesis to experimental evidence. Even incremental progress can feel meaningful when it clarifies a mechanism or resolves ambiguity.

I also appreciate the interdisciplinary environment. Working at the interface of physics, biology and biomedicine forces me to continuously learn and think beyond boundaries. It prevents intellectual stagnation and keeps curiosity alive.

Mentoring students is another highlight. Watching someone gain confidence, moving from following instructions to proposing their own ideas, is deeply rewarding. Research training is not only about technical knowledge; it is also about developing judgement and rigour.

On the more challenging side, uncertainty is a constant companion. Funding cycles, competitive grant applications and proposal rejections, and the unpredictability of research outcomes can be demanding. Not every idea works, and not every effort translates into immediate output. Maintaining momentum despite setbacks requires persistence and resilience.

Administrative responsibilities can also fragment time and reduce deep focus. Balancing research, supervision and institutional duties often requires careful prioritization.

What do you know today that you wish you knew when you were starting out in your career?

I wish I had understood earlier that uncertainty is not a sign of inadequacy but is the natural state of research. Early in my career, I expected clarity to come quickly if I worked hard enough. In reality, meaningful progress often requires extended periods of ambiguity. Learning earlier to tolerate that and even see it as productive would have reduced unnecessary self-doubt.

I also underestimated the importance of communication. Being technically correct is not enough; ideas need structure, clarity and narrative. Writing well and presenting clearly are not secondary skills; they are core scientific tools.

Another lesson is that collaboration is essential. Scientific progress increasingly happens at disciplinary boundaries, with impactful discoveries emerging at the interfaces between fields. Engaging with people who think differently challenges assumptions and strengthens work.

Finally, remember that career paths are less rigid than they appear. There is rarely a single “correct” trajectory. Developing transferable skills, such as analytical thinking, adaptability, mentoring and project management, provides resilience across different opportunities.

I would tell my younger self to focus less on short-term milestones and more on building depth, clarity of thought and professional relationships. Those foundations endure longer than any single milestone.

Single metasurface could generate record numbers of trapped neutral atoms

Physicists in China have demonstrated that a structure called an optical metasurface can individually trap up to 78,400 neutral atoms – a promising development in efforts to build a large-scale quantum computer. The method, which is similar to one demonstrated independently by a team at Columbia University in the US, could help overcome a troublesome bottleneck for computers that use neutral atoms as their quantum bits (qubits).

Arrays of trapped neutral atoms are widely employed in physics research, and they are a promising platform for quantum computing. Their main drawback is scalability, explains physicist Zhongchi Zhang, who co-led the new study together with his Tsinghua University colleague Xue Feng. The components normally used to make such arrays, such as spatial light modulators (SLMs) and acousto-optic deflectors (AODs), can only create around 10,000 atom traps at any one time, and are thus limited to a maximum of 10,000 atomic qubits.

Flat optical surfaces made up of 2D nanostructure arrays

In their work, which is detailed in Chinese Physics Letters, Zhang and colleagues replaced SLMs and AODs with metasurfaces – two-dimensional arrays of artificial nanostructures that manipulate light in much the same way as traditional optics, but with far less bulk. To do this, they used a method known as a weighted Gerchberg-Saxton algorithm to design a metasurface made up of nanoscale pillars that can transform a single input laser beam into a 280 × 280 array of focal spots. They then constructed this metasurface from silicon nitride using electron-beam lithography and reactive ion etching. Both methods are compatible with standard complementary metal–oxide–semiconductor (CMOS) manufacturing techniques and are thus highly reproducible.
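
For readers curious about the design step, below is a minimal sketch of a weighted Gerchberg-Saxton loop in Python, assuming a simple Fourier relationship between the metasurface plane and the focal plane. The grid size, iteration count and the helper name weighted_gs are illustrative only, not taken from the paper.

import numpy as np

def weighted_gs(target, n_iter=50):
    """Return a phase mask whose far-field intensity approximates `target`."""
    weights = target.astype(float).copy()
    phase = 2 * np.pi * np.random.rand(*target.shape)  # random starting phase
    spots = target > 0
    for _ in range(n_iter):
        far = np.fft.fft2(np.exp(1j * phase))          # propagate to focal plane
        amp = np.abs(far)
        # Reweight: boost spots that came out too dim, damp ones too bright,
        # which is what pushes the array towards uniform intensity
        weights[spots] *= amp[spots].mean() / np.maximum(amp[spots], 1e-12)
        # Impose the weighted target amplitude, keep the computed phase
        constrained = weights * np.exp(1j * np.angle(far))
        phase = np.angle(np.fft.ifft2(constrained))    # back to the mask plane
    return phase

# Toy stand-in for the 280 x 280 tweezer grid
target = np.zeros((256, 256))
target[::32, ::32] = 1.0
mask = weighted_gs(target)

The reweighting step is the "weighted" part of the algorithm: a plain Gerchberg-Saxton loop converges to the right spot positions but with noticeable spot-to-spot intensity variation, which matters when every tweezer must hold a single atom.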

The result is a set of nanoscale, light-manipulating, pixel-like structures that act like a superposition of tens of thousands of flat lenses. When a laser beam hits these “lenses”, they produce a unique pattern that contains tens of thousands of focal points. As long as the laser light is intense enough, each of these focal points can be used to trap and manipulate atoms via a well-established technique called optical tweezing.
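
One way to picture this superposition of flat lenses is as a phase-only mask combining the paraxial lens profiles of all the target spots; a schematic form (not the team's published design) is

\phi(x, y) = \arg \sum_{j} \exp\!\left[ -\frac{i k}{2 f} \left( (x - x_j)^2 + (y - y_j)^2 \right) \right],

where k is the wavenumber, f the focal length and (x_j, y_j) the position of the j-th focal spot.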

Zhang explains that the main advantage of trapping atoms this way is that the metasurface generates the array of optical tweezers on its own, without the need for additional bulky and expensive optical components such as microscope objectives to focus the light. Another benefit is that such arrays are very robust to high laser intensities, which are a prerequisite when the goal is to trap hundreds of thousands of atoms. Indeed, Zhang says that arrays of this type can handle powers several orders of magnitude higher than is possible with arrays made using SLMs and AODs. The intensity of the light is also highly uniform (90.6%) across the array, and individual beams feature an Airy disk-like profile with an average first dark radius of around 1.017 µm – parameters that Zhang says are “ideal for trapping single atoms”.
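
For reference, the first dark ring of an ideal Airy pattern lies at

r_1 = \frac{0.61\,\lambda}{\mathrm{NA}} = \frac{1.22\,\lambda f}{D},

a standard diffraction result; the wavelength and numerical aperture behind the team's quoted 1.017 µm figure are not given in this article.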

Improving fault-tolerant quantum computing

“Our work addresses the critical need for scalable physical qubit arrays required for improving ‘fault-tolerant’ quantum computing and making it more robust to errors,” Zhang tells Physics World. “Since quantum error-correcting codes may call for hundreds of physical qubits to build a single logical qubit, scalability here becomes paramount.”
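
To put rough numbers on that, in the widely studied rotated surface code a distance-d logical qubit uses about 2d² − 1 physical qubits, so for example

d = 15: \quad 2d^2 - 1 = 449 \text{ physical qubits per logical qubit},

consistent with the "hundreds of physical qubits" Zhang mentions. The surface code is used here only as a familiar benchmark; it is not necessarily the code the team has in mind.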

Researchers at Columbia University also recently demonstrated an atom-trapping array that replaced SLMs and AODs with flat optical metasurfaces. But whereas the Columbia team managed to create 360,000 tweezers with extreme pixel efficiency (around 300 pixels/tweezer, with over 95% uniformity), the Tsinghua University group prioritized the array’s robustness at higher laser power, achieving around 1354 pixels/tweezer. Both studies have validated the use of metasurfaces as a scalable platform beyond the limitations imposed by AODs and SLMs, says Zhang.
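
A quick back-of-envelope comparison using the figures quoted above: the Tsinghua array of 280 × 280 = 78,400 tweezers at roughly 1354 pixels/tweezer implies about 78,400 × 1354 ≈ 1.06 × 10⁸ metasurface pixels, while 360,000 tweezers at around 300 pixels/tweezer implies 360,000 × 300 ≈ 1.08 × 10⁸ pixels. The two designs spend comparable pixel budgets but allocate them very differently, trading tweezer count against power handling.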

Spurred on by their preliminary results, Zhang and colleagues report that they are now fabricating a 19.5 mm-diameter metasurface designed to generate approximately 18,000 optical trapping sites. Their goal is to place this metasurface outside the vacuum chamber that contains the trapped atoms. “Such an external configuration represents a significant departure from conventional approaches and is expected to enable the trapping of over 10,000 atoms, surpassing current records while substantially simplifying the experimental setup,” Zhang explains.

The team is also developing a next-generation integrated architecture in which metasurfaces will replace the fluorescence imaging microscopes used to characterize trapped atoms, as well as the optical tweezer arrays used to trap them. “This approach aims to create a completely new system paradigm for neutral-atom quantum computing that eliminates the need for traditional bulky optics, enabling unprecedented compactness and scalability for future quantum processors,” Zhang says.

Physicists demonstrate long-predicted exotic magnetic phases in 2D material

Physicists in the US and Taiwan have performed new experiments that verify long-standing theoretical predictions of how long-range magnetic order can emerge in atomically thin materials. Led by Edoardo Baldini at the University of Texas at Austin, the researchers showed how the transformation occurs through two distinct phase transitions – possibly paving the way for new generations of ultracompact magnetic materials.

Atomically thin two-dimensional (2D) materials are widely studied for their diverse electrical, optical, mechanical and thermal properties. So far, however, their magnetic properties have generally remained far more elusive. Underlying the problem are inevitable thermal fluctuations, which make it extremely difficult to sustain magnetic order over distances larger than atomic scales.

For decades, theorists have investigated a possible exception to this rule in “2D XY” systems, which feature flat arrays of spins that can rotate continuously within the plane and interact with neighbouring spins. One particularly interesting extension of this model describes how a phase transition can occur when these spins become locked into one of six preferred directions, corresponding to the symmetry of the crystal lattice.

“In the 1970s, theoretical work showed that 2D XY magnetic systems with this six-fold anisotropy could exhibit an unusual sequence of phase transitions described by the six-state ‘clock model’, including an intermediate Berezinskii–Kosterlitz–Thouless (BKT) phase,” Baldini explains. “These ideas became central to the theory of low-dimensional magnetism.”
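
In standard notation, the model Baldini describes is the 2D XY Hamiltonian supplemented by a six-fold anisotropy term, something like

H = -J \sum_{\langle i,j \rangle} \cos(\theta_i - \theta_j) - h_6 \sum_i \cos(6\theta_i),

where θ_i is the in-plane spin angle at site i, J the nearest-neighbour coupling and h_6 the anisotropy strength; when h_6 dominates, the spins lock to six discrete directions, giving the six-state clock model. This is the textbook form, not a formula reproduced from the paper itself.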

Since these theories emerged, however, such effects have proven far more challenging to observe in real 2D materials.

Verifying the predictions

To tackle this challenge, Baldini’s team turned to a nonlinear optical microscopy technique based on second-harmonic generation, in which a material probed by intense light at one frequency emits secondary light at twice that frequency. Crucially, the polarization of this secondary light is highly sensitive to magnetic behaviour. This allowed the researchers to examine magnetic order in the atomically thin antiferromagnet nickel phosphorus trisulphide (NiPS3) without disrupting the system with invasive electrical contacts.
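
Schematically, second-harmonic generation is governed by the material's second-order susceptibility, with the induced polarization at the doubled frequency given in the textbook form by

P_i(2\omega) = \varepsilon_0 \sum_{j,k} \chi^{(2)}_{ijk}\, E_j(\omega)\, E_k(\omega).

Because the tensor χ⁽²⁾ reflects the symmetries of the crystal and of its magnetic order, the polarization of the emitted light acts as a sensitive, contact-free probe of that order.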

“By tracking how the optical response evolves with temperature, we were able to directly follow successive magnetic phase transitions and determine the universality class of the emergent magnetic phases,” Baldini explains. “In addition, polarization-resolved measurements allowed us to reconstruct the symmetry of the magnetic order parameter.”

As the researchers cooled the material, their measurements revealed two key phase transitions – each occurring suddenly below a distinct critical temperature. “The first transition marks the onset of a BKT phase, an unusual state in which magnetic correlations extend over long distances without forming conventional long-range order,” Baldini says.

In this phase, the material forms bound pairs of vortices and antivortices: topological defects in the spin field triggered by thermal fluctuations. Within these swirling patterns, spins collectively curl around single points, either in clockwise or anticlockwise directions.
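
In the usual BKT language, the clockwise or anticlockwise character of each defect corresponds to an integer winding number of the spin angle θ around the defect core:

\oint \nabla\theta \cdot d\boldsymbol{\ell} = 2\pi n, \qquad n = +1 \ \text{(vortex)}, \quad n = -1 \ \text{(antivortex)}.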

At higher temperatures, these swirling patterns are isolated and can roam freely through the material, disrupting the emergence of long-range magnetic order. But when vortices and antivortices are bound together, their disruptive influences largely cancel each other out, allowing spin correlations to persist over longer distances while still remaining sensitive to thermal fluctuations.

As the researchers cooled the NiPS3 further, they observed a second phase transition, in which vortices and antivortices are suppressed and a six-state clock phase emerges. But this symmetry was constrained even further: across the whole system, the six possible spin orientations could themselves be arranged in just two distinct ways. This interplay between six- and two-fold anisotropy ultimately gives rise to stable long-range magnetic order, just as earlier theories had predicted.

Through their experimental validation, the team’s results shed new light on the rich and unexpected magnetic phenomena that can emerge in 2D materials. By revealing two distinct phases, the work highlights how magnetism can arise in fundamentally different ways from those seen in more familiar three-dimensional materials.

“More broadly, these results establish atomically thin magnets as a powerful platform for exploring topological phase transitions and may inspire new approaches to controlling magnetism at the nanoscale for future ultracompact technologies,” Baldini says.

The findings are reported in Nature Materials.
