Long-distance quantum sensor network advances the search for dark matter

A new way of searching for dark-matter candidate particles called axions has produced the tightest constraint yet on how they can interact with normal matter. Using a two-city network of quantum sensors based on nuclear spins, physicists in China narrowed the possible values of a parameter known as the axion–nucleon coupling below a limit previously set by astrophysical observations. As well as providing insights into the nature of dark matter, the technique could aid investigations of other beyond-the-Standard-Model phenomena such as axion stars, axion strings and Q-balls.

Dark matter is thought to make up over 25% of the universe’s mass, but it has never been detected directly. Instead, we infer its existence from its gravitational interactions with visible matter and its effect on the large-scale structure of the universe.

While the Standard Model of particle physics does not incorporate dark matter, several physicists have proposed ideas for how to bring it into the fold. One of the most promising involves particles called axions. First hypothesized in the 1970s as a way of explaining unresolved questions about charge-parity violation, axions are chargeless and much less massive than electrons. This means they interact only weakly with matter and electromagnetic radiation.

According to theoretical calculations, the Big Bang should have produced axions in abundance. During phase transitions in the early universe, these axions would have formed topological defects – defects that study leader Xinhua Peng of the University of Science and Technology of China (USTC) says should, in principle, be detectable. “These defects are expected to interact with nuclear spins and induce signals as the Earth crosses them,” Peng explains.

A new axion search method

The problem, Peng continues, is that such signals are expected to be extremely weak and transient. She and her colleagues therefore developed an alternative axion search method that exploits a different predicted behaviour.

When fermions (particles with half-integer spin) interact, or couple, with axions, they should produce a pseudo-magnetic field. Peng and colleagues looked for evidence of this interaction using a network of five quantum sensors, four in Hefei and one in Hangzhou. These sensors combined a large ensemble of polarized rubidium-87 (⁸⁷Rb) atoms with polarized xenon-129 (¹²⁹Xe) nuclear spins.

“Using nuclear spins has many advantages,” Peng explains. “These include a higher energy resolution detection for topological dark matter (TDM) axions thanks to a much smaller gyromagnetic ratio of nuclear spins; substantial spin amplification owing to the high ensemble density of noble-gas spins; and efficient optimal filtering enabled by the long nuclear-spin coherence time.”

The USTC researchers’ setup also has other advantages over previous laboratory-based TDM searches, including the Global Network of Optical Magnetometers for Exotic physics searches (GNOME). While GNOME operates in a steady-state detection mode, the USTC researchers use a detection scheme that probes transient “free-decay oscillating” signals generated on spins after a TDM crossing. The USTC team also implemented a dual-phase optimal filtering algorithm to extract TDM signals with a signal-to-noise ratio at the theoretical maximum.

Peng tells Physics World that these advantages enabled the team to explore regions of TDM parameter space well beyond limits set by astrophysical searches. The transient-state detection scheme also enables sensitive searches for TDM in the region where the axion mass exceeds 100 peV – a region that GNOME cannot access.
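
To get a feel for why these signals are transient and why a geographically spread network helps, it is worth converting the quoted axion masses into defect sizes and crossing times. The sketch below uses the standard rough assumptions that a domain-wall-type defect is about one Compton wavelength thick and that the Earth sweeps through it at typical galactic velocities; neither number comes from the USTC paper.

```python
# Order-of-magnitude sketch (not from the paper): an axion domain wall is
# roughly one Compton wavelength thick, hbar/(m_a * c), and the Earth crosses
# it at ~300 km/s, so the induced spin signal lasts only milliseconds.

hbar_c_eV_m = 1.97327e-7      # hbar*c in eV·m
v_rel = 3.0e5                 # assumed Earth-defect relative velocity, m/s

for m_a_eV in (10e-12, 84e-12, 0.2e-6):      # axion masses quoted in the article
    width_m = hbar_c_eV_m / m_a_eV           # defect thickness ~ Compton wavelength
    crossing_s = width_m / v_rel             # duration of the transient signal
    print(f"m_a = {m_a_eV/1e-12:9.0f} peV: width ~ {width_m:8.1f} m, "
          f"crossing ~ {crossing_s*1e3:7.3f} ms")
```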

Most stringent constraints

The researchers have not yet recorded a statistically significant topological crossing event using their setup, so the dark matter search is not over. However, they have set more stringent constraints on axion–nucleon coupling across a range of axion masses from 10 peV to 0.2 μeV. Notably, they calculated that the coupling strength must be greater than 4.1 × 10¹⁰ GeV at an axion mass of 84 peV. This limit is stricter than those obtained from astrophysical observations, though Peng notes that these rely on different assumptions.

Peng says the technique developed in this study, which is published in Nature, could lead to the development of even larger, more sensitive networks for detecting transient spin signals such as those from TDM. It also opens new avenues for investigating other physical phenomena beyond the Standard Model that have been theoretically proposed, but have so far lacked a pathway for experimental exploration.

The researchers now plan to increase the number of sensor stations in their network and extend their geographical baselines to intercontinental and even space-based scales. Peng explains that doing so will enhance the network’s detection sensitivity and boost signal confidence. “We also want to enhance the sensitivity of individual sensors via better spin polarization, longer coherence times and advanced quantum control techniques,” she says. Switching to a ³He–K system, she adds, could boost their current spin-rotation sensitivity by up to four orders of magnitude.

Pathways to a career in quantum: what skills do you need?

Careers in Quantum, which was held on 5 March 2026, is an unusual event. Now in its seventh year, it’s entirely organized by PhD students who are part of the Quantum Engineering Centre for Doctoral Training (CDT) at the University of Bristol in the UK.

As well as giving them valuable practical experience of creating an event featuring businesses in the burgeoning quantum sector, it also lets them build links with the very firms they – and the students and postdocs who attended – might end up working for.

A clever win-win if you like, with the day featuring talks, a panel discussion and a careers fair made up of companies such as Applied Quantum Computing, Duality, Hamamatsu, Orca Computing, Phasecraft, QphoX, Riverlane, Siloton and Sparrow Quantum.

IOP Publishing featured too, with Antigoni Messaritaki talking about her journey from researcher to senior publisher and Physics World features and careers editor Tushna Commissariat taking part in a panel discussion on careers in quantum.

The importance of communication and other “soft skills” was emphasized by all speakers in the discussion, but what struck me most was a comment by Carrie Weidner, a lecturer in quantum engineering at Bristol, who underlined that it’s fine – in fact important – to learn to fail.

“If you’re resilient and can think critically, you can do anything,” said Weidner, who is also director of the quantum-engineering CDT. She warned too of the dangers of generative AI, joking that “every time you use ChatGPT, your brain is atrophying”.

Another great talk was by Diya Nair, a computer-science undergraduate at the University of Birmingham, who is head of global outreach and UK ambassador for Girls in Quantum.

The organization is now active in almost 70 countries around the world, with the aim of “democratizing quantum education”. As Nair explained, Girls in Quantum does everything from arranging quantum-computing courses and hackathons to creating its crowdfunded quantum-computing game, Hop.

The event also included a discussion about taking quantum research “from concept to commercialization”. It featured Jack Russel Bruce from Universal Quantum, Euan Allen from eye-imaging tech firm Siloton, Joe Longden from Duality Quantum Photonics, and Stewart Noakes, who has mentored numerous companies over the years.

Noakes emphasized that all hi-tech firms have three main needs: talent, money and ideas. In fact, as he explained, companies can sometimes suffer from having too much money as well as too little, especially if they grow too fast and hire people on big salaries who might then need to be let go if funding dries up.

Bruce, though, was positive about the overall state of the quantum-tech sector. “For me, the future is bright,” he said. But as all speakers underlined, if you want to join the industry, make sure you’ve got good communication skills, an open-minded attitude – and a willingness to learn on the go.

Metamaterial antennas enhance MR images of the eye and brain

MRI is one of the most important imaging tools employed in medical diagnostics. But for deep-lying tissues or complex anatomic features, MRI can struggle to create clear images in a reasonable scan time. A research team led by Thoralf Niendorf at the Max Delbrück Center in Germany is using metamaterials to create a compact radiofrequency (RF) antenna that enhances image quality and enables faster MRI scanning.

Imaging the subtle structures of the eye and orbit (the surrounding eye socket) is a particular challenge for MRI, due to the high spatial resolution and small fields-of-view required, which standard MRI systems struggle to achieve. These limitations are generally due to the antennas (or RF coils) that transmit and receive the RF signals. Increasing the sensitivity of these antennas will increase signal strength and improve the resolution of the resulting MR images.

To achieve this, Niendorf and colleagues turned to electromagnetic metamaterials – artificially manufactured, regularly arranged structures made of periodic subwavelength unit cells (UCs) that interact with electromagnetic waves in ways that natural materials do not. They designed the metamaterial UCs based on a double-square split-ring resonator design, tailored for operation at a high magnetic field strength of 7.0 T.
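
Being “tailored for operation at 7.0 T” essentially means the resonators must work at the proton Larmor frequency of such a scanner. A quick check, using the standard proton gyromagnetic ratio (the article itself quotes only the field strength):

```python
# Proton Larmor frequency at 7.0 T, f = (gamma/2pi) * B0. The gyromagnetic
# ratio is a textbook constant; only the 7.0 T field comes from the article.

gamma_over_2pi = 42.577e6   # proton gyromagnetic ratio, Hz per tesla
B0 = 7.0                    # static field strength, tesla

print(f"Larmor frequency at {B0} T: {gamma_over_2pi * B0 / 1e6:.0f} MHz")  # ~298 MHz
```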

Metamaterials improve transmit–receive performance

In their latest study, led by doctoral student Nandita Saha and reported in Advanced Materials, the researchers created a metamaterial-integrated RF antenna (MTMA) by fabricating the UCs into a 5 x 8 array. They built two configurations: a planar antenna (planar-MTMA); and a version with a 90° bend in the centre (bend-MTMA) to conform to the human face. For comparison, they also built conventional counterparts without the metamaterial (planar-loop and bend-loop).

The researchers simulated the MRI performances of the four antennas and validated their findings via measurements at 7.0 T. Tests in a rectangular phantom showed that the planar-MTMA demonstrated between 14% and 20% higher transmit efficiency than the planar-loop (assessed via B₁+ mapping).

They next imaged a head phantom, placing planar antennas behind the head to image the occipital lobe (the part of the brain involved in visual processing) and bend antennas over the eyes for ocular imaging. For the planar antennas, B₁+ mapping revealed that the planar-MTMA generated around 21% (axial), 19% (sagittal) and 13% (coronal) higher intensity than the planar-loop. Gradient-echo imaging showed that planar-MTMA also improved the receive sensitivity, by 106% (axial), 94% (sagittal) and 132% (coronal).
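
As a rough guide to what such receive-sensitivity gains buy in practice, one can use the standard signal-averaging relation that SNR grows with the square root of acquisition time, so reaching a fixed SNR takes a time that scales as 1/gain². The relation and the calculation below are our illustration; the study itself reports only the percentage gains.

```python
# Rough translation of receive-sensitivity gains into potential scan-time
# savings, assuming SNR ~ sqrt(acquisition time) (standard averaging relation).

gains = {"axial": 2.06, "sagittal": 1.94, "coronal": 2.32}   # 106%, 94%, 132% gains

for plane, g in gains.items():
    print(f"{plane:9s}: same SNR in ~{1.0 / g**2:.0%} of the original scan time")
```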

Antenna design and deployment

The bend antennas exhibited similar trends, with B₁+ maps showing transmit gains of roughly 20% for the bend-MTMA over the bend-loop. The bend-MTMA also outperformed the bend-loop in terms of receive signal intensity, by approximately 30%.

“With the metamaterials we developed, we were able to guide and modulate the RF fields generated in MRI more efficiently,” says Niendorf. “By integrating metamaterials into MRI antennas, we created a new type of transmitter and detector hardware that increases signal strength from the target tissue, improves image sharpness and enables faster data acquisition.”

In vivo imaging

Importantly, the new MRI antenna design is compatible with existing MRI scanners, meaning that no new infrastructure is needed for use in the clinic. The researchers validated their technology in a group of volunteers, working closely with partners at Rostock University Medical Center.

Before use on human subjects, the researchers evaluated the MRI safety of the four antennas. All configurations remained well below the IEC’s specific absorption rate (SAR) limit. They also assessed the bend-MTMA (which showed the highest SAR) using MR thermometry and fibre optic sensors. After 30 min at 10 W input power, the temperature increased by about 1.5 °C. At 5 W, the increase was below 0.5 °C, well within IEC safety thresholds; this lower power was therefore used for the in vivo MRI exams.

The team first performed MRI of the eye and orbit in three healthy adults, using the bend-loop and bend-MTMA antennas positioned over the eyes. Across all volunteers, the bend-MTMA exhibited better transmit performance in the ocular region than the bend-loop.

The bend-MTMA antenna also generated larger intraocular signals than the bend-loop (assessed via T2-weighted turbo spin-echo imaging), with signal increases of 51%, 28% and 25% in the left eyes, for volunteers 1, 2 and 3, respectively, and corresponding gains of 27%, 26% and 29% for their right eyes. Overall, the bend-MTMA provided more uniform and higher-intensity signal coverage of the ocular region at 7.0 T than the bend-loop.

To further demonstrate clinical application of the bend-MTMA, the team used it to image a volunteer with a retinal haemangioma in their left eye. A 7.0 T MRI scan performed 16 days after treatment revealed two distinct clusters of structural change due to the therapy. In addition, one of the volunteer’s ocular scans revealed a sinus cyst, an unexpected finding that showed the diagnostic benefit of the bend-MTMA being able to image beyond the orbit and into the paranasal sinuses and inferior frontal lobe.

The team used the planar antennas to image the occipital lobe, a clinically relevant target for neuro-ophthalmic examinations. The planar-MTMA exhibited significantly higher transmit efficiency than the planar-loop, as well as higher signal intensity and wider coverage, enhancing the anatomical depiction of posterior brain regions.

“Clearer signals and better images could open new doors in diagnostic imaging,” says Niendorf. “Early ophthalmology applications could include diagnostic confirmation of ambiguous ophthalmoscopic findings, visualization and local staging of ocular masses, 3D MRI, fusion with colour Doppler ultrasound, and physio-metabolic imaging to probe iron concentration or water diffusion in the eye.”

He notes that with slight modifications, the new antennas could enable MRI scans depicting the release and transport of drugs within the body. Their geometry and design could also be tuned to image organs such as the heart, kidneys or brain. “Another pioneering clinical application involves thermal magnetic resonance, which adds a thermal intervention dimension to an MRI device and integrates diagnostic guidance, thermal treatment and therapy monitoring facilitated by metamaterial RF antenna arrays,” he tells Physics World.

Laser-written glass plates could store data for thousands of years

Humans are generating more data than ever before. While much of these data do not need to be stored long-term, some – such as scientific and historical records – would ideally still be retrievable in decades, or even centuries. The problem is that modern digital archive systems such as hard disk drives do not last that long. This means that data must regularly be transferred to new media, which is costly and time-consuming.

A team at Microsoft Research now claims to have found a solution. By using ultrashort, intense laser pulses to “write” data units called phase voxels into glass chips, the team says it has created a medium that could store 4.8 terabytes (TB) of data error-free for more than 10,000 years – a span that exceeds the age of history’s oldest surviving written records.

Direct laser writing

The idea of writing data into glass or other durable media with lasers is not new. Direct laser writing, as it is known, involves focusing high-power pulses, usually just femtoseconds (10⁻¹⁵ s) long, on a three-dimensional region within a medium. This modifies the medium’s optical properties in that region, and each modified region becomes a data-storage unit known as a voxel, which is the 3D equivalent of a pixel.

Because the laser’s energy is focused on a very small volume, the voxels created with this method can be very densely packed. Changing the amplitude and polarization of the laser’s output changes what information gets encoded at each voxel, and an optical microscope can “read out” this information by picking up changes in the light as it passes through each modified region. In terms of the media used, glass is particularly promising because it is thermally and chemically stable and is robust to moisture and electromagnetic interference.

Direct laser writing does have some limitations, however. In particular, encoding information generally requires multiple laser pulses per voxel, restricting the technique’s throughput and efficiency.

Two types of voxel, one laser pulse

Microsoft Research’s “Project Silica” team says it overcame this problem by encoding information in two types of voxel: phase voxels and birefringent voxels. Both types involve modifying the refractive index of the medium, and thus the speed of light within it. The difference is that whereas phase voxels create an isotropic change in the refractive index, birefringent voxels create an anisotropic change whose orientation can be rotated within the plane of the 120-mm square, 2-mm-thick glass chip.

Crucially, both types of voxel can be produced using a single laser pulse. According to Project Silica team leader Richard Black, this makes the modified region smaller and more uniform, minimizing effects such as light scattering that can interfere with read-outs from neighbouring voxels. It also allows many voxel layers to be written into, and then read out from, a single glass chip. The result is a system that can generate up to 10 million voxels per second, which equates to 25.6 million bits of data per second (25.6 Mbit s⁻¹).

Performance of different types of glass

The Microsoft researchers studied two types of glass, both of which have better mechanical properties than ordinary window glass. In 301 layers of fused silica glass, they achieved a data density of 1.59 Gbit mm⁻³ using birefringent voxels, with a write throughput of 25.6 Mbit s⁻¹ and a write efficiency of 10.1 nJ per bit. In 258 layers of borosilicate glass, the data density reached 0.678 Gbit mm⁻³ using phase voxels. Here, the write throughput was 18.4 Mbit s⁻¹ and the write efficiency 8.85 nJ per bit.
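
A quick back-of-envelope check ties these numbers together. Using only the figures quoted above and the 4.8 TB chip capacity mentioned earlier (the arithmetic is ours, not the Microsoft team’s):

```python
# Back-of-envelope consistency check of the reported Project Silica figures.

voxel_rate = 10e6            # voxels written per second
bit_rate = 25.6e6            # bits per second (fused silica, birefringent voxels)
print(f"bits encoded per voxel: {bit_rate / voxel_rate:.2f}")        # ~2.56

capacity_bits = 4.8e12 * 8   # 4.8 TB chip capacity in bits
print(f"time to fill one chip: {capacity_bits / bit_rate / 86400:.1f} days")  # ~17 days

energy_per_bit = 10.1e-9     # J per bit (fused silica write efficiency)
print(f"laser energy per chip: {capacity_bits * energy_per_bit / 1e3:.0f} kJ")
```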

“The phase voxel discovery in particular is quite notable because it lets us store data in ordinary borosilicate glass, rather than pure fused silica; do it with a single laser pulse per voxel; and do it highly parallel in close proximity,” says Black. “That combination of cheaper material and much simpler and faster writing and reading was a genuinely exciting moment for us.”

The researchers also showed that they could directly inscribe the glass using four independent laser beams in parallel, further increasing the write speeds for both types of glass.

Surviving “benign neglect”

To determine how long these inscribed glass plates could store data, the team repeatedly heated them to 500 °C, simulating their long-term ageing at lower temperatures. The results of these experiments suggest that encoded data could be retrieved after 10,000 years of storage at 290 °C. However, Black acknowledges that this figure does not account for external effects such as mechanical stress or chemical corrosion that could degrade the glass and the data it stores. Another unaddressed challenge is that storage capacity and writing speed will both need to grow before the technology can compete with today’s data centres.
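
The accelerated-ageing extrapolation behind this estimate normally follows Arrhenius-style time–temperature scaling: a thermally activated decay process that takes hours at high temperature corresponds to a vastly longer time at lower temperature. A minimal sketch of how such an extrapolation works is below; the activation energy and bake duration are placeholder assumptions, as the article does not quote them.

```python
import math

# Minimal Arrhenius time-temperature extrapolation sketch. Both E_a and the
# bake time are ASSUMED placeholders, not values from the Microsoft study.

k_B = 8.617e-5    # Boltzmann constant, eV/K
E_a = 1.5         # assumed activation energy for voxel decay, eV

def equivalent_time(t_test_h, T_test_C, T_store_C):
    """Storage time at T_store matching t_test at T_test for a single
    thermally activated decay process."""
    T_test, T_store = T_test_C + 273.15, T_store_C + 273.15
    return t_test_h * math.exp(E_a / k_B * (1.0 / T_store - 1.0 / T_test))

# e.g. what storage time at 290 °C does a 100-hour bake at 500 °C probe?
print(f"~{equivalent_time(100, 500, 290) / 8760:.0f} years")
```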

If these deficiencies can be remedied, Black thinks the clearest potential applications would be in national libraries and other facilities that store scientific data and cultural records. “It’s also compelling for cloud archives where data is written once and kept indefinitely,” Black says. He points out that the team has already demonstrated proofs of concept with Warner Bros., the Global Music Vault and the Golden Record 2.0 project, a “cultural time capsule” inspired by the literal golden records launched on the Voyager spacecraft in the 1970s.

A common factor across all these organizations, Black explains, is that they need media that can survive “benign neglect” – something he says Project Silica delivers. He adds that the project also provides what he calls operational proportionality, meaning that its costs are primarily a function of the operations performed on the data, not the length of time the data are kept. “This completely alters the way we think about keeping archival material,” he says. “Once you have paid to keep the data, there is little point in deleting it, and you might as well keep it.”

Microsoft began exploring direct laser data storage in glass nearly a decade ago thanks to team member Ant Rowstron, who recognized the potential of work being done by physicist Peter Kazansky and colleagues at the University of Southampton, UK. The latest version of the technique, which is detailed in Nature, grew out of that collaboration, and Black says its capabilities are limited only by the power and speed of the femtosecond laser being used. “We have now concluded our research study and are sharing our results so that others may build on our work,” he says.

Ultrasound system solves the ‘unsticking problem’ in biomedical research

“Surround sound for biological cells,” is how Luke Cox describes the ultrasound technology that Impulsonics has developed to solve the “unsticking problem” in biomedical science. Cox is co-founder and chief executive of UK-based Impulsonics, which was spun out of the University of Bristol in 2023.

He is also my guest in this episode of the Physics World Weekly podcast. He explains why living cells grown in a petri dish tend to stick together, and why this can be a barrier to scientific research and the development of new medical treatments.

The system uses an array of ultrasound transducers to focus sound so that it frees up and manipulates cells in a way that does not alter their biological properties. This is unlike chemical unsticking processes, which can change cells and affect research results.

We also chat about Cox’s career arc from PhD student to chief executive and explore opportunities for physicists in the biomedical industry.

Scientists are failing to disclose their use of AI despite journal mandates, finds study

An analysis of more than 5.2 million papers in 5000 different journals has revealed a dramatic rise in the use of artificial intelligence (AI) tools in academic writing across all scientific disciplines, especially physics.

However, the analysis has revealed a big gap between the number of researchers who use AI and those who admit to doing so – even though most scientific journals have policies requiring the use of AI to be disclosed.

Carried out by data scientist Yi Bu from Peking University and colleagues, the analysis looks at papers that are listed in the OpenAlex dataset and were published between 2021 and 2025.

To assess the impact of editorial guidelines introduced in response to the growing use of generative AI tools such as ChatGPT, they examined journal AI-writing policies, looked at author disclosures and used AI to see if papers had been written with the help of technology.

The AI detection analysis reveals that the use of AI writing tools has increased dramatically across all scientific disciplines since 2023. It also finds that 70% of journals have adopted AI policies, which primarily require authors to disclose the use of AI-writing tools.

IOP Publishing, which publishes Physics World, for example, has a journals policy that supports authors who use AI in a “responsible and appropriate” manner. It encourages authors, however, to be “transparent about their use of any generative AI tools in either the research or the drafting of the manuscript”.

A new framework

But in the new study, a full-text analysis of 75,000 papers published since 2023 reveals that only 76 articles (about 0.1% of the total) explicitly disclosed the use of AI writing tools.

In addition, the study finds no significant difference in the use of AI between journals that have disclosure policies and those that do not, which suggests that disclosure requirements are being ignored – what the authors call a “transparency gap”.

The study also finds that researchers from non-English-speaking countries are more likely to rely on AI writing tools than native English speakers. Increases in the use of AI writing tools are found to be particularly rapid in journals with high levels of open-access publishing.

The authors now call for a re-evaluation of ethical frameworks to foster responsible AI integration in science. They state that prohibition or disclosure requirements are insufficient to regulate AI use, with their results showing that researchers are not complying with policies.

The authors argue that instead of “opposition and resistance”, “proactive engagement and institutional innovation” is needed “to ensure AI technology truly enhances the value of science”.

The humanity of machines: the relationship between technology and our bodies

Humanity has had a complicated relationship with machines and technology for centuries. While we created these inventions to make our lives easier, and have become heavily reliant upon them, we have often feared their impact on society.

In her debut book, The Body Digital: a Brief History of Humans and Machines from Cuckoo Clocks to ChatGPT, Vanessa Chang tells the story of this symbiotic partnership, covering tools as diverse as the self-playing piano and generative AI products. The short book combines creative storytelling, an inward look at our bodies and interpersonal relationships, and a detailed history of invention. Chang – who is the director of programmes at Leonardo, the International Society for the Arts, Sciences, and Technology in California – offers us a framework for examining future worlds based on the relationship between humanity and machines.

“Technology” has no easy definition. The Body Digital therefore takes a broad approach, looking at software, machines, infrastructure and tools. Chang examines objects as mundane as the pen and as complex as the road networks that define our cities. She focuses on the interplay between machine and human: how tools have lightened our load and become embedded in our behaviour. In doing this she asks the reader: is it possible for the human body to extract itself from technology?

Each chapter of the book centres on a different part of the human anatomy – hand, voice, ear, eye, foot, body and mind – looking at the historical relationship between that body part and technology. Chang follows this thread through to the modern day and the large-scale impact these technologies have had on the development of our communities, communications and social structures. The chapters are a vehicle for Chang to present interesting pieces of history and discussions about society and culture. Her explanations are tightly knit, and the book covers huge ground in its relatively concise page count.

Chang avoids “doomerism”, remaining even-handed about our reservations towards technological advancement. She is careful in her discussion of new technologies, particularly those that are often fraught in the public discourse, such as the use of generative AI in creating art and the potential harms of facial-recognition software.

She includes genuine concerns – like biases creeping into training data for large language models – but mitigates these fears by discussing how technologies have become enmeshed in human culture through history. Our fear of some technologies has been unfounded – take, for example, the idea that the self-playing piano would supersede live piano concerts. These debates, Chang argues, have happened throughout the history of technology, and some of the same arguments from the past can easily be applied to future technology.

While this commentary is often thought-provoking, it sometimes doesn’t go as far as it might. There is relatively limited discussion throughout the book about the technological ecosystem we currently live in and how that might affect our level of optimism about the future. In particular, the supplanting of human labour by machine labour, and the impact of tech monoliths like Apple and Google, receive relatively little attention.

In one example, Chang discusses the ways in which “telecommunication technologies might serve as channels into the afterlife”, allowing us to use technology to artificially recreate the voices of our loved ones after death. While the book contains a full discussion of how uncanny and alarming this type of “artistic necrophilia” might be, Chang tempers fear by pointing out that by being careful with our data, careful with our digital selves, we might be able to “mitigate the transformation of [our] voices into pure commodities”. However, the discussion of who controls our data, of the relationship between data and capital, and of how much control we have over its use, is somewhat limited.

Poetic technology

The difference between offering interesting ideas and overexplaining is a hard needle to thread, and one that Chang navigates successfully. One striking feature of The Body Digital is the quality of the prose. Chang has a background in fiction writing and her descriptions reflect this. An automaton is anthropomorphized as a “petite, barefoot boy” with a “cloud of brown hair”; and the humble footpath is described as “veer[ing] at a jaunty angle from the pavement, an unruly alternative to concrete”. As a consequence, her ideas are interesting and memorable, making the book readable and often moving.

Particularly impressive is Chang’s attitude to exposition, which mimics fiction’s age-old adage of “show, don’t tell”. She gives the reader enough information to learn something new in context and ask follow-up questions, without banging the reader over the head with an answer to these questions. The book mimics the same relationship between the written word and human consciousness that Chang discusses within it. The Body Digital marinates with the reader in the way any good novel might, while teaching them something new.

The result is a poetic and well-observed text, which offers the reader a different way of understanding humanity’s relationship with technology. It reminds us that we have coexisted with machines throughout the history of our species, and that they have been helpful and positively shaped the direction of our world. While she covers too much ground to gaze in any one direction for too long, the reader is likely to come away enriched and perhaps even hopeful. And, as Chang points out, we have the opportunity to shape the future of technology, by “attending to the rich, idiosyncratic intelligence of our bodies”.

  • 2025 Melville House Publishing 256pp £14.99 pb / £9.49 ebook

Making multipartite entanglement easier to detect

Genuine multipartite entanglement is the strongest form of entanglement, where every part of a quantum system is entangled with every other part. It plays a central role in advanced quantum tasks such as quantum metrology and quantum error correction. To detect this deep form of entanglement in practice, researchers often use entanglement witnesses, which are fast, experimentally friendly tests that certify entanglement whenever a measurable quantity exceeds a certain bound.
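
As a concrete picture of what such a witness looks like, the textbook example is the projector-based witness for the three-qubit GHZ state: a negative expectation value certifies genuine multipartite entanglement, and it tolerates up to about 57% white noise. The sketch below is this standard qubit example, not the qudit stabiliser-subspace construction developed in the paper.

```python
import numpy as np

# Textbook GHZ witness W = 0.5*I - |GHZ><GHZ|; Tr(W rho) < 0 certifies genuine
# multipartite entanglement (standard qubit example, not the paper's construction).

d = 2 ** 3
ghz = np.zeros(d)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)                 # (|000> + |111>)/sqrt(2)
W = 0.5 * np.eye(d) - np.outer(ghz, ghz)

def witness_value(p):
    """Expectation of W on a GHZ state mixed with white noise of weight p."""
    rho = p * np.eye(d) / d + (1 - p) * np.outer(ghz, ghz)
    return np.trace(W @ rho).real

for p in (0.0, 0.5, 4 / 7, 0.8):
    print(f"noise p = {p:.3f}:  Tr(W rho) = {witness_value(p):+.3f}")
# Negative for p < 4/7 ~ 0.571, i.e. the witness survives ~57% white noise.
```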

In this work, the researchers significantly extend previous witness‑construction methods to cover a much broader family of multipartite quantum states. Their approach is built within the multi‑qudit stabiliser formalism, a powerful framework widely used in quantum error correction and known for describing large classes of entangled states, both pure and mixed. They generalise earlier results in two major directions: (i) to systems with arbitrary prime local dimension, going far beyond qubits, and (ii) to stabiliser subspaces, where the stabiliser defines not just a single state but an entire entangled subspace.

This generalisation allows them to construct witnesses tailored to high‑dimensional graph states and to stabiliser‑defined subspaces, and they show that these witnesses can be more robust to noise than those designed for multiqubit systems. In particular, witnesses tailored to GHZ‑type states achieve the strongest resistance to white noise, and in some cases the authors identify the most noise‑robust witness possible within this construction. They also demonstrate that stabiliser‑subspace witnesses can outperform graph‑state witnesses when the local dimension is greater than two.

Overall, this research provides more powerful and flexible tools for detecting genuine multipartite entanglement in noisy, high‑dimensional and computationally relevant quantum systems. It strengthens our ability to certify complex entanglement in real‑world quantum technologies and opens the door to future extensions beyond the stabiliser framework.

Read the full article

Entanglement witnesses for stabilizer states and subspaces beyond qubits

Jakub Szczepaniak et al 2025 Rep. Prog. Phys. 88 117602

Do you want to learn more about this topic?

Focus on Quantum Entanglement: State of the Art and Open Questions guest edited by Anna Sanpera and Carlo Marconi (2025-2026)

Resolving the spin of sound

Acoustic waves are usually thought of as purely longitudinal, moving back and forth in the direction the wave is travelling and therefore carrying no intrinsic rotation, or spin (spin-0). Recent work has shown that acoustic waves can in fact carry local spin-like behaviour. However, until now, the total spin angular momentum of an acoustic field was believed to vanish, with the local positive and negative spin contributions cancelling each other to give an overall global spin-0. In this work, the researchers show that acoustic vortex beams can carry a non-zero longitudinal spin angular momentum when the beam is guided by certain boundary conditions. This overturns the long-held assumption that longitudinal waves cannot possess a global spin degree of freedom.
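
For context, the local quantity behind these statements – the acoustic spin angular momentum density commonly used in this literature (the paper’s own notation and conventions may differ) – can be written for a monochromatic field with complex velocity amplitude v as

```latex
\mathbf{s} \;=\; \frac{\rho_0}{2\omega}\,\operatorname{Im}\!\left(\mathbf{v}^{*}\times\mathbf{v}\right),
```

where ρ₀ is the equilibrium mass density and ω the angular frequency; the global spin at issue here is this density integrated over the beam’s cross-section.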

Using a self-consistent theoretical framework, the researchers derive the full spin, orbital and total angular momentum of these beams and reveal a new kind of spin–orbit interaction that appears when the beam is compressed or expanded. They also uncover a detailed relationship between the two competing descriptions of angular momentum in acoustics: the canonical (Minkowski) and kinetic (Abraham) forms. They demonstrate that only the canonical (Minkowski) form is truly conserved and directly tied to the beam’s azimuthal quantum number, which describes how the wave twists as it travels.

The team further demonstrates this mechanism experimentally using a waveguide with a slowly varying cross-section. They show that the effect is not limited to this setup: it can also arise in evanescent acoustic fields and even in other wave systems such as electromagnetism. These results introduce a missing fundamental degree of freedom in longitudinal waves, offer new strategies for manipulating acoustic spin and orbital angular momentum, and open the door to future applications in wave-based devices, underwater communication and particle manipulation.

Read the full article

Longitudinal acoustic spin and global spin–orbit interaction in vortex beams

Wei Wang et al 2025 Rep. Prog. Phys. 88 110501

Do you want to learn more about this topic?

Acoustic manipulation of multi-body structures and dynamics by Melody X Lim, Bryan VanSaders and Heinrich M Jaeger (2024)

Quantum memories could help make long-baseline optical astronomy a reality

Quantum-entangled sensors placed over a kilometre apart could allow interferometric measurements of optical light with single photon sensitivity, experiments in the US suggest. While this proof-of-principle demonstration of a theoretical proposal first made in 2012 is not yet practically useful for astronomy, it marks a significant step forward in quantum sensing.

Radio telescopes are often linked together to provide more detailed images with better angular resolution than would otherwise be possible. The Event Horizon Telescope array, for example, performs very long baseline interferometry of signals from observatories on four continents to take astrophysical images such as the first picture of a black hole in 2019. At shorter wavelengths, however, much weaker signals are often parcelled into higher-energy photons. “You start getting this granularity at the single photon level,” says Pieter-Jan Stas at Harvard University.

According to textbook quantum mechanics, one can create an interferometric image from single photons by recombining their paths at a single detector – provided that their paths are not measured before then. This principle is used in laboratory spectroscopy. In astronomical observations, however, attempting to transport single photons from widely spread telescopes to a central detector would almost certainly result in them being lost. The baseline of infrared and optical telescopes is therefore restricted to about 300 m.
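
The pay-off of longer baselines is easy to quantify: an interferometer’s angular resolution scales roughly as wavelength divided by baseline. The wavelength and the simple λ/B criterion below are illustrative choices, not figures from the study.

```python
# Angular resolution ~ wavelength / baseline: comparing today's ~300 m optical
# limit with the 1.5 km fibre link used in the experiment (illustrative only).

wavelength = 550e-9                                  # visible light, metres (assumed)
rad_to_mas = 180 / 3.141592653589793 * 3600 * 1e3    # radians -> milliarcseconds

for baseline_m in (300.0, 1500.0):
    print(f"baseline {baseline_m:6.0f} m -> ~{wavelength / baseline_m * rad_to_mas:.2f} mas")
```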

In 2012, theorist Daniel Gottesman, then at the Perimeter Institute for Theoretical Physics in Canada, and colleagues proposed using a central single source of entangled photons as a quantum repeater to generate entanglement between two detection sites, putting them into the same quantum state. The effect of an incoming photon on this combined state could therefore be measured without having to recombine the paths and collect the photon at a central detector.

Hidden information

“In reality, the photon will be in a superposition of arriving at both of the detectors,” says Stas. “That’s where this advantage comes from – you have this photon that is delocalized and arrives at both the left and the right station – so you truly have this baseline that helps you with improving your resolution, but to do this you have to keep the ‘which path’ information hidden.”

The 2012 proposal was not thought to be practical, because it required distributing entanglement at a rate comparable with the telescope’s spectral bandwidth. In 2019, however, Harvard’s Mikhail Lukin and colleagues proposed integrating a quantum memory into the system. In the new research, they demonstrate this in practice.

The team used qubits made from silicon–vacancy centres in diamond. These can be very long lived because the spin of the centre’s electron (which interacts with the photon) is mapped to the nuclear spin, which is very stable. The researchers used a central laser as a coherent photon source to generate heralded entanglement, certifying that the qubits were event-ready. “It’s not like you have to receive the space signal to be simultaneous with the arrival of the photon,” says team member Aziza Suleymanzade at the University of California, Berkeley. “In our case, we distribute entanglement, and it has some coherence time, and during that time you can detect your signal.”

Using two detectors placed in adjacent laboratories and synthetic light sources, the researchers demonstrated photon detection above vacuum fluctuations in fibres over 1.5 km in length. They acknowledge that much work remains before this can be viable in practical astronomy, such as a higher rate of entanglement generation, but Stas says that “this is one step towards bringing quantum techniques into sensing”.

Similar work in China

The research is described in Nature. Researchers in China led by Jian-Wei Pan have achieved a similar result, but their work has yet to be peer reviewed.

Yujie Zhang of the University of Waterloo in Canada points out that Lukin and colleagues have done similar work on distributed quantum communication and the quantum internet. “The major difference is that for most of the original protocols, what people care about is trying to entangle different quantum memories in the quantum network so then they can do gates on those quantum memories,” he says. “There’s nothing about extra information from the environment… This one is different in that they have to get the information mapped from the starlight to their quantum memory.” He notes several difficulties acknowledged by the researchers – such as the very narrow bandwidth of the vacancy centres – but says that now that the system is known to work, researchers can focus on showing that it can beat classical systems in practice.

“I think this is definitely a step towards [realizing the protocol envisaged in 2012],” says Gottesman, now at the University of Maryland, College Park. “There have been previous experiments where they generated the entanglement and they did some interference but they didn’t have the repeater aspect, which is the real value-added aspect of doing quantum-assisted interferometry. Its rate is still well short of what you’d need to have a functioning telescope, but this is putting one of the important pieces into place.”

Copyright © 2026 by IOP Publishing Ltd and individual contributors