The particle physics community is in the vanguard of a global effort to realize the potential of quantum computing hardware and software for all manner of hitherto intractable research problems across the natural sciences. The end-game? A paradigm shift – dubbed “quantum advantage” – where calculations that are unattainable or extremely expensive on classical machines become possible, and practical, with quantum computers.
A case study in this regard is the Institute of High Energy Physics (IHEP), the largest basic science laboratory in China and part of the Chinese Academy of Sciences. Headquartered in Beijing, IHEP hosts a multidisciplinary scientific programme spanning elementary particle physics, astrophysics as well as the planning, design and construction of large-scale accelerator projects – among them the China Spallation Neutron Source, which launched in 2018, and the High Energy Photon Source, due to come online in 2025.
Quantum opportunity
Notwithstanding its ongoing investment in experimental infrastructure, IHEP is increasingly turning its attention to the application of quantum computing and quantum machine-learning technologies to accelerate research discovery. In short, it is exploring use-cases in theoretical and experimental particle physics where quantum approaches promise game-changing scientific breakthroughs. A core partner in this endeavour is the Institute of Frontier and Interdisciplinary Science at Shandong University (SDU), home to another of China’s top-tier research programmes in high-energy physics (HEP).
With senior backing from Weidong Li and Xingtao Huang – physics professors at IHEP and SDU, respectively – the two laboratories began collaborating on the applications of quantum science and technology in summer 2022. This was followed by the establishment of a joint working group 12 months later. Operationally, the Quantum Computing for Simulation and Reconstruction (QC4SimRec) initiative comprises eight faculty members (drawn from both institutes) and is supported by a multidisciplinary team of two postdoctoral scientists and five PhD students.
Hideki Okawa “IHEP and SDU are creating a global player in the application of quantum computing and quantum machine-learning to HEP problems.” (Courtesy: IHEP)
“QC4SimRec is part of IHEP’s at-scale quantum computing effort, tapping into cutting-edge resource and capability from a network of academic and industry partners across China,” explains Hideki Okawa, a professor who heads up quantum applications research at IHEP (as well as co-chairing QC4SimRec alongside Teng Li, an associate professor in SDU’s Institute of Frontier and Interdisciplinary Science). “The partnership with SDU is a logical progression,” he adds, “building on a track-record of successful collaboration between the two centres in areas like high-performance computing, offline software and machine-learning applications for a variety of HEP experiments.”
Right now, Okawa, Teng Li and the QC4SimRec team are set on expanding the scope of their joint research activity. One principal line of enquiry focuses on detector simulation – i.e. simulating particle shower development in the calorimeter, which is one of the most demanding tasks for the central processing unit (CPU) in collider experiments. Other early-stage applications include particle tracking, particle identification, and analysis of the fundamental physics of particle dynamics and collisions.
“Working together in QC4SimRec,” explains Okawa, “IHEP and SDU are intent on creating a global player in the application of quantum computing and quantum machine-learning to HEP problems.”
Sustained scientific impact, of course, is contingent on recruiting the brightest and best talent in quantum hardware and software, with IHEP’s near-term focus directed towards engaging early-career scientists, whether from domestic or international institutions. “IHEP is very supportive in this regard,” adds Okawa, “and provides free Chinese language courses to fast-track the integration of international scientists. It also helps that our bi-weekly QC4SimRec working group meetings are held in English.”
A high-energy partnership
Around 700 km south-east of Beijing, the QC4SimRec research effort at SDU is overseen by Xingtao Huang, dean of the university’s Institute of Frontier and Interdisciplinary Science and an internationally recognized expert in machine-learning technologies and offline software for data processing and analysis in particle physics.
“There’s huge potential upside for quantum technologies in HEP,” he explains. In the next few years, for example, QC4SimRec will apply innovative quantum approaches to build on SDU’s pre-existing interdisciplinary collaborations with IHEP across a range of HEP initiatives – including the Beijing Spectrometer III (BESIII), the Jiangmen Underground Neutrino Observatory (JUNO) and the Circular Electron-Positron Collider (CEPC).
Big science, quantum advantage QC4SimRec will apply innovative quantum approaches to build on SDU’s pre-existing interdisciplinary collaborations with IHEP across a range of HEP initiatives, including the Jiangmen Underground Neutrino Observatory (above). (Courtesy: Yuexiang Liu)
One early-stage QC4SimRec project evaluated quantum machine-learning techniques for the identification and discrimination of muon and pion particles within the BESIII detector. Comparison with traditional machine-learning approaches shows equivalent performance on the same datasets and, by extension, the feasibility of applying quantum machine-learning to data analysis in next-generation collider experiments.
“This is a significant result,” explains Huang, “not least because particle identification – the identification of charged-particle species in the detector – is one of the biggest challenges in HEP experiments.”
Xingtao Huang “Our long-term goal is to establish a joint national laboratory with dedicated quantum computing facilities across both campuses.” (Courtesy: SDU)
Huang is currently seeking to recruit senior-level scientists with quantum and HEP expertise from Europe and North America, building on a well-established faculty team of 48 staff members (32 of them full professors) working on HEP. “We have several open faculty positions at SDU in quantum computing and quantum machine-learning,” he notes. “We’re also interested in recruiting talented postdoctoral researchers with quantum know-how.”
As a signal of intent, and to raise awareness of SDU’s global ambitions in quantum science and technology, Huang and colleagues hosted a three-day workshop (co-chaired by IHEP) last summer to promote the applications of quantum computing and classical/quantum machine-learning in particle physics. The inaugural event attracted more than 100 attendees and speakers, including several prominent international participants. A successful follow-on workshop was held in Changchun earlier this year, with planning well under way for the next instalment in 2025.
Along a related coordinate, SDU has launched a series of online tutorials to support aspiring Masters and PhD students keen to further their studies in the applications of quantum computing and quantum machine-learning within HEP.
“Quantum computing is a hot topic, but there’s still a relatively small community of scientists and engineers working on HEP applications,” concludes Huang. “Working together, IHEP and SDU are building the interdisciplinary capacity in quantum science and technology to accelerate frontier research in particle physics. Our long-term goal is to establish a joint national laboratory with dedicated quantum computing facilities across both campuses.”
One thing is clear: the QC4SimRec collaboration offers ambitious quantum scientists a unique opportunity to progress alongside China’s burgeoning quantum ecosystem – an industry, moreover, that’s being heavily backed by sustained public and private investment. “For researchers who want to be at the cutting edge in quantum science and HEP, China is as good a place as any,” Okawa concludes.
For further information about QC4SimRec opportunities, please contact Hideki Okawa at IHEP or Xingtao Huang at SDU.
Quantum machine-learning for accelerated discovery
To understand the potential for quantum advantage in specific HEP contexts, QC4SimRec scientists are currently working on “rediscovering” the exotic particle Zc(3900) using quantum machine-learning techniques.
In terms of the back-story: Zc(3900) is an exotic subatomic particle made up of quarks (the building blocks of protons and neutrons) and believed to be the first tetraquark state observed experimentally – an observation that, in the process, deepened our understanding of quantum chromodynamics (QCD). The particle was discovered in 2013 using the BESIII detector at the Beijing Electron-Positron Collider (BEPCII), with independent observation by the Belle experiment at Japan’s KEK particle physics laboratory.
As part of their study, the IHEP–SDU team deployed a quantum support vector machine (QSVM) algorithm – a quantum variant of the classical support vector machine – training it on simulated Zc(3900) signals, with randomly selected events from the real BESIII data serving as background.
The quantum machine-learning approach achieves performance competitive with classical machine-learning systems – and, crucially, does so with a smaller training dataset and fewer data features. Investigations are ongoing to demonstrate enhanced signal sensitivity with quantum computing – work that could ultimately point the way to the discovery of new exotic particles in future experiments.
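For readers wanting a feel for the workflow, the sketch below sets up the same kind of signal-versus-background classification with an ordinary kernel support vector machine from scikit-learn, trained on toy data standing in for simulated Zc(3900) signals and BESIII background events. The features and numbers are invented for illustration; in the QC4SimRec study the classical kernel is replaced by a quantum kernel evaluated with a quantum feature map, which is not shown here.

```python
# Minimal sketch of signal/background classification with a kernel support vector
# machine on toy data; a QSVM would replace the classical kernel below with a
# quantum fidelity kernel evaluated on a simulator or quantum hardware.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(seed=0)

# Hypothetical kinematic features: "signal" events stand in for simulated
# Zc(3900) decays, "background" for randomly selected real events.
n_events, n_features = 2000, 4
signal = rng.normal(loc=0.5, scale=1.0, size=(n_events, n_features))
background = rng.normal(loc=-0.5, scale=1.0, size=(n_events, n_features))

X = np.vstack([signal, background])
y = np.concatenate([np.ones(n_events), np.zeros(n_events)])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Kernel SVM classifier: train on the labelled toy events, then score held-out data.
clf = SVC(kernel="rbf", C=1.0, gamma="scale", probability=True)
clf.fit(X_train, y_train)

scores = clf.predict_proba(X_test)[:, 1]
print(f"ROC AUC on toy data: {roc_auc_score(y_test, scores):.3f}")
```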
Optical traps and tweezers can be used to capture and manipulate particles using non-contact forces. A focused beam of light allows precise control over the position of and force applied to an object, at the micron scale or below, enabling particles to be pulled and captured by the beam.
Optical manipulation techniques are garnering increased interest for biological applications. Researchers from Massachusetts Institute of Technology (MIT) have now developed a miniature, chip-based optical trap that acts as a “tractor beam” for studying DNA, classifying cells and investigating disease mechanisms. The device – which is small enough to fit in your hand – is made from a silicon-photonics chip and can manipulate particles up to 5 mm away from the chip surface, while maintaining a sterile environment for cells.
The promise of integrated optical tweezers
Integrated optical trapping provides a compact route to accessible optical manipulation compared with bulk optical tweezers, and has already been demonstrated using planar waveguides, optical resonators and plasmonic devices. However, many such tweezers can only trap particles directly on (or within several microns of) the chip’s surface and only offer passive trapping.
To make optical traps sterile for cell research, 150-µm-thick glass coverslips are required. However, the short focal heights of many integrated optical tweezers mean that the light beams can’t penetrate into standard sample chambers. Because such devices can only trap particles a few microns above the chip, they are incompatible with biological research that requires particles and cells to be trapped at much larger distances from the chip’s surface.
With current approaches, the only way to overcome this is to remove the cells and place them on the surface of the chip itself. This process contaminates the chip, however, meaning that each chip must be discarded after use and a new chip used for every experiment.
Trapping device for biological particles
Lead author Tal Sneh and colleagues developed an integrated optical phased array (OPA) that can focus emitted light at a specific point in the radiative near field of the chip. To date, most OPA devices have been developed with LiDAR and optical communications applications in mind, so their capabilities have been limited to steering light beams in the far field using linear phase gradients. This approach does not generate the tightly focused beam required for optical trapping.
In their new approach, the MIT researchers used semiconductor manufacturing processes to fabricate a series of micro-antennas onto the chip. By creating specific phase patterns for each antenna, the researchers found that they could generate a tightly focused beam of light.
Each antenna’s optical signal was also tightly controlled by varying the input laser wavelength to provide an active spatial tuning for tweezing particles. The focused light beam emitted by the chip could therefore be shaped and steered to capture particles located millimetres above the surface of the chip, making it suitable for biological studies.
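The focusing principle itself is straightforward to sketch: if each antenna is driven with a phase that cancels its extra path length to the desired focal point, the emitted fields add constructively there, much like a discrete lens. The snippet below computes such a phase profile for a one-dimensional array; the wavelength, antenna pitch and focal height are illustrative assumptions, not the parameters of the MIT device.

```python
# Sketch: phase profile that focuses a 1D antenna array at a point above the chip.
# Geometry and wavelength are illustrative; they are not the MIT device's parameters.
import numpy as np

wavelength = 1.55e-6          # assumed operating wavelength (m)
k = 2 * np.pi / wavelength    # free-space wavenumber
n_antennas = 64
pitch = 2e-6                  # assumed antenna spacing (m)

x = (np.arange(n_antennas) - n_antennas / 2) * pitch  # antenna positions along the chip
focus = np.array([0.0, 5e-3])                         # focal point 5 mm above the chip

# Drive each antenna with a phase that cancels its path length to the focus,
# so all emitted fields arrive in phase there (a discrete lens).
path = np.hypot(x - focus[0], focus[1])
phase = (-k * path) % (2 * np.pi)

print(np.round(phase[:8], 3))  # required phases (radians) for the first few antennas
```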
The researchers used the OPA tweezers to optically steer and non-mechanically trap polystyrene microparticles at up to 5 mm above the chip’s surface. They also demonstrated stretching of mouse lymphoblast cells, in the first known cell experiment to use single-beam integrated optical tweezers.
The researchers point out that this is the first demonstration of trapping particles over millimetre ranges, with the operating distance of the new device orders of magnitude greater than that of other integrated optical tweezers. Plasmonic, waveguide and resonator tweezers, for example, can only operate at around 1 µm above the surface, while microlens-based tweezers have been able to operate at distances of 20 µm.
Importantly, the device is completely reusable and biocompatible, because the biological samples can be trapped and manipulated while remaining behind a sterile coverslip. This ensures that both the biological media and the chip stay free from contamination without needing complex microfluidics packaging.
The work in this study provides a new type of modality for integrated optical tweezers, expanding their use into the biological domain to perform experiments on proteins and DNA, for example, as well as to sort and manipulate cells.
The researchers say that they hope to build on this research by creating a device with an adjustable focal height for the light beam, as well as by introducing multiple trap sites to manipulate biological particles in more complex ways and employing the device to examine more biological systems.
Machine learning these days has a huge influence in physics, where it’s used in everything from the very practical (designing new circuits for quantum optics experiments) to the esoteric (finding new symmetries in data from the Large Hadron Collider). But it would be wrong to think that machine learning itself isn’t physics or that the Nobel committee – in honouring John Hopfield and Geoffrey Hinton – has been misguidedly seduced by some kind of “AI hype”.
Hopfield, 91, is a fully fledged condensed-matter physicist, who in the 1970s began to study the dynamics of biochemical reactions and its applications in neuroscience. In particular, he showed that the physics of spin glasses can be used to build networks of neurons to store and retrieve information. Hopfield applied his work to the problem of “associative memories” – how hearing a fragment of a song, say, can unlock a memory of the occasion we first heard it.
His work on the statistical physics and training of these “Hopfield networks” – and Hinton’s later on “Boltzmann machines” – paved the way for modern-day AI. Indeed, Hinton, a computer scientist, is often dubbed “the godfather of AI”. On the Physics World Weekly podcast, Anil Ananthaswamy – author of Why Machines Learn: the Elegant Maths Behind Modern AI – said Hinton’s contributions to AI were “immense”.
Of course, machine learning and AI are multidisciplinary endeavours, drawing on not just physics and mathematics, but neuroscience, computer science and cognitive science too. Imagine though, if Hinton and Hopfield had been given, say, a medicine Nobel prize. We’d have physicists moaning they’d been overlooked. Some might even say that this year’s Nobel Prize for Chemistry, which went to the application of AI to protein-folding, is really physics at heart.
We’re still in the early days for AI, which has its dangers. Indeed, Hinton quit Google last year so he could more freely express his concerns. But as this year’s Nobel prize makes clear, physics isn’t just drawing on machine learning and AI – it paved the way for these fields too.
In a new study, an international team of physicists has unified two distinct descriptions of atomic nuclei, taking a major step forward in our understanding of nuclear structure and strong interactions. For the first time, the particle physics perspective – where nuclei are seen as made up of quarks and gluons – has been combined with the traditional nuclear physics view that treats nuclei as collections of interacting nucleons (protons and neutrons). This innovative hybrid approach provides fresh insights into short-range correlated (SRC) nucleon pairs – fleeting configurations in which two nucleons come exceptionally close and interact strongly for mere femtoseconds. Although these interactions play a crucial role in the structure of nuclei, they have been notoriously difficult to describe theoretically.
“Nuclei (such as gold and lead) are not just a ‘bag of non-interacting protons and neutrons’,” explains Fredrick Olness at Southern Methodist University in the US, who is part of the international team. “When we put 208 protons and neutrons together to make a lead nucleus, they interact via the strong interaction force with their nearest neighbours; specifically, those neighbours within a ‘short range.’ These short-range interactions/correlations modify the composition of the nucleus and are a manifestation of the strong interaction force. An improved understanding of these correlations can provide new insights into both the properties of nuclei and the strong interaction force.”
To investigate the inner structure of atomic nuclei, physicists use parton distribution functions (PDFs). These functions describe how the momentum and energy of quarks and gluons are distributed within protons, neutrons, or entire nuclei. PDFs are typically obtained from high-energy experiments, such as those performed at particle accelerators, where nucleons or nuclei collide at close to the speed of light. By analysing the behaviour of the particles produced in these collisions, physicists can gain essential insights into their properties, revealing the complex dynamics of the strong interaction.
Traditional focus
However, traditional nuclear physics often focuses on the interactions between protons and neutrons within the nucleus, without delving into the quark and gluon structure of nucleons. Until now, these two approaches – one based on fundamental particles and the other on nuclear dynamics – remained separate. Now researchers in the US, Germany, Poland, Finland, Australia, Israel and France have bridged this gap.
The team developed a unified framework that integrates both the partonic structure of nucleons and the interactions between nucleons in atomic nuclei. This approach is particularly useful for studying SRC nucleon pairs, whose interactions have long been recognized as crucial to the structure of nuclei but have been notoriously difficult to describe using conventional theoretical models.
By combining particle and nuclear physics descriptions, the researchers were able to derive PDFs for SRC pairs, providing a detailed understanding of how quarks and gluons behave within these pairs.
“This framework allows us to make direct relations between the quark–gluon and the proton–neutron description of nuclei,” said Olness. “Thus, for the first time, we can begin to relate the general properties of nuclei (such as ‘magic number’ nuclei – those with a specific number of protons or neutrons that make them particularly stable – or ‘mirror nuclei’ – pairs of nuclei in which the numbers of protons and neutrons are interchanged) to the characteristics of the quarks and gluons inside the nuclei.”
Experimental data
The researchers applied their model to experimental data from scattering experiments involving 19 different nuclei, ranging from helium-3 (with two protons and one neutron) to lead-208 (with 208 protons and neutrons). By comparing their predictions with the experimental data, they were able to refine their model and confirm its accuracy.
The results showed a remarkable agreement between the theoretical predictions and the data, particularly when it came to estimating the fraction of nucleons that form SRC pairs. In light nuclei, such as helium, nucleons rarely form SRC pairs. However, in heavier nuclei like lead, nearly half of the nucleons participate in SRC pairs, highlighting the significant role these interactions play in shaping the structure of larger nuclei.
These findings not only validate the team’s approach but also open up new avenues for research.
“We can study what other nuclear characteristics might yield modifications of the short-ranged correlated pairs ratios,” explains Olness. “This connects us to the shell model of the nucleus and other theoretical nuclear models. With the new relations provided by our framework, we can directly relate elemental quantities described by nuclear physics to the fundamental quarks and gluons as governed by the strong interaction force.”
The new model can be further tested using data from future experiments, such as those planned at the Jefferson Lab and at the Electron–Ion Collider at Brookhaven National Laboratory. These facilities will allow scientists to probe quark–gluon dynamics within nuclei with even greater precision, providing an opportunity to validate the predictions made in this study.
Tie-dye, geopolitical tension and a digitized Abba back on stage. Our appetite for revisiting the 1970s shows no signs of waning. Science writer Ferris Jabr has now reanimated another idea that captured the era’s zeitgeist: the concept of a “living Earth”. In Becoming Earth: How Our Planet Came to Life Jabr makes the case that our planet is far more than a lump of rock that passively hosts complex life. Instead, he argues that the Earth and life have co-evolved over geological time and that appreciating these synchronies can help us to steer away from environmental breakdown.
“We, and all living things, are more than inhabitants of Earth – we are Earth, an outgrowth of its structure and an engine of its evolution.” If that sounds like something you might hear in the early hours at a stone circle gathering, don’t worry. Jabr fleshes out his case with the latest science and journalistic flair in what is an impressive debut from the Oregon-based writer.
Becoming Earth is a reappraisal of the Gaia hypothesis, proposed in 1972 by British scientist James Lovelock and co-developed over several decades by US microbiologist Lynn Margulis. This idea of the Earth functioning as a self-regulating living organism has faced scepticism over the years, with many feeling it is untestable and strays into the realm of pseudoscience. In a 1988 essay, the biologist and science historian Stephen Jay Gould called Gaia “a metaphor, not a mechanism”.
Though undoubtedly a prodigious intellect, Lovelock was not your typical academic. He worked independently across fields including medical research, inventing the electron capture detector and consulting for petrochemical giant Shell. Add that to Gaia’s hippyish name – evoking the Greek goddess of Earth – and it’s easy to see why the theory faced a branding issue within mainstream science. Lovelock himself acknowledged errors in the theory’s original wording, which implied the biosphere acted with intention.
Though he makes due reference to the Gaia hypothesis, Jabr’s book is a standalone work, and in revisiting the concept in 2024, he has one significant advantage: we now have a tonne of scientific evidence for tight coupling between life and the environment. For instance, microbiologists increasingly speak of soil as a living organism because of the interconnections between micro-organisms and soil’s structure and function. Physicists meanwhile happily speak of “complex systems” where collective behaviour emerges from interactions of numerous components – climate being the obvious example.
To simplify this sprawling topic, Becoming Earth is structured into three parts: Rock, Water and Air. Accessible scientific discussions are interspersed with reportage, based on Jabr’s visits to various research sites. We kick off at the Sanford Underground Research Facility in South Dakota (also home to neutrino experiments) as Jabr descends 1500 m in search of iron-loving microbes. We learn that perhaps 90% of all microbes live deep underground and they transform Earth wherever they appear, carving vast caverns and regulating the global cycling of carbon and nutrients. Crucially, microbes also created the conditions for complex life by oxygenating the atmosphere.
In the Air section, Jabr scales the 1500 narrow steps of the Amazon Tall Tower Observatory to observe the forest making its own rain. Plants are constantly releasing water into the air through their leaves, and this drives more than half of the 20 billion tonnes of rain that fall on its canopy daily – more than the volume discharged by the Amazon river. “It’s not that Earth is a single living organism in exactly the same way as a bird or bacterium, or even a superorganism akin to an ant colony,” explains Jabr. “Rather that the planet is the largest known living system – the confluence of all other ecosystems – with structures, rhythms, and self-regulating processes that resemble those of its smaller constituent life forms. Life rhymes at every scale.”
When it comes to life’s capacity to alter its environment, not all creatures are born equal. Humans are having a supersized influence on these planetary rhythms despite appearing only recently in geological history. Jabr suggests the Anthropocene – a proposed epoch defined by humanity’s influence on the planet – may have started between 50,000 and 10,000 years ago. At that time, our ancestors hunted mammoths and other megafauna into extinction, altering grassland habitats that had preserved a relatively cool climate.
Some of the most powerful passages in Becoming Earth concern our relationship with hydrocarbons. “Fossil fuel is essentially an ecosystem in an urn,” writes Jabr to illustrate why coal and oil store such vast amounts of energy. Elsewhere, on a beach in Hawaii an earth scientist and artist scoop up “plastiglomerates” – rocks formed from the eroded remains of plastic pollution fused with natural sediments. Humans have “forged a material that had never existed before”.
A criticism of the original Gaia hypothesis is that its association with a self-regulating planet may have fuelled a type of climate denialism. Science historian Leah Aronowsky argued that Gaia created the conditions for people to deny humans’ unique capacity to tip the system.
Jabr doesn’t see it that way and is deeply concerned that we are hastening the end of a stable period for life on Earth. But he also suggests we have the tools to mitigate the worst impacts, though this will likely require far more than just cutting emissions. He visits the Orca project in Iceland, the world’s first and largest plant for removing carbon from the atmosphere and storing it over long periods – in this case injecting it into basalt deep below the surface.
In an epilogue, we finally meet a 100-year-old James Lovelock at his Dorset home three years before his death in 2022. Still cheerful and articulate, Lovelock thrived on humour and tackling the big questions. As pointed out by Jabr, Lovelock was also prone to contradiction and the occasional alarmist statement. For instance, in his 2006 book The Revenge of Gaia he claimed that, by the end of the century, the few breeding humans that remained would be confined to the Arctic. Fingers crossed he’s wrong on that one!
Perhaps Lovelock was prone to the same phenomenon we see in quantum physics where even the sharpest scientific minds can end up shrouding the research in hype and woo. Once you strip away the new-ageyness, we may find that the idea of Gaia was never as “out there” as the cultural noise that surrounded it. Thanks to Jabr’s earnest approach, the living Earth concept is alive and kicking in 2024.
The US condensed-matter physicist Leon Cooper, who shared the 1972 Nobel Prize for Physics, has died at the age of 94. In the late 1950s Cooper, together with his colleagues Robert Schrieffer and John Bardeen, developed a theory of superconductivity that could explain why certain materials lose all electrical resistance at low temperatures.
Born on 28 February 1930 in New York City, US, Cooper graduated from the Bronx High School of Science in 1947 before earning a degree from Columbia University in 1951 and a PhD there in 1954.
Cooper then spent time at the Institute for Advanced Study in Princeton, the University of Illinois and Ohio State University before heading to Brown University in 1958 where he remained for the rest of his career.
It was in Illinois that Cooper began to work on a theoretical explanation of superconductivity – a phenomenon that was first seen by the Dutch physicist Heike Kamerlingh Onnes when he discovered in 1911 that the electrical resistance of mercury suddenly disappeared beneath a temperature of 4.2 K.
However, there was no microscopic theory of superconductivity until 1957, when Bardeen, Cooper and Schrieffer – all based at Illinois – came up with their “BCS” theory. This described how an electron can deform the atomic lattice through which it moves, thereby pairing with another electron to form what became known as a Cooper pair. Being paired allows all the electrons in a superconductor to move as a single cohort, known as a condensate, prevailing over thermal fluctuations that could cause the pairs to break.
Bardeen, Cooper and Schrieffer published their BCS theory in April 1957 (Phys. Rev. 106 162), which was then followed in December by a full-length paper (Phys. Rev. 108 1175). Cooper was in his late 20s when he made the breakthrough.
Not only did the BCS theory of superconductivity successfully account for the behaviour of “conventional” low-temperature superconductors such as mercury and tin but it also had application in particle physics by contributing to the notion of spontaneous symmetry breaking.
For their work the trio won the 1972 Nobel Prize for Physics “for their jointly developed theory of superconductivity, usually called the BCS-theory”.
From BCS to BCM
While Cooper continued to work in superconductivity, later in his career he turned to neuroscience. In 1973 he founded and directed Brown’s Institute for Brain and Neural Systems, which studied animal nervous systems and the human brain. In the 1980s he came up with a physical theory of learning in the visual cortex dubbed the “BCM” theory, named after Cooper and his colleagues Elie Bienenstock and Paul Munro.
He also founded the technology firm Nestor along with Charles Elbaum, which aimed to find commercial and military applications for artificial neural networks.
As well as the Nobel prize, Cooper was awarded the Comstock Prize from the US National Academy of Sciences in 1968 and the Descartes Medal from the Academie de Paris in 1977.
He also wrote numerous books including An Introduction to the Meaning and Structure of Physics in 1968 and Physics: Structure and Meaning in 1992. More recently, he published Science and Human Experience in 2014.
“Leon’s intellectual curiosity knew no boundaries,” notes Peter Bilderback, who worked with Cooper at Brown. “He was comfortable conversing on any subject, including art, which he loved greatly. He often compared the construction of physics to the building of a great cathedral, both beautiful human achievements accomplished by many hands over many years and perhaps never to be fully finished.”
When British physicist James Chadwick discovered the neutron in 1932, he supposedly said, “I am afraid neutrons will not be of any use to anyone.” The UK’s neutron user facility – the ISIS Neutron and Muon Source, now operated by the Science and Technology Facilities Council (STFC) – was opened 40 years ago. In that time, the facility has welcomed more than 60,000 scientists from around the world. ISIS supports a global community of neutron-scattering researchers, and the work that has been done there shows that Chadwick couldn’t have been more wrong.
By the time of Chadwick’s discovery, scientists knew that the atom was mostly empty space, and that it contained electrons and protons. However, there were some observations they couldn’t explain, such as the disparity between the mass and charge numbers of the helium nucleus.
The neutron was the missing piece of this puzzle. Chadwick’s work was fundamental to our understanding of the atom, but it also set the stage for a powerful new field of condensed-matter physics. Like other subatomic particles, neutrons have wave-like properties, and their wavelengths are comparable to the spacings between atoms. This means that when neutrons scatter off materials, they create characteristic interference patterns. In addition, because they are electrically neutral, neutrons can probe deeper into materials than X-rays or electrons.
Today, facilities like ISIS use neutron scattering to probe everything from spacecraft components and solar cells to the effects of cosmic-ray neutrons on electronics, helping to ensure the resilience of technology for driverless cars and aircraft.
The origins of neutron scattering
On 2 December 1942 a group of scientists at the University of Chicago in the US, led by Enrico Fermi, watched the world’s first self-sustaining nuclear chain reaction, an event that would reshape world history and usher in a new era of atomic science.
One of those in attendance was Ernest O Wollan, a physicist with a background in X-ray scattering. The neutron’s wave-like properties had been established in 1936 and Wollan recognized that he could use neutrons produced by a nuclear reactor like the one in Chicago to determine the positions of atoms in a crystal. Wollan later moved to Oak Ridge National Laboratory (ORNL) in Tennessee, where a second reactor was being built, and at the end of 1944 his team was able to observe Bragg diffraction of neutrons in sodium chloride and gypsum salts.
A few years later Wollan was joined by Clifford Shull, with whom he refined the technique and constructed the world’s first purpose-built neutron-scattering instrument. Shull won the Nobel Prize for Physics in 1994 for his work (with Bertram Brockhouse, who had pioneered the use of neutron scattering to measure excitations), but Wollan was ineligible because he had died 10 years previously.
The early reactors used for neutron scattering were multipurpose; the first to be designed specifically to produce neutron beams was the High Flux Beam Reactor (HFBR), which started up at Brookhaven National Laboratory in the US in 1965. This was closely followed in 1972 by the Institut Laue–Langevin (ILL) in France, a facility that is still running today.
Probing deep within The first target station at the ISIS Neutron and Muon Source. (Courtesy: STFC)
Rather than using a reactor, ISIS is based on an alternative technology called “spallation” that first emerged in the 1970s. In spallation, neutrons are produced by accelerating protons at a heavy metal target. The protons collide like bullets with the nuclei in the target, which absorb them and then discharge high-energy particles, including neutrons.
The first such sources specifically designed for neutron scattering were the KENS source at the Institute of Materials Structure Science (IMSS) in Japan, which started operation in 1980, and the Intense Pulsed Neutron Source at the Argonne National Laboratory in the US, which started operation in 1981.
The pioneering development work on these sources and in other institutions was of great benefit during the design and development of what was to become ISIS. The facility was approved in 1977 and the first beam was produced on 16 December 1984. In October 1985 the source was formally named ISIS and opened by then UK prime minister Margaret Thatcher. Today around 20 reactor and spallation neutron sources are operational around the world and one – the European Spallation Source (ESS) – is under construction in Sweden.
The name ISIS was inspired by both the river that flows through Oxford and the Egyptian goddess of reincarnation. The relevance of the latter relates to the fact that ISIS was built on the site of the NIMROD proton synchrotron that operated between 1964 and 1978, reusing much of its infrastructure and components.
Producing neutrons and muons
At the heart of ISIS is an 800 MeV accelerator that produces intense pulses of protons 50 times a second. These pulses are then fired at two tungsten targets. Spallation of the tungsten by the proton beam produces neutrons that fly off in all directions.
Before the neutrons can be used, they must be slowed down, which is achieved by passing them through a material called a “moderator”. ISIS uses various moderators which operate at different temperatures, producing neutrons with varying wavelengths. This enables scientists to probe materials on length scales from fractions of an angstrom to hundreds of nanometres.
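As a rough guide, the wavelength of a neutron thermalized in a moderator at temperature T follows from the de Broglie relation, λ = h/√(2mₙk_BT). The short calculation below, using illustrative moderator temperatures, shows why moderated neutrons have wavelengths of the order of an angstrom to several angstroms, comparable to interatomic spacings.

```python
# Rough de Broglie wavelength of a neutron thermalized at temperature T:
# lambda = h / sqrt(2 * m_n * k_B * T). Moderator temperatures are illustrative.
from scipy.constants import h, k, m_n
from math import sqrt

for label, T in [("cold (liquid-H2-like)", 20), ("thermal (water-like)", 300), ("hot", 2000)]:
    wavelength = h / sqrt(2 * m_n * k * T)
    print(f"{label:>22s} moderator, T = {T:5.0f} K: lambda ~ {wavelength * 1e10:.2f} angstrom")
```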
Arrayed around the two neutron sources and the moderators are more than 25 beamlines that direct neutrons to one of ISIS’s specialized experiments. Many of these perform neutron diffraction, which is used to study the structure of crystalline and amorphous solids, as well as liquids.
When neutrons scatter, they also transfer a small amount of energy to the material and can excite vibrational modes in atoms and molecules. ISIS has seven beamlines dedicated to measuring this energy transfer, a technique called neutron spectroscopy. This can tell us about atomic and molecular bonds and is also used to study properties like specific heat and resistivity, as well as magnetic interactions.
Neutrons have spin, so they are also sensitive to the magnetic properties of materials. Neutron diffraction is used to investigate magnetic ordering such as ferrimagnetism, whereas spectroscopy is suited to the study of collective magnetic excitations.
Neutrons can sense short- and long-range magnetic ordering, but to understand localized effects with small magnetic moments, an alternative probe is needed. Since 1987, ISIS has also produced muon beams, which are used for this purpose, as well as other applications. In front of one of the neutron targets is a carbon foil; when the proton beam passes through it, it produces pions, which rapidly decay into muons. Rather than scattering, the muons become implanted in the material, where they decay into positrons. By analysing the decay positrons, scientists can study very weak and fluctuating magnetic fields in materials that may be inaccessible with neutrons. For this reason, muon and neutron techniques are often used together.
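A toy version of the muon-spin analysis illustrates the idea: in a transverse field the positron asymmetry oscillates at the muon’s precession frequency, so fitting A(t) = A₀e^(−λt)cos(γ_μBt + φ) to the data recovers the local field B. The numbers below are invented, and the fit is a bare-bones stand-in for a real muon-spin-rotation analysis.

```python
# Toy muon-spin-rotation fit: recover the local field B from a simulated
# precession signal A(t) = A0 * exp(-lam*t) * cos(gamma_mu*B*t + phi).
# All numbers are illustrative.
import numpy as np
from scipy.optimize import curve_fit

GAMMA_MU = 2 * np.pi * 135.5e6  # muon gyromagnetic ratio (rad s^-1 T^-1)

def asymmetry(t, A0, lam, B, phi):
    return A0 * np.exp(-lam * t) * np.cos(GAMMA_MU * B * t + phi)

rng = np.random.default_rng(1)
t = np.linspace(0, 10e-6, 400)                      # 10 microseconds of data
true = dict(A0=0.25, lam=0.2e6, B=5e-3, phi=0.1)    # assumed 5 mT local field
data = asymmetry(t, **true) + rng.normal(0, 0.01, t.size)

# Initial guess taken near the expected field so the oscillatory fit converges.
popt, _ = curve_fit(asymmetry, t, data, p0=[0.2, 0.1e6, 5e-3, 0.0])
print(f"fitted local field: {popt[2] * 1e3:.2f} mT")
```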
“The ISIS instrument suite now provides capability across a broad range of neutron and muon science,” says Roger Eccleston, ISIS director. “We’re constantly engaging our user community, providing feedback and consulting them on plans to develop ISIS. This continues as we begin our ‘Endeavour’ programme: the construction of four new instruments and five significant upgrades to deliver even more performance enhancements.
“ISIS has been a part of my career since I arrived as a placement student shortly before the inauguration. Although I have worked elsewhere, ISIS has always been part of my working life. I have seen many important scientific and technical developments and innovations that kept me inspired to keep coming back.”
Over the last 40 years, the samples studied at ISIS have become smaller and more complex, and measurements have become quicker. The kinetics of chemical reactions can be imaged in real-time, and extreme temperatures and pressures can be achieved. Early work from ISIS focused on physics and chemistry questions such as the properties of high-temperature superconductors, the structure of chemicals and the phase behaviour of water. More recent work includes “seeing” catalysis in real-time, studying biological systems such as bacterial membranes, and enhancing the reliability of circuits for driverless cars.
Understanding the building blocks of life
Unlike X-rays and electrons, neutrons scatter strongly from light nuclei including hydrogen, which means they can be used to study water and organic materials.
Water is the most ubiquitous liquid on the planet, but its molecular structure gives it complex chemical and physical properties. Significant work on the phase behaviour of water was performed at ISIS in the early 2000s by scientists from the UK and Italy, who showed that liquid water under pressure transitions between two distinct structures, one low density and one high density (Phys. Rev. Lett. 84 2881).
Skin deep An illustration of the model outer membrane of the bacterium used in ISIS experiments on the effect of “last resort” antibiotics. On the left, the intact bilayer model is shown below body temperature with the antibiotic (red) unable to penetrate. On the right is shown the effect of raising the temperature: the thermal motion of the molecules in the outer membrane allows the antibiotic to insert itself and disrupt the membrane’s structure. (Courtesy: Luke Clifton/STFC)
Water is the molecule of life, and as the technical capabilities of ISIS have advanced, it has become possible to study it inside cells, where it underpins vital functions from protein folding to chemical reactions. In 2023 a team from Portugal used the facilities at ISIS to investigate whether the water inside cells can be used as a biomarker for cancer.
Because it’s confined at the nanoscale, water in a cell will behave quite differently to bulk water. At these scales, water’s properties are highly sensitive to its environment, which changes when a cell becomes cancerous. The team showed that this can be measured with neutron spectroscopy, manifesting as an increased flexibility in the cancerous cells (Scientific Reports 13 21079).
If light is incident on an interface between two materials with different refractive indices it may, if the angle is just right, be perfectly reflected. A similar effect is exhibited by neutrons that are directed at the surface of a material, and neutron reflectometry instruments at ISIS use this to measure the thickness, surface roughness, and chemical composition of thin films.
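The measurement hinges on total reflection below a critical angle set by the material’s neutron scattering length density ρ, with sin θ_c ≈ λ√(ρ/π). A quick estimate using commonly quoted scattering length densities is sketched below; the wavelength is an illustrative choice rather than a value tied to a specific ISIS instrument.

```python
# Critical angle for total neutron reflection: sin(theta_c) = lambda * sqrt(rho / pi),
# where rho is the scattering length density (SLD). Wavelength is an assumed example.
import math

wavelength_angstrom = 6.0
slds = {                 # SLDs in angstrom^-2 (commonly quoted values)
    "silicon": 2.07e-6,
    "nickel": 9.41e-6,
    "D2O": 6.37e-6,
}

for material, rho in slds.items():
    theta_c = math.degrees(math.asin(wavelength_angstrom * math.sqrt(rho / math.pi)))
    print(f"{material:>8s}: critical angle ~ {theta_c:.2f} deg at {wavelength_angstrom} angstrom")
```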
One recent application of this technique at ISIS was a 2018 project in which a team from the UK studied the effect of a powerful “last resort” antibiotic on the outer membrane of a bacterium. This antibiotic is only effective at body temperature, and the researchers showed that this is because the thermal motion of molecules in the outer membrane makes it easier for the antibiotic to slip in and disrupt the bacterium’s structure (PNAS 115 E7587).
Exploring the quantum world
A year after ISIS became operational, physicists Georg Bednorz and Karl Alexander Müller, working at the IBM research laboratory in Switzerland, discovered superconductivity in a material at 35 K, 12 K higher than any other known superconductor at the time. This discovery would later win them the 1987 Nobel Prize for Physics.
High-temperature superconductivity was one of the most significant discoveries of the 1980s, and it was a focus of early work at ISIS. Another landmark came in 1987, when yttrium barium copper oxide (YBCO) was found to exhibit superconductivity above 77 K, meaning that instead of liquid helium, it can be cooled to a superconducting state with the much cheaper liquid nitrogen. The structure of this material was first fully characterized at ISIS by a team from the US and UK (Nature 327 310).
In a spin In a quantum spin liquid (QSL) magnetic spins are unable to form an ordered ferromagnetic or antiferromagnetic state. In theory, this means that the spins continue to change direction even close to absolute zero, creating a highly entangled ensemble. (Courtesy: Francis Pratt/STFC)
Another example of the quantum systems studied at ISIS is quantum spin liquids (QSLs). Most magnetic materials form an ordered phase like a ferromagnet when cooled, but a QSL is an interacting system of electron spins that is, in theory, disordered even when cooled to absolute zero.
QSLs are of great interest today because they are theorized to exhibit long-range entanglement, which could be applied to quantum computing and communications. QSLs have proven challenging to identify experimentally, but evidence from neutron scattering and muon spectroscopy at ISIS has characterized spin-liquid states in a number of materials (Nature 471 612).
Developing sustainable solutions and new materials
Over the years, experimental set-ups at ISIS have evolved to handle increasingly extreme and complex conditions. Almost 20 years ago, high-pressure neutron experiments performed by a UK team at ISIS showed that surfactants could be designed to enhance the solubility of liquid carbon dioxide, potentially unlocking a vast array of applications in the food and pharmaceutical industries as an environmentally friendly alternative to traditional petrochemical solvents (Langmuir 22 9832).
Today, further developments in sample environment, detector technology and data analysis software enable us to observe chemical processes in real time, with materials kept under conditions that closely mimic their actual use. Recently, neutron imaging was used by a team from the UK and Germany to monitor a catalyst used widely in the chemical industry to improve the efficiency of reactions (Chem. Commun. 59 12767). Few methods can observe what is happening during a reaction, but neutron imaging was able to visualize it in real time.
Another discovery made just after ISIS became operational was the chemical buckminsterfullerene or “buckyball”. Buckyballs are a molecular form of carbon that consists of 60 carbon atoms arranged in a spherical structure, resembling a football. The scientists who first synthesized this molecule were awarded the Nobel Prize for Chemistry in 1996, and in the years following this discovery, researchers have studied this form of carbon using a range of techniques, including neutron scattering.
Ensembles of buckyballs can form a crystalline solid, and in the early 1990s studies of crystalline buckminsterfullerene at ISIS revealed that, while adjacent molecules are oriented randomly at room temperature, they transition to an ordered structure below 249 K to minimize their energy (Nature 353 147).
Bucking the trend Buckminsterfullerene, or “buckyballs”, is a spherical molecular form of carbon. Its synthesis in 1985 led to the discovery of a family of mesh-like carbon structures called fullerenes, which include carbon nanotubes. (Courtesy: Mohammed Aouane and Stephanie Richardson, STFC)
Four decades on, fullerenes (the family of materials that includes buckyballs) continue to present many research opportunities. Through a process known as “molecular surgery”, synthetic chemists can create an opening in the fullerene cage, enabling them to insert an atom, ion or molecular cluster. Neutron-scattering studies at ISIS were recently used to characterize helium atoms trapped inside buckyballs (Phys. Chem. Chem. Phys. 25 20295). These endofullerenes are helping to improve our understanding of the quantum mechanics associated with confined particles and have potential applications ranging from photovoltaics to drug delivery.
Just as they shed light on materials of the future, neutrons and muons also offer a unique glimpse into the materials, methods and cultures of the past. At ISIS, the penetrative and non-destructive nature of neutrons and muons has been used to study many invaluable cultural heritage objects from ancient Egyptian lizard coffins (Sci. Rep. 13 4582) to Samurai helmets (Archaeol. Anthropol. Sci. 13 96), deepening our understanding of the past without damaging any of these precious artifacts.
Looking within, and to the future
If you want to understand how things structurally fail, you must get right inside and look, and the neutron’s ability to penetrate deep into materials allows engineers to do just that. ISIS’s Engin-X beamline measures the strain within a crystalline material by measuring the spacing between atomic lattice planes. This has been used by sectors including aerospace, oil and gas exploration, automotive, and renewable power.
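In practice the strain measurement comes down to comparing a measured lattice spacing d with a stress-free reference d₀, via ε = (d − d₀)/d₀, with d obtained from Bragg’s law. The minimal sketch below uses a fixed-wavelength geometry for simplicity (a pulsed source such as ISIS determines d from the neutron time of flight instead); all numbers are illustrative.

```python
# Strain from a shift in a Bragg-diffraction peak: lambda = 2*d*sin(theta),
# strain = (d - d0)/d0. Wavelength and peak positions below are illustrative only.
import math

wavelength = 1.6e-10                  # neutron wavelength (m), assumed
two_theta_ref = math.radians(89.80)   # reference (stress-free) peak position
two_theta_meas = math.radians(89.75)  # measured peak position under load

d_ref = wavelength / (2 * math.sin(two_theta_ref / 2))
d_meas = wavelength / (2 * math.sin(two_theta_meas / 2))

strain = (d_meas - d_ref) / d_ref
print(f"lattice strain ~ {strain * 1e6:.0f} microstrain")
```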
Recently, ISIS has also been attracting electronics companies looking to use the facility to irradiate their chips with neutrons. This can mimic the high-energy neutrons generated in the atmosphere by cosmic rays, which can cause reliability problems in electronics. So, when you next fly, drive or surf the web, ISIS may just have had a hand in it.
Short circuit Electronic circuit boards being tested on ChipIr, the ISIS microelectronics irradiation instrument. Electronic components, particularly those in aircraft, are constantly bombarded by neutrons from the atmosphere, which can cause them to fail. The facilities at ISIS test chip reliability by mimicking these conditions. (Courtesy: STFC)
With its many discoveries and developments, ISIS has succeeded in proving Chadwick wrong over the past 40 years, and the facility is now setting its sights on the upcoming decades of neutron-scattering research. “While predicting the future of scientific research is challenging, we can anchor our activities around a couple of trends,” explains ISIS associate director Sean Langridge. “Our community will continue to pursue fundamental research for its intrinsic societal value by discovering, synthesizing and processing new materials. Furthermore, we will use the capabilities of neutrons to engineer and optimize a material’s functionality, for example, to increase operational lifetime and minimize environmental impact.”
The capability requirements will continue to become more complex and, as they do so, the amount of data produced will also increase. The extensive datasets produced at ISIS are well suited for machine-learning techniques. These can identify new phenomena that conventional methods might overlook, leading to the discovery of novel materials.
As ISIS celebrates its 40th anniversary of neutron production, the use of neutrons continues to provide huge value to the physics community. A feasibility and design study for a next-generation neutron and muon source is now under way. Despite four decades of neutrons proving their worth, there is still much to discover over the coming decades of UK neutron and muon science.
Physicists in Germany have used visible light to measure intramolecular distances smaller than 10 nm thanks to an advanced version of an optical fluorescence microscopy technique called MINFLUX. The technique, which has a precision of just 1 angstrom (0.1 nm), could be used to study biological processes such as interactions between proteins and other biomolecules inside cells.
In conventional microscopy, when two features of an object are separated by less than half the wavelength of the light used to image them, they will appear blurry and indistinguishable due to diffraction. Super-resolution microscopy techniques can, however, overcome this so-called Rayleigh limit by exciting individual fluorescent groups (fluorophores) on molecules while leaving neighbouring fluorophores alone, meaning they remain dark.
One such technique, known as nanoscopy with minimal photon fluxes, or MINFLUX, was invented by the physicist Stefan Hell. First reported in 2016 by Hell’s team at the Max Planck Institute (MPI) for Multidisciplinary Sciences in Göttingen, MINFLUX first “switches on” individual molecules, then determines their position by scanning a beam of light with a doughnut-shaped intensity profile across them.
The problem is that at distances of less than 5 to 10 nm, most fluorescent molecules start interacting with each other. This means they cannot emit fluorescence independently – a prerequisite for reliable distance measurements, explains Steffen Sahl, who works with Hell at the MPI.
Non-interacting fluorescent dye molecules
To overcome this problem, the team turned to a new type of fluorescent dye molecule developed in Hell’s research group. These molecules can be switched on in succession using UV light, but they do not interact with each other. This allows the researchers to mark the positions they want to measure with single fluorescent molecules and record their locations independently, to within as little as 0.1 nm, even when the dye molecules are close together.
“The localization process boils down to relating the unknown position of the fluorophore to the known position of the centre of the doughnut beam, where there is minimal or ideally zero excitation light intensity,” explains Hell. “The distance between the two can be inferred from the excitation (and hence the fluorescence) rate of the fluorophore.”
The advantage of MINFLUX, Hell tells Physics World, is that the closer the beam’s intensity minimum gets to the fluorescent molecule, the fewer fluorescence photons are needed to pinpoint the molecule’s location. This takes the burden of producing localizing photons – in effect, tiny lighthouses signalling “Here I am!” – away from the relatively weakly-emitting molecule and shifts it onto the laser beam, which has photons to spare. The overall effect is to reduce the required number of detected photons “typically by a factor of 100”, Hell says, adding that this translates into a 10-fold increase in localization precision compared to traditional camera-based techniques.
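A one-dimensional toy model makes the principle concrete: near the doughnut’s central zero the excitation intensity grows roughly quadratically with the offset between beam and molecule, so a handful of photon counts recorded at known beam positions is enough to pin down the emitter. The sketch below uses invented numbers and a simple least-squares fit; it illustrates the idea rather than reproducing the estimator used by the Göttingen group.

```python
# 1D toy illustration of the MINFLUX idea: near its central zero the doughnut
# excitation intensity grows ~quadratically with offset, so photon counts taken
# at a few known beam positions localize the emitter. Numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2)

def expected_counts(x_beam, x_mol, brightness=5e4, fwhm=300.0):
    """Mean photon count for a beam minimum at x_beam and a molecule at x_mol (nm)."""
    return brightness * ((x_beam - x_mol) / fwhm) ** 2

x_true = 3.2                                    # unknown molecule position (nm)
beam_positions = np.array([-10.0, 0.0, 10.0])   # probing pattern around the origin (nm)
counts = rng.poisson(expected_counts(beam_positions, x_true))

# Least-squares fit of the quadratic count model over candidate positions.
candidates = np.linspace(-15, 15, 3001)
residuals = [np.sum((counts - expected_counts(beam_positions, x)) ** 2) for x in candidates]
x_hat = candidates[int(np.argmin(residuals))]
print(f"true position {x_true:.2f} nm, estimate {x_hat:.2f} nm")
```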
“A real alternative” to existing measurement methods
The researchers demonstrated their technique by precisely determining distances of 1–10 nanometres in polypeptides and proteins. To prove that they were indeed measuring distances smaller than the size of these molecules, they used molecules of a different substance, polyproline, as “rulers” of various lengths.
Polyproline is relatively stiff and was used for a similar purpose in early demonstrations of a method called Förster resonance energy transfer (FRET) that is now widely used in biophysics and molecular biology. However, FRET suffers from fundamental limitations on its accuracy, and Sahl thinks the “arguably surprising” 0.1 nm precision of MINFLUX makes it “a real alternative” for monitoring sub-10-nm distances.
While it had long been clear that MINFLUX should, in principle, be able to resolve distances at the < 5 nm scale and measure them to sub-nm precision, Hell notes that it had not been demonstrated at this scale until now. “Showing that the technique can do this is a milestone in its development and demonstration,” he says. “It is exciting to see that we can resolve fluorescent molecules that are so close together that they literally touch.” Being able to measure these distances with angstrom precision is, Hell adds, “astounding if you bear in mind that all this is done with freely propagating visible light focused by a conventional lens”.
“I find it particularly fascinating that we have now gone to the very size scale of biological molecules and can quantify distances even within them, gaining access to details of their conformation,” Sahl adds.
The researchers say that one of the key prerequisites for this work (and indeed all super-resolution microscopy developed to date) was the sequential ON/OFF switching of the fluorophores emitting fluorescence. Because any cross-talk between the two molecules would have been problematic, one of the main challenges was to identify fluorescent molecules with truly independent behaviour – that is, ones in which the silent (OFF-state) molecule did not affect its emitting (ON-state) neighbour and vice versa.
Looking forward, Hell says he and his colleagues are now looking to develop and establish MINFLUX as a standard tool for unravelling and quantifying the mechanics of proteins.
Adaptive radiotherapy – in which a patient’s treatment is regularly replanned throughout their course of therapy – can compensate for uncertainties and anatomical changes and improve the accuracy of radiation delivery. Now, a team at the Paul Scherrer Institute’s Center for Proton Therapy has performed the first clinical implementation of an online daily adaptive proton therapy (DAPT) workflow.
Proton therapy benefits from a well-defined Bragg peak range that enables highly targeted dose delivery to a tumour while minimizing dose to nearby healthy tissues. This precision, however, also makes proton delivery extremely sensitive to anatomical changes along the beam path – arising from variations in mucus, air, muscle or fat in the body – or changes in the tumour’s position and shape.
“For cancer patients who are irradiated with protons, even small changes can have significant effects on the optimal radiation dose,” says first author Francesca Albertini in a press statement.
Online plan adaptation, where the patient remains on the couch during the replanning process, could help address the uncertainties arising from anatomical changes. But while this technique is being introduced into photon-based radiotherapy, daily online adaptation has not yet been applied to proton treatments, where it could prove even more valuable.
To address this shortfall, Albertini and colleagues developed a three-phase DAPT workflow, describing the procedure in Physics in Medicine & Biology. In the pre-treatment phase, two independent plans are created from the patient’s planning CT: a “template plan” that acts as a reference for the online optimized plan, and a “fallback plan” that can be selected on any day as a back-up if necessary.
Next, the online phase involves acquiring a daily CT before each irradiation, while the patient is on the treatment couch. For this, the researchers use an in-room CT-on-rails with a low-dose protocol. They then perform a fully automated re-optimization of the treatment plan based on the daily CT image. If the adapted plan meets the required clinical goals and passes an automated quality assurance (QA) procedure, it is used to treat the patient. If not, the fallback plan is delivered instead.
Finally, in the offline phase, the delivered dose in each fraction is recalculated retrospectively from the log files using a Monte Carlo algorithm. This step enables the team to accurately assess the dose delivered to the patient each day.
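As a rough illustration of the online decision logic described above – re-optimize on the daily CT, then deliver the adapted plan only if it meets the clinical goals and passes automated QA, otherwise fall back – here is a minimal Python sketch. All names (Plan, reoptimize, run_automated_qa) and the example goal values are hypothetical placeholders, not PSI’s actual treatment-planning software:

```python
# Minimal sketch of the online DAPT decision step, under the assumptions stated above.
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    dose_metrics: dict  # e.g. {"target_D95_pct": 98.5, "brainstem_Dmax_Gy": 52.0}

def reoptimize(template: Plan, daily_ct) -> Plan:
    # Placeholder for the fully automated re-optimization on the daily CT image.
    return Plan(name="adapted", dose_metrics=dict(template.dose_metrics))

def run_automated_qa(plan: Plan) -> bool:
    # Placeholder for the automated QA (e.g. an independent dose calculation check).
    return True

def passes_clinical_goals(plan: Plan, goals: dict) -> bool:
    """Each goal is (kind, limit): 'max' metrics must stay below the limit,
    'min' metrics must stay above it."""
    for metric, (kind, limit) in goals.items():
        value = plan.dose_metrics.get(metric)
        if value is None:
            return False
        if (kind == "max" and value > limit) or (kind == "min" and value < limit):
            return False
    return True

def online_phase(daily_ct, template: Plan, fallback: Plan, goals: dict) -> Plan:
    """Deliver the adapted plan only if it meets the goals and passes QA."""
    adapted = reoptimize(template, daily_ct)
    if passes_clinical_goals(adapted, goals) and run_automated_qa(adapted):
        return adapted
    return fallback

# Example with made-up numbers:
goals = {"target_D95_pct": ("min", 95.0), "brainstem_Dmax_Gy": ("max", 54.0)}
template = Plan("template", {"target_D95_pct": 98.5, "brainstem_Dmax_Gy": 52.0})
fallback = Plan("fallback", dict(template.dose_metrics))
print(online_phase(daily_ct=None, template=template, fallback=fallback, goals=goals).name)
```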
First clinical implementation
The researchers employed their DAPT protocol in five adults with tumours in rigid body regions, such as the brain or skull base. As this study was designed to demonstrate proof-of-principle and ensure clinical safety, they specified some additional constraints: only the last few consecutive fractions of each patient’s treatment course were delivered using DAPT; the plans used standard field arrangements and safety margins; and the template and fallback plans were kept the same.
“It’s important to note that these criteria are not optimized to fully exploit the potential clinical benefits of our approach,” the researchers write. “As our implementation progresses and matures, we anticipate refining these criteria to maximize the clinical advantages offered by DAPT.”
Across the five patients, the team performed DAPT for 26 treatment fractions. In 22 of these, the online adapted plans were chosen for delivery. In three fractions, the fallback plan was chosen due to a marginal dose increase to a critical structure, while for one fraction, the fallback plan was utilized due to a miscommunication. The team emphasize that all of the adapted plans passed the online QA steps and all agreed well with the log file-based dose calculations.
The daily adapted plans provided target coverage to within 1.1% of the planned dose and, in 92% of fractions, exhibited improved dose metrics to the targets and/or organs-at-risk (OARs). The researchers observed that a non-DAPT delivery (using the fallback plan) could have significantly increased the maximum dose to both the target and OARs. For one patient, this would have increased the dose to their brainstem by up to 10%. In contrast, the DAPT approach ensured that the OAR doses remained within the 5% threshold for all fractions.
Albertini emphasizes, however, that the main aim of this feasibility study was not to demonstrate superior plan quality with DAPT, but rather to establish that it could be implemented safely and efficiently. “The observed decrease in maximum dose to some OARs was a bonus and reinforces the potential benefits of adaptive strategies,” she tells Physics World.
Importantly, the DAPT process took just a few minutes longer than a non-adaptive session, averaging just above 23 min per fraction (including plan adaptation and assessment of clinical goals). Keeping the adaptive treatment within the typical 30-min time slot allocated for a proton therapy fraction is essential to maintain the patient workflow.
To reduce the time requirement, the team automated key workflow components, including the independent dose calculations. “Once registration between the daily and reference images is completed, all subsequent steps are automatically processed in the background, while the users are evaluating the daily structure and plan,” Albertini explains. “Once the plan is approved, all the QA has already been performed and the plan is ready to be delivered.”
Following on from this first-in-patient demonstration, the researchers now plan to use DAPT to deliver full treatments (all fractions), as well as to enable margin reduction and potentially employ more conformal beam angles. “We are currently focused on transitioning our workflow to a commercial treatment planning system and enhancing it to incorporate deformable anatomy considerations,” says Albertini.
Researchers at Tel Aviv University in Israel have developed a method to detect early signs of Parkinson’s disease at the cellular level using skin biopsies. They say that this capability could enable treatment up to 20 years before the appearance of motor symptoms characteristic of advanced Parkinson’s. Such early treatment could reduce neurotoxic protein aggregates in the brain and help prevent the irreversible loss of dopamine-producing neurons.
Parkinson’s disease is the second most common neurodegenerative disease in the world. The World Health Organization reports that its prevalence has doubled in the past 25 years, with more than 8.5 million people affected in 2019. Diagnosis is currently based on the onset of clinical motor symptoms. By the time of diagnosis, however, up to 80% of dopaminergic neurons in the brain may already be dead.
The new method combines a super-resolution microscopy technique, known as direct stochastic optical reconstruction microscopy (dSTORM), with advanced computational analysis to identify and map the aggregation of alpha-synuclein (αSyn), a synaptic protein that regulates transmission in nerve terminals. When it aggregates in brain neurons, αSyn causes neurotoxicity and impacts the central nervous system. In Parkinson’s disease, αSyn begins to aggregate about 15 years before motor symptoms appear.
Importantly, αSyn aggregates also accumulate in the skin. With this in mind, principal investigator Uri Ashery and colleagues developed a method for quantitative assessment of Parkinson’s pathology using skin biopsies from the upper back. The technique, which enables detailed characterization of nano-sized αSyn aggregates, will hopefully facilitate the development of a new molecular biomarker for Parkinson’s disease.
“We hypothesized that these αSyn aggregates are essential for understanding αSyn pathology in Parkinson’s disease,” the researchers write. “We created a novel platform that revealed a unique fingerprint of αSyn aggregates. The analysis detected a larger number of clusters, clusters with larger radii, and sparser clusters containing a smaller number of localizations in Parkinson’s disease patients relative to what was seen with healthy control subjects.”
The researchers used dSTORM to analyse skin biopsies from seven patients with Parkinson’s disease and seven healthy controls, characterizing nanoscale αSyn based on quantitative parameters such as aggregate size, shape, distribution, density and composition.
Creating a super-resolution image A skin biopsy is taken from patients and healthy control subjects, imaged with standard microscopy and then with super-resolution microscopy. Each circle in the image on the far right represents a single protein: red, alpha-synuclein; green, neuronal protein. (Courtesy: CC BY/Front. Mol. Neurosci. 10.3389/fnmol.2024.1431549)
Their analysis revealed a significant decrease in the ratio of neuronal marker molecules to phosphorylated αSyn molecules (the pathological form of αSyn) in biopsies from Parkinson’s disease patients, suggesting the existence of damaged nerve cells in fibres enriched with phosphorylated αSyn.
The researchers determined that phosphorylated αSyn is organized into dense aggregates of approximately 75 nm in size. They also found that patients with Parkinson’s disease had a higher number of αSyn aggregates than the healthy controls, as well as larger αSyn clusters (75 nm compared with 69 nm).
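The paper’s exact analysis pipeline is not described here, but per-cluster metrics of the kind quoted above (number of aggregates, localizations per cluster, cluster radius) are typically obtained by clustering the raw dSTORM localization coordinates. The sketch below uses DBSCAN purely as a familiar stand-in; the algorithm choice and the eps/min_samples values are assumptions for illustration, not the study’s actual parameters:

```python
# Hedged sketch of cluster analysis on dSTORM localization data.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_metrics(localizations: np.ndarray, eps_nm: float = 30.0, min_samples: int = 10):
    """Group x,y localizations (in nm) into clusters and report, per cluster,
    the number of localizations and an effective radius (RMS distance to centroid)."""
    labels = DBSCAN(eps=eps_nm, min_samples=min_samples).fit_predict(localizations)
    metrics = []
    for lab in set(labels):
        if lab == -1:          # noise points, not assigned to any aggregate
            continue
        pts = localizations[labels == lab]
        centroid = pts.mean(axis=0)
        radius = np.sqrt(((pts - centroid) ** 2).sum(axis=1).mean())
        metrics.append({"n_localizations": len(pts), "radius_nm": float(radius)})
    return metrics

# Example with synthetic data: two tight clumps of localizations plus background noise.
rng = np.random.default_rng(0)
locs = np.vstack([
    rng.normal([0, 0], 25, size=(200, 2)),
    rng.normal([500, 500], 25, size=(150, 2)),
    rng.uniform(-200, 700, size=(50, 2)),
])
print(cluster_metrics(locs))
```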
“Parkinson’s disease diagnosis based on quantitative parameters represents an unmet need that offers a route to revolutionize the way Parkinson’s disease and potentially other neurodegenerative diseases are diagnosed and treated,” Ashery and colleagues conclude.
In the next phase of this work, supported by the Michael J. Fox Foundation for Parkinson’s Research, the researchers will increase the number of subjects to 90 to identify differences between patients with Parkinson’s disease and healthy subjects.
“We intend to pinpoint the exact juncture at which a normal quantity of proteins turns into a pathological aggregate,” says lead author Ofir Sade in a press statement. “In addition, we will collaborate with computer science researchers to develop a machine learning algorithm that will identify correlations between results of motor and cognitive tests and our findings under the microscope. Using this algorithm, we will be able to predict future development and severity of various pathologies.”
“The machine learning algorithm is intended to spot young individuals at risk for Parkinson’s,” Ashery adds. “Our main target population are relatives of Parkinson’s patients who carry mutations that increase the risk for the disease.”