
Make or break: building soft materials with DNA

Call me naive, but until a few years ago I had never realized you can actually buy DNA. As a physicist, I’d been familiar with DNA as the “molecule of life” – something that carries genetic information and allows complex organisms, such as you and me, to be created. But I was surprised to find that biotech firms purify DNA from viruses and will ship concentrated solutions in the post. In fact, you can just go online and order DNA, which is exactly what I did. Only there was another surprise in store.

When the DNA solution arrived at my lab in Edinburgh, it came in a tube with about half a milligram of DNA per cubic centimetre of water. Keen to experiment on it, I tried to pipette some of the solution out, but it didn’t run freely into my plastic tube. Instead, it was all gloopy and resisted the suction of my pipette. I rushed over to a colleague in my lab, eagerly announcing my amazing “discovery”. They just looked at me like I was an idiot. Of course, solutions of DNA are gloopy.

I should have known better. It’s easy to idealize DNA as some kind of magic material, but it’s essentially just a long-chain double-helical polymer consisting of four different types of monomers – the nucleotides A, T, C and G, which stack together into base pairs. And like all polymers at high concentrations, the DNA chains can get entangled. In fact, they get so tied up that a single human cell can have up to 2 m of DNA crammed into an object just 10 μm in size. Scaled up, it’s like storing 20 km of hair-thin wire in a box no bigger than your mobile phone.
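The scale-up quoted above is easy to verify with a back-of-envelope calculation. The short sketch below (my own illustration, assuming a roughly 10 cm "phone-sized" box) checks that enlarging a 10 μm cell to that size stretches its 2 m of DNA to about 20 km:

```python
# Back-of-envelope check of the DNA packing scale-up.
dna_length = 2.0            # metres of DNA in a single human cell
cell_size = 10e-6           # metres (a cell roughly 10 um across)
box_size = 0.1              # metres (assumed ~10 cm phone-sized box)

scale = box_size / cell_size            # linear magnification factor
scaled_length_km = dna_length * scale / 1000

print(scaled_length_km)     # -> 20.0 (km of hair-thin wire in the box)
```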

But if DNA molecules stayed horribly entangled, then nature would have a big problem. In particular, it would be impossible for chromosomes – long pieces of DNA containing millions of base pairs – to be constantly read and copied. And if that didn’t happen, then cells would be unable to make proteins and multiply. Thanks to the wonders of evolution, nature has got round this problem by “engineering” special proteins that can change DNA’s shape, or “topology”, to get rid of the entanglements.

Left to its own devices, a typical human chromosome would take about 500 years to undo or “relax” its entanglements. But these clever proteins can speed up the process by, for example, allowing a DNA molecule to temporarily split up and then reform. These proteins are vital to the operation of biological cells – that’s why the DNA I bought online was so gloopy: it was a pure form that had no proteins to undo the entanglements.

Unfortunately, there can be an over-abundance of these proteins in certain cancer cells, which therefore multiply incredibly fast as the proteins remove the entanglements so efficiently. Indeed, some of the first and most effective anti-cancer drugs were those that could stop so-called “type 2 topoisomerase” proteins from getting rid of entanglements. These drugs have some nasty side effects as topoisomerase proteins also play a vital role in ordinary, healthy cells.

By combining our knowledge of polymer physics and molecular biology, we can exploit DNA’s soap-like behaviour to craft DNA-based soft materials that change shape over time

But would you believe me if I said that DNA’s ability to morph its architecture means that it behaves a bit like soap? The link between DNA and soap is certainly surprising. But by combining our knowledge of polymer physics and molecular biology, we can exploit this soapy feature to craft DNA-based soft materials that change topology over time. And by tweaking their topology, we can control their physical properties in unusual ways.

A wormy tale

To understand the link between DNA and soap, I should point out that soaps and shampoos consist of “amphiphilic” molecules, one part of which loves water and another part that hates it. These molecules don’t exist in isolation but group together to form larger structures, known as “micelles”. At low concentrations, they’re usually spherical, but at higher concentrations, the molecules can gang together to form long, worm-like micelles, with the water-hating parts of the molecules facing inside (figure 1a).

Ranging in size from nanometres to microns, these elongated, multi-molecule objects do strange things at high concentrations. In particular, just like DNA, they get entangled, increasing the fluid’s friction and making it harder to deform. In fact, the entanglements between worm-like micelles are what give your soap, shampoo, face cream or hair gel that pleasant, smooth hand-feel, which is something to think about next time you’re taking a bath or shower.

figure 1

Just like polymers, it turns out that worm-like micelles can also disentangle themselves by sliding apart (figure 1b). But they have other options too. That’s because worm-like micelles are continuously morphing: they break up, fuse or reconnect with their neighbours – no micelle is the same at any two points in time (figure 1c). This ever-changing feature wonderfully embodies the Greek philosopher Heraclitus’s concept of “panta rhei”, or “everything flows” (from which the term “rheology” for the study of flow is derived). Indeed, micelles almost seem like quasi-living objects, thanks to their ability to morph their architecture and, sometimes, even their topology.

This interplay between dynamic architecture and conventional relaxation can lead to some highly unusual flow properties, such as the viscosity of soaps dropping drastically when sheared. Indeed, this sudden loss of stickiness explains why hand lotions, shampoo and creams, which are viscous when left alone, can be easily squeezed out of a tube with a narrow nozzle.

Breaking and reconnecting

So just like worm-like micelles in soaps, DNA molecules are constantly getting broken up and glued back together again with a new topology (figure 2). But there’s one big difference: the DNA needs to preserve its genetic sequence otherwise cells might die or diseases could be triggered. In soap, there’s no precise sequence of monomers in micelles so they can be put back together in any order. Nature, however, requires proteins to perform topological operations on DNA while maintaining the original information (the DNA sequence) intact.

This has a fundamental impact on how topological operations are performed on DNA. Unlike worm-like micelles – where the operations can occur at random anywhere along the micelle and at any time – the topological changes on DNA have to happen at the right place and the right time (they have to be “regulated” as biologists love to say). It’s a mind-blowing concept – and one that I’ll be spending the next five years trying to artificially reproduce, to create a new generation of materials.

figure 2

To break DNA, for example, you need “restriction enzymes”, which cut the chain only where a certain DNA sequence is recognized. Topoisomerase proteins, meanwhile, have to be precisely positioned at certain locations on chromosomes where entanglements and mechanical stress often accumulate. Similarly, when two pieces of DNA reconnect and recombine – for example when parental genetic material is shuffled in gametes (the precursor of egg and sperm cells) – the process is tightly regulated in space and time to avoid aberrant chromosomes in cells. It’s almost as if DNA (thanks to proteins) is a smart worm-like micelle.

While all this may sound rather esoteric, it turns out that when the US microbiologist Hamilton Smith discovered the first restriction enzyme in the 1970s, he didn’t use any fancy biological techniques – but simply carried out accurate viscosity measurements. Having extracted DNA from a virus and mixed it with the insides of a bacterium, he saw that the viscosity of the DNA solution fell with time; the runnier liquid meant that the DNA must have been cut by an enzyme in the bacterium. Smith won the 1978 Nobel Prize for Physiology or Medicine for his efforts and it’s humbling to think it was all done with a simple viscosity experiment that has its roots in physics.

DNA and nanotechnology

I’m definitely not the only person to see the potential of DNA as an advanced polymer, rather than just as genetic material. Over the last two decades, researchers have developed lots of new, DNA-based materials, such as hydrogels and nano-scaffolds, that could, for example, grow bones, tissues, skin and cells, using the unique properties of DNA to encode information. Recently, there’s also been lots of work on “DNA origami”, in which the information along the DNA chain is now stored in 3D shapes (figure 3a). Indeed, we could even see nano-robots or nano-machines made from DNA.

What excites me about this line of research is that solutions of DNA, functionalized by the presence of proteins that can change DNA’s topology in time, may yield novel “topologically active” complex fluids that respond to external stimuli. These fluids and nanomaterials would exploit the information-storing abilities of DNA to form complex 3D shapes or hybrid scaffolding with the responsiveness, plasticity and precision endowed by specialized proteins (figure 3b). For example, adding restriction enzymes that can cut the DNA at specific sequences could allow stiff and robust DNA-based scaffolds to be degraded as soon as they are no longer needed. That could be useful if you’re using a scaffold to, say, regenerate a bone in a patient’s body: once the scaffold is not needed any more, you can get rid of it.

figure 3

At the same time, adding topoisomerase to an ensemble of DNA plasmids (circular DNA) can create a gel, in which the rings of DNA are joined together like the rings on the logo of the modern-day Olympic Games (figure 3c). These “Olympic gels” have proved impossible to synthesize in the lab despite decades of trying, yet nature has been doing so for millions of years.

In fact, I find it amazing that a type of unicellular organism called trypanosomes base their very existence on this Olympic gel. In particular, part of their genome takes the form of a giant network in which each DNA minicircle is linked to about three others nearby to form an architecture that looks a bit like medieval chain mail. What’s even more fascinating is that this topological structure is continually splitting up and reassembling correctly at each cell division.

Interdisciplinary research from the bottom up

Apart from their intrinsic scientific interest, studying such biological structures will also help us design a new generation of self-assembled topological materials. These complex, DNA-based materials hold great technological promise, but to make progress we need multidisciplinary teams of physicists, chemists and biologists working together. What’s more, they will have to work from the bottom up, exploring basic principles for curiosity’s sake, and not only trying to solve specific technological problems that industry faces.

One notable success story in this regard, at least here in the UK, has been the creation of the Physics of Life network, led by the physicist Tom McLeish, which has seen the country’s research councils invest in this area. That investment is now bearing fruit, and I hope it’s the start of a stable, long-term, interdisciplinary programme of support. The Biological Physics Group of the Institute of Physics, which publishes Physics World, is also playing a key role in encouraging more groups to embrace this multidisciplinary approach at the interface between soft matter and biological physics.

However, we still need more top-quality journals that recognize high-value interdisciplinary research of this kind, while research centres that cut across traditional academic disciplines will be vital too. It is an exhilarating field to be in, where everyone – no matter where they are in their career – learns something new every day. My hope is that in 10 or 20 years’ time, scientists who are starting out in their careers will no longer feel obliged to explore only one specific discipline or to choose between theoretical and experimental work. Instead, it would be great if they could simply satisfy their scientific curiosity no matter what background they are from. For if they do that, who knows what we might find next?

How do surgical face masks affect functional MRI measurements?

As the COVID-19 pandemic continues across the world, the wearing of face masks indoors has become a requirement to help reduce virus transmission. Facial coverings are also worn in MRI scanners during data acquisition to keep participants safe. However, it was unclear what impact this could have on measured brain signals. Now, researchers at Stanford University have investigated the effect of wearing a face mask on functional MRI (fMRI) signals during scanning. They describe their work in NeuroImage.

Functional MRI measures the blood-oxygen-level-dependent (BOLD) response of the brain due to changes in blood flow during activation. The BOLD response is sensitive to the concentrations of oxygen and carbon dioxide (CO2) in the blood. Wearing a facial covering mixes the expired and inspired air streams. As you breathe out CO2, this mixing increases the amount of inspired CO2, resulting in mild hypercapnia (elevated blood CO2 levels). This hypercapnia increases cerebral blood flow and therefore elevates the measured BOLD signal, resulting in greater contrast in the fMRI compared with that seen in the absence of hypercapnia.

The impact on BOLD signals

The Stanford team performed task-based neuroimaging studies using a “block design”, in which an activity is performed or a stimulation is given during an “on-window”. This is followed by an “off-window”, where the action or stimulation ceases.

In this study, the researchers ran two block designs simultaneously. The first was a short 15-s sensory-motor task, which stimulated the brain’s auditory, visual and sensorimotor regions concurrently. The second delivered fresh air to the participant through a nasal cannula in on–off cycles of 90 s. They conducted the experiment twice with each participant, once with a surgical face mask and once without. The cannula manipulated the gas content of the inspired air during the mask-on state, preventing CO2 build-up; it had a minimal effect on CO2 levels when the mask was off.

The team also recorded end tidal CO2 (ETCO2) levels in a separate session outside of the MRI scanner, to measure the effect of the mask on hypercapnia for each participant.

The researchers analysed data from eight healthy participants using a general linear model with two variables: one describing the sensory-motor task, and the other the nasal cannula air supply. The resulting group activation maps from the sensory-motor task, which indicate the areas of the brain that are active during the task on-window, showed no significant differences between the mask-on and mask-off states. These results demonstrate that task-activation can be reliably detected while the participant wears a mask.
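This kind of two-variable analysis can be illustrated with a toy example. The sketch below is not the Stanford team's actual pipeline, just a minimal general linear model with hypothetical regressors: a boxcar for the sensory-motor task and one for the air-supply cycles, fitted to a simulated voxel time course by ordinary least squares:

```python
import numpy as np

# Minimal GLM sketch (illustrative, not the authors' pipeline).
rng = np.random.default_rng(0)
n = 300                                    # hypothetical number of time points
t = np.arange(n)
task = ((t % 30) < 15).astype(float)       # task boxcar: 15 on, 15 off
air = ((t % 180) < 90).astype(float)       # cannula air cycles: 90 s on-off

# Design matrix: intercept, task regressor, air-supply regressor.
X = np.column_stack([np.ones(n), task, air])

# Simulated voxel signal: baseline 100, task effect 2.0, air effect 0.5.
signal = 100 + 2.0 * task + 0.5 * air + rng.normal(0, 0.3, n)

# Ordinary least-squares fit recovers the three weights.
beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
print(beta)   # approximately [100, 2.0, 0.5]
```

In a real analysis each regressor would be convolved with a haemodynamic response function, and the fitted weights would be compared statistically between the mask-on and mask-off sessions.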

The baseline BOLD signal, which the team analysed using the nasal cannula air cycles, showed a significant difference between the mask-on and mask-off states. The results demonstrated that the face mask induced an average baseline signal shift of 30.0%, with the grey matter across the brain showing an evident deactivation (observed via an increase in signal without the air supply) in the group activation maps. The measured ETCO2 showed an average increase of 7.4%, confirming the predicted rise in inspired CO2 concentration with mask use.

This topical study provides some clarity to the neuroimaging community regarding the impact of face masks on data collected throughout the pandemic. The insignificant difference measured between the task-activated signal with masks on or off supports the safe continuation of task-based fMRI studies in clinical and research settings while following mask regulations.

Motorized water phantom underpins ‘gold-standard’ QA for MR/RT systems

It’s still relatively early days in terms of wide-scale clinical adoption, but the benefits of the new generation of MR-guided radiotherapy (MR/RT) systems are already clear. Think real-time image-guided adaptation of radiation delivery – more effectively treating the tumour target while sparing healthy tissue and minimizing damage to adjacent organs at risk and critical structures. On the flip side, the operational challenges of a hybrid MR-Linac treatment machine are also evident – not least when it comes to the implementation of efficient and streamlined protocols for quality assurance (QA) with an MRI scanner integrated into the radiotherapy workflow.

Fundamental physics complicates matters further, with the MRI scanner’s magnetic field having a non-trivial impact on dose deposition and distribution in the irradiated volume – most importantly within the patient, but also the radiation dosimeters used for QA purposes. The QA challenge doesn’t end there. Conventional water phantoms, essential for the commissioning and annual verification of radiotherapy systems, are not suited to the unique MR-Linac environment – chiefly because, for safety reasons, the use of ferromagnetic materials is prohibited within the strong MRI magnetic field.

To address this troublesome bottleneck, laser and radiotherapy QA specialist LAP has developed a 3D and MR-compatible motorized water phantom tailored specifically for the commissioning and QA of MR-Linacs. The newly launched THALES 3D MR SCANNER is MR-conditional – i.e. all system components are made from non-ferromagnetic materials certified for use within the MRI scanner’s magnetic field – while the automated set-up (which takes under 15 minutes to prepare) and predefined measurement sequences are intended to help the medical physics team save time and simplify their test routines during system commissioning and annual or biannual QA.

“The THALES 3D MR SCANNER provides a gold-standard dose accuracy check for MR/RT users,” claims Thierry Mertens, a physicist and LAP’s business development manager for healthcare. In the radiation oncology clinic, the phantom will be used alongside a portfolio of QA tools – some providing daily, weekly and monthly QA checks, with the THALES 3D MR SCANNER reserved for system commissioning and ongoing verification of dose delivery after any major upgrades to the MR-Linac. “In this way,” adds Mertens, “the water phantom will give the medical physicist peace of mind, ensuring that their MR/RT system is accurately calibrated and supporting accurate verification of delivered dose to the patient.”

Collaborative development

Significantly, the THALES 3D MR SCANNER is now cleared for full commercial release in the US after receiving 510(k) approval from the US Food and Drug Administration (FDA), while the product’s CE mark provides a green light for roll-out to clinical customers in the European Economic Area. In both regions, the phantom comes with a yearly maintenance visit, software and hardware updates, and a configurable multiyear warranty.

Thierry Mertens

These commercial milestones are the culmination of a five-year product development effort that began with LAP’s acquisition of Euromechanics Medical GmbH in summer 2016 – a purchase that, in large part, was driven by the latter’s active collaboration with MR/RT pioneer ViewRay to develop an MR-compatible phantom for the then-prototype MRIdian treatment system. That product collaboration accelerated post-acquisition, with ViewRay keen to promote the development of an independent QA ecosystem around its MRIdian machine. “Not surprisingly,” says Mertens, “the THALES 3D MR SCANNER is a perfect fit when commissioning the beam model of the MRIdian system, supporting end-users with an efficient process for accurate dosimetry measurements.”

In parallel, Mertens and his colleagues at LAP broadened the product development effort on their water phantom to gather insights from clinical early-adopters of the ViewRay MRIdian system – most notably the University Medical Centre (UMC) in Amsterdam (Netherlands), University Clinic Heidelberg (Germany) and the Henry Ford Cancer Institute in Detroit (US). “The voice of the clinical customer was fundamental to our requirements-gathering and optimization of the phantom design, usability and functionality,” says Mertens.

Put another way: as a QA vendor, it was incumbent on LAP to understand what Mertens calls “the A to Z of the clinical workflow”, thereby ensuring that the hardware, software, electronics and components of the THALES 3D MR SCANNER are all optimized versus the ViewRay MR-Linac design. “This continuous-improvement mindset is key,” adds Mertens. “The phantom has been shaped by clinical physicists at the sharp-end of treatment delivery and we continue to incorporate their feedback from the field to inform and iterate our product design.”

The view from the sharp end

While commercial roll-out of the THALES 3D MR SCANNER is now the priority, it’s also worth noting that further innovation is in the works. A custom version of the phantom to support Varian’s Halcyon image-guided radiotherapy system and ETHOS, the vendor’s new AI-enabled adaptive radiotherapy machine, is expected later this year. Down the line, a modified water phantom is also planned to provide compatibility with Elekta’s Unity MR-Linac machine.

Out in the clinic, meanwhile, Mertens predicts a variety of use-cases for the THALES 3D MR SCANNER through 2021. For starters, there are new ViewRay customers who need a suite of QA tools to support the commissioning and acceptance of their MRIdian systems. “Medical physicists are ultimately accountable,” he notes, “and they want independent QA and verification tools to confirm that what they’re getting from the radiotherapy OEM is operating as per the specification.”

Given the relative novelty of MR/RT, it’s inevitable that many clinics are newcomers to the field and, as such, are still finding their way when it comes to the unique functionality and nuances of MR-Linac machines. “As they ramp up their MR/RT programmes, I can see these customers using the THALES 3D MR SCANNER more frequently – maybe once a month at the outset – to explore the impact of the magnetic field and really get to know their treatment system,” adds Mertens.

Over time, though, it’s likely that the water phantom will be needed less often – perhaps once or twice a year as part of standard machine QA and after any significant upgrade to the MR-Linac hardware or software. Mertens concludes: “This is where the water phantom really comes into its own, helping the medical physicist with rigorous beam data and beam model visualizations to verify that the delivered radiation as it applies to the patient is indeed correct.”

Light could levitate micron-thin aircraft in Earth’s mesosphere

A new light-driven levitation technique could soon enable tiny, low-cost aircraft to achieve the first sustained flight in the Earth’s mesosphere. Mohsen Azadi and colleagues at the University of Pennsylvania exploited the effect of photophoresis, combined with an intricately shaped light beam, to levitate thin mylar disks at low pressure in a vacuum chamber. Their microflyers could soon allow researchers to explore one of the most poorly understood parts of Earth’s atmosphere in unprecedented detail.

Situated between 50 and 80 km above Earth’s surface, the mesosphere is a no-man’s-land for sustained flight. At these altitudes, the air density is too low to generate lift for airplanes, but still high enough that satellites would experience unsustainable drag and burn up. One emerging solution lies in the use of light-driven motion, known as photophoresis, as an alternative propulsion mechanism. The effect has been widely observed in small particles such as atmospheric aerosols. When illuminated by suitably intense light beams, they will move due to non-uniform temperature distributions that form in the air surrounding them.

Azadi and colleagues’ aim is to use photophoresis to levitate much larger objects, starting with a design based on a circular mylar film 6 mm in diameter and 500 nm thick. On the underside of the disk, they deposited a 300 nm-thick layer of tangled carbon nanotubes, creating a network of microscopic air traps. They then placed their structure inside a vacuum chamber, and reduced the air pressure to as low as 10 Pa – just a small fraction of the atmospheric pressure experienced on Earth’s surface (about 100 kPa).

Net upward force

To levitate the disks, the researchers illuminate them with a light intensity comparable to sunlight, causing them to heat the sparse surrounding air. On their undersides, air molecules trapped by the carbon nanotubes are heated for longer than those on the upper sides of the disks. As a result, these molecules reached higher velocities when they finally escaped from the traps, generating a net upward force.

To control the flight paths of their aircraft, Azadi and colleagues designed an optical trap using a specially shaped light field. At the centre of the beam, the light intensity was just high enough to levitate the disks. Surrounding this region, a ring of higher light intensity pushed the disks back towards the centre when their paths deviated. Finally, the researchers constructed a theoretical model from their results, allowing them to predict that the lifting force generated by a microflyer could be many times larger than its weight.

With much more development, the concept could enable low-cost aircraft to achieve sustained flight at altitudes ranging from 50 to 100 km, and even carry payloads as large as 10 mg. Their cargo-carrying abilities may increase even further if hundreds of microflyers were joined together by lightweight carbon fibres – enabling them to carry equipment such as smart dust sensors, and devices to track atmospheric circulation patterns. Ultimately, this would open broad new areas of research into one of the least understood parts of Earth’s atmosphere.

The research is described in Science Advances.

Long-distance space travel: addressing the radiation problem

A team of US and Netherlands-based scientists has published a review paper highlighting ways to protect astronauts from the negative cardiovascular health impacts associated with exposure to space radiation during long-distance space travel.

Cardiovascular impacts

Space radiation is currently regarded as the most limiting factor for long-distance space travel because exposure to it is associated with significant negative effects on the human body. However, data on these effects are currently only available for those members of the Apollo programme who travelled as far as the Moon – too small a number from which to draw any significant conclusions about the effects of the space environment on the human body. In addition, although exposure to space radiation, including galactic cosmic rays and solar “proton storms”, has previously been linked to the development of cancer and neurological problems, data on the consequences of space radiation exposure for the cardiovascular system are lacking.

In an effort to address these limitations, researchers based at the University Medical Center (UMC) Utrecht, Leiden University Medical Center, Radboud University and the Technical University Eindhoven in the Netherlands, as well as Stanford University School of Medicine and Rice University in the US, have carried out an exhaustive review of existing evidence to establish what we know about the cardiovascular risks of space radiation. They present their findings in the journal Frontiers in Cardiovascular Medicine.

Manon Meerman

As first author Manon Meerman, a graduate student at UMC Utrecht, explains, the majority of current knowledge comes from studies of people who have received radiotherapy for cancer, where cardiovascular disease is a common side-effect, or from animal and cell culture studies that demonstrate the major negative effects of exposure to space radiation on the cardiovascular system. Such effects include fibrosis, or stiffening, of the myocardium and accelerated development of atherosclerosis, the main cause of myocardial and cerebral infarction.

“You can argue that if NASA, ESA and other space agencies want to expand space travel, both in terms of location – for example, to Mars – and time, astronauts will be exposed to the specific space environment for longer periods of time. However, we currently do not know what the effects of exposure to these space-specific factors are,” says Meerman.

“NASA currently sees space radiation as the most limiting factor for long-distance space travel, but the exact short- and long-term effects are not fully understood yet. We are therefore exposing astronauts to extremely uncertain risks. However, research into the effects of space radiation has increased over the past few years and we’re constantly gaining more knowledge on this topic,” she adds.

Space radiation-induced changes

Advanced models

According to Meerman, another important factor in this discussion is the fact that we currently cannot adequately protect astronauts from space radiation. Shielding with radiation-resistant materials is very difficult since exposure levels are far higher than on Earth and the type of radiation is much more penetrating. Pharmacological methods of protecting the cardiovascular system are hampered by the fact that no effective radioprotective compounds have yet been approved.

“The most important conclusion is that we actually do not know enough about the exact risks that long-distance space travel poses for the human body. Therefore, in our opinion, we should keep looking for new ways to protect astronauts from the harmful space environment before we expand human space travel,” says Meerman.

Moving forward, Meerman stresses that research on the effects of space radiation should incorporate advanced models that provide a more accurate representation of the cardiovascular impacts of space radiation – such as those based on lab-created human cardiac tissue and organ-on-a-chip testing technologies. Studies should also examine the effects of combinatorial exposure to different space radiation particles, as well as combined exposure to space radiation components and other space-specific factors, like microgravity, weightlessness and prolonged hypoxia.

“These are all crucial studies to be conducted in order to really understand the risks we’re exposing astronauts to,” says Meerman. “Therefore, we believe we are not there yet and we should debate whether it is safe to expand human space travel significantly.”

Interconnected single atoms could make a ‘quantum brain’

A network of interconnected atoms could be used to construct a “quantum brain” that mimics how a real brain learns. The new system consists of an array of cobalt atoms on a substrate of black phosphorous, and its developers at Radboud University in the Netherlands say that it could have applications in artificial intelligence.

The human brain contains some 100 billion neurons in connected networks. Whenever we perform a task, these neurons receive electrical signals from other neurons in their network via tiny junction-like structures known as synapses. Once the sum of the signals across the synapses reaches a certain critical value, the neuron “fires” by sending a series of voltage spikes to other neurons. The strength of the connection between different neurons is known as the synaptic weight and can change over time as we learn new things and perform new tasks.

Many of today’s brain-inspired, or neuromorphic, devices use machine learning – the process by which a computer uses software, or algorithms, to train on a given set of examples – to autonomously develop the ability to perform a new task. One such machine-learning model is known as a Boltzmann machine. In physical terms, a Boltzmann machine is an interacting (Ising) system of spins in which randomly fluctuating spins (or magnetic moments) represent neurons.
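The Ising-spin picture of a Boltzmann machine can be made concrete with a toy simulation. The sketch below is purely illustrative (it models nothing about the cobalt-atom hardware): a handful of ±1 spins, coupled by weights that play the role of synaptic weights, are resampled by Gibbs sampling, which supplies the random fluctuations that stand in for neural firing:

```python
import math
import random

# Toy Boltzmann machine of Ising spins (illustrative sketch only).
random.seed(1)
n = 5
# Symmetric coupling matrix: a uniform ferromagnetic weight between all pairs.
w = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        w[i][j] = w[j][i] = 0.5

# Random initial spin configuration: each "neuron" is a spin s_i = +1 or -1.
s = [random.choice([-1, 1]) for _ in range(n)]

def gibbs_step(s, w, T=1.0):
    """Resample each spin from its conditional Boltzmann distribution."""
    for i in range(len(s)):
        field = sum(w[i][j] * s[j] for j in range(len(s)) if j != i)
        p_up = 1.0 / (1.0 + math.exp(-2.0 * field / T))
        s[i] = 1 if random.random() < p_up else -1
    return s

for _ in range(100):
    gibbs_step(s, w)
print(s)   # at this temperature the coupled spins tend to align
```

In a trainable Boltzmann machine the weights would themselves be updated from data; in the Radboud experiments, the analogous reorganization happens in the material itself.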

Magnetic atoms on surfaces are emerging as a platform for realizing such a machine, as it is possible to use them to create tuneable networks of spins that display the necessary random motion. The problem is that magnetic exchange interactions between these atoms usually have a short range, which limits the number of connections to other atoms/neurons that can be formed.

Individually coupled cobalt atoms

Researchers led by Alexander Khajetoorians and Hilbert Kappen have now created a self-adapting Boltzmann machine by exploiting the orbital dynamics of individually coupled cobalt atoms placed on black phosphorus. The new work builds on earlier experiments in which they discovered that it is possible to store binary bits of information (0s and 1s) in the electronic state of a single cobalt atom when it is placed on this two-dimensional semiconductor and a voltage is applied to the atom.

Khajetoorians, Kappen and colleagues used the tip of a scanning tunnelling microscope to position the cobalt atoms on the 2D material and create long-range coupling between the atoms. They found that when they applied a voltage to the atom network, it produced an output signal arising from electrons “hopping” from one cobalt atom to another. This output signal somewhat resembles the firing produced by neurons.

Synaptic weight change

As well as observing spiking behaviour in the output signals, the researchers noticed that the ensembles of cobalt atoms behaved differently depending on what input they received. For example, when the material was stimulated over a longer period with a certain voltage, the synapse-like memory-bearing atoms autonomously reorganized in response – in effect, changing their synaptic weight. “The material learned by itself,” Khajetoorians says.

Wolfram Pernice, a physicist and nanotechnologist at the University of Münster in Germany who was not involved in the study, calls the new work “very nice”. “Particularly exciting is the fact that the learning process is implemented directly in the material,” he tells Physics World. “Using individual atoms to implement artificial neurons and synapses is very elegant.”

Khajetoorians and colleagues say they now plan to scale up their system into a larger network of cobalt atoms. They would also like to study other magnetic atoms in an effort to understand why these atom networks behave the way they do. They report their findings in Nature Nanotechnology.

Topological source emits light with high and multiple orbital angular momenta

A compact, integrated light source that simultaneously produces multiple laser beams with different, very high orbital angular momenta has been unveiled by US researchers. The technology may mark a significant step towards orbital angular momentum multiplexing at scale, which could potentially vastly increase Internet speeds.

Interactions between light signals in an optical fibre are negligible, which means that multiple signals can travel down the same fibre simultaneously in a process called multiplexing. As the world’s hunger for faster data transfer grows insatiably, various schemes to use multiplexing to squeeze more data into fibres have been developed.

To ensure that the signals do not get mixed up, they must be sent using independent – or orthogonal – channels. One tantalizing and seemingly simple option is to encode data into the orthogonal angular momentum states of photons: with each state being an independent channel.

The most familiar type of angular momentum photons carry is circular polarization, or spin. This involves the rotation of the axes of the electric and magnetic fields around the wavefront as a photon propagates. Several telecommunications companies are working to incorporate “polarization division multiplexing” into their systems. However, circular polarization has only two orthogonal states so it can at best double a network’s capacity.

Infinite multiplexing

Photons also carry orbital angular momentum (OAM), which involves the wavefronts themselves coiling around the axis of propagation. Symmetry considerations require that this be quantized, but there is no other restriction on how tightly the wavefronts can coil. The OAM – and therefore the potential number of signals multiplexed – is, in principle, infinite.
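The quantization follows from the beam's transverse phase, which for OAM number l winds as exp(ilφ) around the propagation axis. An illustrative numerical check (not from the paper) shows why l must be an integer: the phase has to be single-valued after a full turn.

```python
import cmath
import math

def oam_phase(l, phi):
    """Transverse phase factor exp(i*l*phi) of a beam carrying l units of OAM."""
    return cmath.exp(1j * l * phi)

# After a full 2*pi turn around the axis the phase must return to its
# starting value -- satisfied for integer l, violated otherwise.
mismatch = abs(oam_phase(276, 2 * math.pi) - oam_phase(276, 0))      # ~0
bad_mismatch = abs(oam_phase(100.5, 2 * math.pi) - oam_phase(100.5, 0))  # ~2
```

Here 276 is one of the OAM values reported later in the story; nothing else about the symmetry argument restricts how large l can be.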

However, multiplexing signals using OAM in a realistic setup has been tricky. Although light sources can generate beams with high OAM, changing the OAM requires moving parts such as metasurfaces, which is not feasible for a high-speed telecommunications source outside the laboratory. While lasers can achieve ultrafast switching, they produce relatively low OAM.

In 2017, Boubacar Kanté and colleagues at the University of California, San Diego showed how an arbitrarily-shaped laser cavity subjected to a magnetic bias could be produced at the interface between two photonic crystals, allowing light to snake its way around the path but not to escape it. The researchers attributed this to the creation of topologically-protected light paths by the photonic quantum Hall effect.

Edge states

Kanté is now at the University of California, Berkeley where his team is focused on making the light travel along circular paths in the cavity by patterning the photonic crystals to create topologically distinct edge states. Whereas the previous work kept the light confined between the photonic crystals, these edge states are designed to transmit light in one direction.

“At every point in the cavity, these topological rings are actually leaking some of their energy out,” says Kanté.

In their latest experiment, the researchers produced a photonic crystal that simultaneously supported edge states with OAMs of 100, 156 and 276 clockwise, anticlockwise and clockwise respectively. But they could, in principle, have generated any OAM beams with a sufficiently complex resonator. The device is fully integrated with no moving parts – the magnetic field is supplied by etching the quantum wells onto a magnetic substrate.

“This is the first time in history that a laser about the size of a human hair can generate an OAM greater than 200… We can now directly multiplex any number of OAMs of any charge in a simple and compact device,” says Kanté.

Liang Feng of the University of Pennsylvania believes the work marks an important step: “The underlying concept about the generation of the OAM beam is very similar to what we demonstrated in 2016,” he says. “First you need to have unidirectional light propagation in the platform and then you apply the appropriate phase matching condition to guide the light out. In our ring lasers we had confined waveguide modes, so we used gratings to help couple the light out. The beauty of this is that, if you have an angular grating inscribed on the ring cavity, the order number you could generate would be very limited, but in this case the order generated can be huge.”

Andrea Alù of the City University of New York is also impressed by the potential applications of the work: “I believe the final result is interesting, and the response is something people are after,” he says. He notes, however, that the magnetic substrate could complicate fabrication and integration at scale, and may not be strictly necessary: “The same principle may be applied to a reciprocal system which supports these edge states without relying on the quantum Hall effect,” he says: “The open question is what does this magnetic effect buy the authors?”

The research is described in Nature Physics.

Winner of Dance Your PhD video contest is very atmospheric, the physics of cooking an egg

Does your PhD thesis make you want to dance? An atmospheric scientist from the University of Helsinki has bagged this year’s top prize of $2750 in the annual Dance Your PhD contest. Organized by the American Association for the Advancement of Science and sponsored by the artificial intelligence company Primer, the competition is in its 13th year and asks postgraduate students to explain their research through dance.

With the help of several friends, Jakub Kubecka brought his studies to life with a rap about atmospheric molecular clusters. With trash-talking lyrics like “I’m the first author, you’re just et al”, Kubecka and his mates also carried out some crude dance moves accompanied by computer animations and drone footage.

“To prepare for recording the lyrics, I was running with headphones playing the music at least 30 times per day for the whole month to get it into my blood,” says Kubecka. “We always stayed close to our main goal of showing non-scientific muggles that science can be fun, silly and exciting. And of course, we also didn’t want to miss our opportunity of spitting some scientific roasts.” The video was filmed while honouring local COVID-19 restrictions.

Linking proteins

One thing that has always fascinated me about cooking eggs is why, unlike most other liquids, egg whites solidify when heated rather than becoming runnier or evaporating. The reason is that the protein molecules unfold in the heat, enabling them to link up to form a solid. Now Nafisa Begam at the University of Tübingen in Germany and colleagues have used X-ray scattering to gain a better understanding of this process.

One thing they found is that after the white solidified, which takes a few minutes, there is no further solidification. You can read more about their study in “Watching an egg cook with X-rays”, and who knows, maybe it will help you make the perfect poached egg.

Prompt gammas enable sub-millimetre-resolution, multi-isotope PET

Preclinical imaging systems such as positron emission tomography (PET) scanners provide an essential tool for studying disease and assessing therapies, most commonly in mice. But as mouse organs are roughly an order of magnitude smaller than their human counterparts, sub-millimetre resolution is essential for accurate imaging and quantitative measurements within the animal’s organs and tumours.

Many radioisotopes used as PET tracers, however, emit positrons with a large range (several millimetres) and cannot be imaged at sufficiently high resolution. It is also extremely difficult to image more than one PET isotope at a time, as they all create annihilation photons with equal energy.

Now a team headed up at TU Delft aims to solve both of these challenges at once by utilizing the prompt gamma photons that are co-emitted with positrons by many radioisotopes. Using the VECTor scanner from MILabs, the researchers demonstrate multi-isotope and sub-millimetre imaging of PET isotopes with large positron range, reporting their findings in Physics in Medicine & Biology.

Exploiting prompt gammas

PET works by detecting a pair of 511 keV annihilation photons produced when a positron emitted by a radioisotope annihilates with an electron. Coincident detection of these photons enables localization of their source, by forming a line-of-response between the detectors. However, positrons with large ranges will travel in random directions away from the tracer molecule before annihilation, reducing image resolution and quantitative accuracy.
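The line-of-response idea can be sketched with hypothetical detector coordinates (not the VECTor geometry): a coincidence constrains the annihilation to the straight line joining the two detection points.

```python
def line_of_response(det_a, det_b, t=0.5):
    """Point on the line of response between two coincident 511 keV detections.

    det_a and det_b are the detector hit positions; t parametrizes the line,
    with t=0.5 giving the midpoint. The annihilation occurred somewhere on
    this line, but a long positron range shifts the line away from the tracer.
    """
    return tuple(a + t * (b - a) for a, b in zip(det_a, det_b))

# Midpoint of a coincidence between detectors at (-10, 0) and (10, 4):
print(line_of_response((-10.0, 0.0), (10.0, 4.0)))  # (0.0, 2.0)
```

Reconstruction then accumulates many such lines; the blurring described above comes from the annihilation point lying on the line but not at the emission site.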

“In fact, a coincidence PET scanner performs tomography of positron annihilations instead of emissions: PAT instead of PET,” explains first author Freek Beekman. “This is not so much of an issue for [the PET isotope] 18F, due to its short positron range. But for many other isotopes important to medical research and diagnosis it results in, sometimes dramatic, blurring effects.”

Fortunately, many PET isotopes with long positron ranges also emit significant amounts of prompt gammas straight from the atom. Detecting these enables more accurate localization of the PET tracer molecules and improves the image resolution. What’s more, different PET isotopes emit prompt gammas of different energies, paving the way towards multi-tracer PET imaging.

Beekman and colleagues tested this approach using a VECTor6CT system equipped with three gamma detectors and a high-energy mouse collimator with 144 pinholes (0.7 mm diameter) organized in clusters of four. The use of a clustered pinhole collimator minimizes several image-degrading effects inherent to electronic collimation.

“VECTor is the only PET technology that can precisely collimate these high-energy prompt photons and detect them,” says Beekman. “A coincidence PET scanner relying on electronic collimation could detect them, but unfortunately without collimation to the so-called line-of-response because it needs two photons in opposite direction for this.”

To demonstrate multi-isotope PET, the researchers injected mice with both 124I-NaI and 18F-NaF before scanning the animals for 60 min. 124I has a mean positron range of 3.4 mm, with a maximum range of 11.7 mm, but also emits large amounts of 603 keV prompt gammas. By using only the 603 keV photons for image reconstruction, sub-millimetre structures in the mouse thyroid were easily resolved.

Using the same scan but a different energy window, the researchers reconstructed high-resolution 18F-NaF images from 511 keV photons. To remove contamination from 124I annihilation photons, they corrected the 18F images by subtracting the estimated 124I signal. They then merged the corrected 18F image with the 124I image to create a clear dual-isotope mouse image showing 124I uptake in tiny thyroid parts and 18F-NaF in bone structures.
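A simplified, hypothetical version of that correction step – the paper's actual reconstruction is more involved – can be sketched per voxel as a scaled subtraction:

```python
def subtract_crosstalk(img_511, est_124i, scale=1.0):
    """Remove the estimated 124I contribution from the 511 keV (18F) image.

    img_511: voxel values reconstructed from the 511 keV energy window.
    est_124i: estimated 124I annihilation-photon signal in the same voxels.
    Negative results are clamped at zero, since activity cannot be negative.
    """
    return [max(v - scale * c, 0.0) for v, c in zip(img_511, est_124i)]

# Toy 1D "image": the second voxel is dominated by 124I contamination.
corrected = subtract_crosstalk([5.0, 2.0, 0.5], [1.0, 3.0, 0.0])
```

The corrected 18F voxels can then be merged with the prompt-gamma 124I image to give the dual-isotope result described above.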

Dual-isotope PET can reduce imaging time compared with two separate scans, limiting the time needed to keep the animal anaesthetized, as well as providing perfectly registered images of different tracer molecules.

Resolution limits

To assess image resolution, the researchers scanned a Derenzo phantom containing 0.45–0.85 mm-diameter rods filled with 124I. Comparing 124I images reconstructed using prompt gammas and annihilation photons (from the same scan) showed that the 0.75 mm rods could be clearly discerned using 603 keV photons, while the 511 keV photons did not resolve any of the rods. Simultaneous dual-isotope PET images of a phantom filled with a mix of 124I and 18F also resolved the 0.75 mm rods.

The team next imaged a quantification phantom with three compartments filled with: (1) 0.98 MBq of 124I; (2) 10.1 MBq of 18F; (3) a mix of 0.98 MBq of 124I and 10.1 MBq of 18F. The measured concentrations of 18F in compartments 2 and 3 were equal after cross-talk correction, as were the amounts of 124I in compartments 1 and 3, demonstrating high quantitative accuracy.

Finally, the researchers imaged a Derenzo phantom containing 89Zr, an important PET isotope with a mean positron range of 1.27 mm and abundant prompt gamma emission at 909 keV. Images based on 909 keV prompt gammas were far clearer than those using 511 keV photons, and could clearly resolve the 0.75 mm rods.

The team has received a grant from the Dutch Research Council (NWO) to develop algorithms that will further improve the images by better combining information from prompt and annihilation photons. “In addition, our partners in academia and pharmaceutical companies that have a VECTor/CT scanner are developing protocols to use this method in a large variety of new applications,” Beekman tells Physics World. “Meanwhile, we are also developing the next versions of the hardware.”

Atomic nuclei go for a quantum swing

Scientists routinely use laser light to control how an atom’s electrons move from one electronic state to another, but controlling an atom’s nuclear state is far more challenging. Researchers at the Max Planck Institute for Nuclear Physics in Heidelberg, Germany, have now used X-ray light to achieve coherent control over nuclear excitations for the first time. As well as contributing to a better understanding of quantum matter, the work could hasten the development of technologies such as ultraprecise nuclear clocks and batteries that can store huge amounts of energy.

Atomic nuclei are quantum systems in which the component protons and neutrons can quantum-mechanically “jump” from one nuclear quantum state to another when they gain or lose energy. The energy differences in these nuclear jumps are often six orders of magnitude larger than the jumps made by electrons within an atom’s electron shells, says team member Christoph Keitel. “A single quantum jump made by a nuclear component can thus pump up to a million times more energy (into the states) – or get it out again,” he explains. “This has given rise to the idea of nuclear batteries with an unprecedented storage capacity.”

Keitel adds that the quantum states of some atomic nuclei are also much more sharply defined than electronic quantum states. This means the jump frequencies are also more precise – something that could, in principle, be exploited to create nuclear clocks that are far more precise than the atomic clocks used for today’s precision timekeeping and navigation. These ultra-precise clocks could also be useful for fundamental physics studies such as investigations of whether the known physical constants of nature are indeed constant.

Precisely addressing and controlling jumps

Before such applications see the light of day, however, researchers need to find some way of precisely addressing and controlling these jumps. One such technique, which the Heidelberg team has been working on for more than 10 years, involves high-energy X-ray light.

In the present work, researchers led by Jörg Evers used pulses of light from the Nuclear Resonance Beamline ID18 at the European Synchrotron Radiation Facility (ESRF) in Grenoble, France, that they split into two using a “split-and-control unit”. The purpose of this unit is to delay one of the two pulses relative to the other.

Evers and colleagues sent the first pulse to a “test” target sample made from a stainless-steel foil 1 μm thick. The steel in this foil is enriched to contain 95% of the “Mössbauer” isotope iron-57 (57Fe), which has a nuclear (magnetic dipole) transition at an energy of 14.4 keV. The second pulse follows the first after a time delay, and afterwards both pulses encounter the real sample. This sample is also made of stainless steel enriched with 57Fe atoms, but it is 2 μm thick.

Pushing a swing

The researchers explain that their first pulse contains a broad mix of frequencies and is extremely short-lived, lasting just 100 picoseconds (1 ps = 10⁻¹² s). This pulse stimulates a quantum transition in the 57Fe atom nuclei. The second pulse is longer, at 141 nanoseconds, and its energy is precisely tuned to the same quantum transition. The time delay between the two pulses can be adjusted in a way that the researchers liken to pushing a person on a swing. While the first push causes the person to swing, or oscillate, back and forth, the second push either enhances the oscillation or slows it down depending on when it occurs within the oscillation’s phase. The second pulse is thus, respectively, either more constructive or more destructive for the quantum state.
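The swing analogy can be made quantitative with a toy model – illustrative only, not the team's actual calculation: for an undamped oscillator, two equal impulsive kicks add like phasors, so the final amplitude depends on the phase accumulated during the delay between them.

```python
import cmath
import math

def amplitude_after_two_kicks(omega, delay, kick=1.0):
    """Oscillation amplitude after two equal impulsive kicks.

    Each kick contributes kick/omega to the complex amplitude; the second
    one carries a relative phase omega*delay, so the kicks can add
    constructively or cancel depending on the delay.
    """
    a = (kick / omega) * (1 + cmath.exp(1j * omega * delay))
    return abs(a)

omega = 2 * math.pi  # a 1 Hz oscillator, for illustration
enhanced = amplitude_after_two_kicks(omega, 1.0)   # whole-period delay: in phase
cancelled = amplitude_after_two_kicks(omega, 0.5)  # half-period delay: out of phase
```

A whole-period delay doubles the single-kick amplitude, while a half-period delay wipes the oscillation out, mirroring how the second X-ray pulse either strengthens or suppresses the nuclear excitation.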

Achieving such a tightly controlled change in the quantum dynamics of an atomic nucleus is a technical feat that took the Heidelberg team years. Among other factors, it requires the delay of the second pulse to be stable on a time scale of just a few zeptoseconds (1 zs = 10⁻²¹ s). Only then can the two pulses work together to control nuclear excitations.

Spurred on by these results, which they report in Nature, the researchers now plan to explore possible applications of their new control scheme. “These include novel spectroscopy approaches and adaptive X-ray optics,” Evers tells Physics World.

Copyright © 2026 by IOP Publishing Ltd and individual contributors