
Local twist angles in graphene come into view

Stacking layers of two-dimensional materials on top of each other and varying the twist angle between them massively alters their electronic properties. The trick is to get the twist angle just right, and to know when you’ve done so. Researchers in China have now developed a technique that helps with the second part of this challenge. By allowing scientists to directly visualize variations in local twist angles, the new technique could shed light on the electronic structure of twisted materials and accelerate the development of devices that exploit their properties.

Graphene (a 2D form of carbon just one atom thick) does not have an electronic band gap. Neither does a pair of graphene layers stacked on top of each other. However, if you add another 2D material called hexagonal boron nitride (hBN) to the stack, a band gap emerges. This is because the lattice constant of hBN – the characteristic spacing of atoms in its crystal lattice – is nearly the same as that of graphene, but not exactly. The slightly mismatched layers of graphene and hBN form a larger structure known as a moiré superlattice, and the interactions between nearby atoms in this superlattice allow a gap to form. If the layers are then twisted so that they are further misaligned, the lattice interactions weaken, and the band gap disappears.
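How quickly a twist washes out the superlattice can be estimated with a standard textbook formula (not given in the article): for two hexagonal lattices with fractional lattice mismatch δ and relative twist θ, the moiré period is

\[
\lambda = \frac{(1+\delta)\,a}{\sqrt{2(1+\delta)(1-\cos\theta)+\delta^{2}}},
\]

where a ≈ 0.246 nm is the graphene lattice constant. For perfectly aligned graphene on hBN (δ ≈ 1.8%, θ = 0) this gives λ ≈ 14 nm, but the period – and with it the superlattice band gap – shrinks rapidly as the twist angle grows.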

Achieving such changes in conventional materials usually requires scientists to alter the materials’ chemical composition. Varying the twist angle between layers of a 2D material is an entirely different approach, and the associated possibilities kickstarted a new field of device engineering known as twistronics. The problem is that twist angles are hard to control, and if different areas of a sample contain unevenly distributed twist angles, the sample’s electronic properties will vary from location to location. This is far from ideal for high-performance devices, so researchers have been exploring ways to visualize such inhomogeneities more precisely.

A new method based on sMIM

In the new work, a team led by Hong-Jun Gao and Shiyu Zhu of the Institute of Physics, Chinese Academy of Sciences, adapted a method called scanning microwave impedance microscopy (sMIM) that was recently developed by Zhixun Shen and colleagues at Stanford University in the US. The adapted method involves applying a range of gate voltages to the sample and analysing fluctuations in the sMIM conductivity signal at different positions in the sample. “This process provides the gate voltages corresponding to moiré band gaps, which are indicative of fully filled electronic bands, directly unveiling details about the moiré superlattice and local twist angles,” Zhu explains.
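To give a flavour of that last step (a minimal sketch under simple assumptions, not the team’s actual analysis), the gate voltage at which a full-filling band gap appears can be converted into a carrier density with a parallel-plate capacitor model, and in twisted bilayer graphene that density pins down the local twist angle. The gate dielectric thickness and permittivity below are illustrative values, not numbers from the paper.

```python
import numpy as np

A_GRAPHENE = 0.246e-9   # graphene lattice constant (m)
E_CHARGE = 1.602e-19    # elementary charge (C)
EPS0 = 8.854e-12        # vacuum permittivity (F/m)

def twist_angle_from_gap_voltage(v_gap, eps_r=3.4, d_gate=30e-9):
    """Estimate the local twist angle (in degrees) from the gate voltage
    at which a full-filling moire band gap is observed.

    Assumptions (illustrative only):
      - parallel-plate gating: n = eps0 * eps_r * V / (e * d)
      - full filling means four electrons per moire unit cell
      - small-angle moire period: lambda ~ a / theta
    """
    # carrier density induced by the gate at the gap voltage (1/m^2)
    n_s = EPS0 * eps_r * v_gap / (E_CHARGE * d_gate)
    # full filling: n_s = 8 * theta^2 / (sqrt(3) * a^2), solved for theta
    theta = A_GRAPHENE * np.sqrt(np.sqrt(3) * n_s / 8)
    return np.degrees(theta)

# example: a gap feature at 2.9 V through an assumed 30 nm hBN gate dielectric
print(f"local twist angle ~ {twist_angle_from_gap_voltage(2.9):.2f} deg")
```

Repeating this kind of fit pixel by pixel across the sMIM map is what turns a set of gate-voltage sweeps into an image of the local twist angle.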

When the researchers tested this method on high-quality samples of twisted bilayer graphene fabricated by their colleagues Qianying Hu, Yang Xu and Jiawei Hu, they were able to detect variations of twist angles directly. They also gleaned information on the conductivity of localized areas, and they characterized other electronic states such as quantum Hall states and Chern insulators by applying out-of-plane magnetic fields. “We made these measurements concurrently,” Zhu notes. “This allowed us to directly obtain quantum state information under different local twist angle conditions.”

The new technique revealed pronounced variations in the local twist angles of around 0.3° over distances of several microns, he adds. It also enabled the team to measure local conductivity, which is not possible with alternative methods that use single-electron transistors to measure compressibility or nanoSQUIDs to measure magnetic fields. What is more, for samples of twisted bilayer graphene covered by an insulating BN layer, the new method has a significant advantage over conventional scanning tunnelling microscopy, as it can penetrate the insulating layer.

Exploring novel quantum states

“Our work has revealed the local twist angle variation within and between domains of a twisted two-dimensional material,” Zhu tells Physics World. “This has deepened our understanding of the microscopic state of the sample, allowing us to explain many experimental phenomena previously observed in ‘bulk-averaging’ measurements. It also provides a way to explore novel quantum states that are difficult to observe macroscopically, offering insights from a microscopic perspective.”

Thanks to these measurements, the unevenness of local twist angles in twisted two-dimensional materials should no longer be a hindrance to the study of novel quantum states, he adds. “Instead, thanks to the rich distribution of local twist angles we have observed, it should now be possible to simultaneously compare various quantum states under multiple local twist angle conditions and band structure conditions in a single sample.”

The researchers now aim to extend their technique to a wider range of twisted systems and heterostructure moiré systems – for example, in materials like twisted bilayer MoTe₂ and WSe₂/WS₂. They would also like to conduct bulk-averaging measurements and compare these results with local measurements using their new method.

Quantum Barkhausen noise detected for the first time

Researchers in the US and Canada have detected an effect known as quantum Barkhausen noise for the first time. The effect, which comes about thanks to the cooperative quantum tunnelling of a huge number of magnetic spins, may be the largest macroscopic quantum phenomenon yet observed in the laboratory.

In the presence of a magnetic field, electron spins (or magnetic moments) in a ferromagnetic material all line up in the same direction – but not all at once. Instead, alignment occurs piecemeal, with different regions, or domains, falling into line at different times. These domains influence each other in a way that can be likened to an avalanche. Just as one clump of snow pushes on neighbouring clumps until the entire mass comes tumbling down, so does alignment spread through the domains until all spins point in the same direction.

One way of detecting this alignment process is to listen to it. In 1919, the physicist Heinrich Barkhausen did just that. By wrapping a coil around a magnetic material and attaching a loudspeaker to it, Barkhausen transformed changes in the magnetism of the domains into an audible crackling. Known today as Barkhausen noise, this crackling can be understood in purely classical terms as being caused by the thermal motion of the domain walls. Analogous noise phenomena and dynamics also appear in other systems, from earthquakes and photomultiplier tubes to real snow avalanches.

Quantum Barkhausen noise

In principle, quantum mechanical effects can also produce Barkhausen noise. In this quantum version of Barkhausen noise, the spin flips occur because the spins tunnel through an energy barrier – a process known as quantum tunnelling – rather than gaining enough energy to jump over it.

In the new work, which is detailed in PNAS, researchers led by Thomas Rosenbaum of the California Institute of Technology (Caltech) and Philip Stamp at the University of British Columbia (UBC) observed quantum Barkhausen noise in a crystalline quantum magnet cooled to temperatures near absolute zero (−273 °C). Like Barkhausen in 1919, their detection relied on wrapping a coil around their sample. But instead of hooking the coil up to a loudspeaker, they measured jumps in its voltage as the electron spins flipped orientations. When groups of spins in different domains flipped, Barkhausen noise appeared as a series of voltage spikes.
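The coil acts as a pickup loop: by Faraday’s law of induction, a coil of N turns threaded by a magnetic flux Φ develops a voltage

\[
V = -N\,\frac{\mathrm{d}\Phi}{\mathrm{d}t},
\]

so an avalanche of spins flipping together – an abrupt change in the sample’s magnetization – registers as a sharp voltage spike, whereas a slow, smooth reorientation would barely be visible.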

The Caltech/UBC researchers attribute these spikes to quantum effects because they are not affected by a 600% increase in temperature. “If they were, then we would be in the classical, thermally activated regime,” Stamp says.

Rosenbaum adds that applying a magnetic field transverse to the axis of the spins has “profound effects” on the response, with the field acting like a quantum “knob” for the material. This, he says, is further evidence for the novel quantum nature of the Barkhausen noise. “Classical Barkhausen noise in magnetic systems has been known for over 100 years, but quantum Barkhausen noise, where domain walls tunnel through barriers rather than being thermally activated over them, has not, to the best of our knowledge, been seen before,” he says.

Co-tunnelling effects

Intriguingly, the researchers observed spin flips being driven by groups of tunnelling electrons interacting with each other. The mechanism for this “fascinating” co-tunnelling, they say, involves sections of domain walls known as plaquettes interacting with each other through long-range dipolar forces. These interactions produce correlations between different segments of the same wall, and they also nucleate avalanches on different domain walls simultaneously. The result is a mass cooperative tunnelling event that Stamp and Rosenbaum liken to a crowd of people behaving as a single unit.

“While dipolar forces have been observed to affect the dynamics of the motion of a single wall and drive self-organized criticality, in LiHoₓY₁₋ₓF₄, long-range interactions cause correlations not just between different segments of the same wall, but actually nucleate avalanches on different domain walls simultaneously,” Rosenbaum says.

The result can only be explained as a cooperative macroscopic quantum tunnelling phenomenon, Stamp says. “This is the first example ever seen in nature of a very large-scale cooperative quantum phenomenon, on the scale of 10¹⁵ spins (that is, a million billion),” he tells Physics World. “This is huge and is by far the largest macroscopic quantum phenomenon ever seen in the lab.”

Advanced detection skills

Even with billions of spins cascading at once, the researchers say the voltage signals they observed are very small. Indeed, it took them some time to develop the detection ability necessary to accumulate statistically significant data. On the theory side, they had to develop a new approach to investigate magnetic avalanches that had not been formulated previously.

They now hope to apply their technique to systems other than magnetic materials to find out whether such cooperative macroscopic quantum phenomena exist elsewhere.

Purpose-Led Publishing: Antonia Seymour outlines the role of not-for-profit publishers

Purpose-Led Publishing is a coalition of three not-for-profit scientific publishers: IOP Publishing, AIP Publishing and the American Physical Society.

The coalition launched earlier this year, and its members have promised that they will continue to reinvest 100% of their funds back into science. Members have also pledged to “publish only the content that genuinely adds to scientific knowledge” and to “put research integrity ahead of profit”.

This episode of the Physics World Weekly podcast features an interview with Antonia Seymour, who is chief executive of IOP Publishing. She played an important role in the creation of Purpose-Led Publishing and argues that scientists, science and society all benefit when physicists publish in not-for-profit journals.

Audio engagement

Also in this episode, we meet Corragh-May White who is surveying podcast listeners to try to work out the best ways for using audio to get people engaged in science. She is doing a master’s degree in science communication at the University of the West of England and is making short science podcasts in different styles for her subjects to listen to.

If you would like to take part in the 20-minute survey, you can contact White at Corragh2.White@live.uwe.ac.uk for more information.

US Electron-Ion Collider hits construction milestone

The US Department of Energy has given the green light for the next stage of the Electron-Ion Collider (EIC). Known as “critical decision 3A”, the move allows officials to purchase “long-lead procurements” such as equipment, services and materials before assembly of the collider begins.

The EIC, costing between $1.7bn and $2.8bn, will be built at Brookhaven National Laboratory in Long Island, New York. This will involve the lab revamping its existing 3.8 km-long Relativistic Heavy Ion Collider accelerator complex that collides heavy nuclei such as gold and copper to produce a quark–gluon plasma.

A major part of the upgrade will involve adding an electron ring so that the EIC consists of two intersecting accelerators – one producing an intense beam of electrons and the other a high-energy beam of protons or heavier atomic nuclei.

Each high-luminosity beam will be steered into head-on collisions, with the particles produced providing clues to the internal nature of protons and their components.

“Passing this milestone and getting these procurements under way will help us achieve our ultimate goal of efficiently delivering a unique high-energy, high-luminosity polarized beam electron–ion collider that will be one of the most challenging and exciting accelerator complexes ever built,” notes EIC project director Jim Yeck. Construction is expected to start in 2026 with first experiments beginning in the first half of the next decade.

Meanwhile, the UK has said it will provide £58.4m ($73.8m) to develop new detector and accelerator infrastructure for the EIC. The money comes as part of a £473m package of spending by the UK Research and Innovation (UKRI) Infrastructure Fund.

This money also includes £125m for a new diffraction and imaging electron microscopy facility at the Science and Technology Facilities Council’s Daresbury Laboratory. The facility, known as Relativistic Ultrafast Electron Diffraction and Imaging, will be the world’s most powerful microscope for imaging dynamics, able to study biological and chemical processes in “real time” on femtosecond timescales.

Excitation of thorium-229 brings a working nuclear clock closer

A nuclear clock based on thorium-229 is one step closer now that researchers in Germany and Austria have shown that they can put nuclei of the isotope into a low-lying metastable state.

The exceptionally low 8 eV excitation energy corresponds to light in the vacuum ultraviolet, which can be generated by a laser. As a result, the transition could be used to create an accurate clock. Such a nuclear clock would, in principle, be more stable than existing atomic clocks because it would be much less susceptible to environmental noise. A nuclear clock could also be more practical because unlike an atomic clock, it could be a completely solid-state device.
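To see why, converting the quoted energy into a wavelength with λ = hc/E (using hc ≈ 1240 eV·nm) gives

\[
\lambda = \frac{hc}{E} \approx \frac{1240\ \text{eV nm}}{8\ \text{eV}} \approx 155\ \text{nm},
\]

which sits in the vacuum ultraviolet – a demanding but laser-accessible part of the spectrum, in contrast to the sub-nanometre X-rays that a keV-scale nuclear transition would require.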

However, the same properties that promise high accuracy and stability also make the transition difficult to observe and excite, because it has an extremely narrow bandwidth and its exact frequency is hard to pin down. Indeed, although the existence of the transition was confirmed in 2016, it was only last year that researchers at CERN made the first direct measurement of photons emitted by it.

Lower-cost laser

Thorium-229 is not the only nucleus being explored for use in a nuclear clock. Work on scandium-45 is further advanced, but this nucleus has a transition energy of 12.4 keV. This means that it would have to be paired with an X-ray laser to create a clock – and such lasers are large and expensive.

The new research was done by a collaboration of physicists from the Physikalisch-Technische Bundesanstalt (PTB), the German national metrology institute in Braunschweig, and the Vienna University of Technology in Austria. One of the team members is Ekkehard Peik, who came up with the idea of a nuclear clock 20 years ago.

Nuclear and atomic clocks work in much the same way. The transition of interest is excited by a laser (or maser) and the emitted light is sent to a feedback control mechanism that locks the frequency of the laser to the frequency of the transition. The extremely stable frequency of the laser light is the output of the clock.
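The locking step can be pictured with a toy numerical sketch (purely illustrative – the transition frequency, linewidth and simple proportional servo below are invented, and real clocks use far more sophisticated interrogation schemes):

```python
import random

random.seed(1)
F_TRANSITION = 2_020_407.3  # "true" transition frequency, arbitrary units (invented)

def excitation_probability(laser_freq, linewidth=0.5):
    """Toy Lorentzian line shape: how strongly the laser drives the transition."""
    detuning = laser_freq - F_TRANSITION
    return 1.0 / (1.0 + (detuning / linewidth) ** 2)

def lock_laser(start_freq, gain=0.3, steps=50):
    """Very simplified servo: probe either side of the line and steer the laser
    towards the frequency where both sides respond equally."""
    freq = start_freq
    for _ in range(steps):
        low = excitation_probability(freq - 0.3) + random.gauss(0, 0.01)
        high = excitation_probability(freq + 0.3) + random.gauss(0, 0.01)
        error = high - low        # positive means the line centre lies above us
        freq += gain * error      # feedback: nudge the laser towards the centre
    return freq

print(f"locked laser frequency: {lock_laser(F_TRANSITION + 1.0):.3f}")
```

Once the servo holds the laser on the line centre, counting the oscillations of that laser light is what turns the atomic (or nuclear) transition into a tick.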

The first clocks (and the current international time standard) use microwaves and caesium atoms, while the best clocks today (called optical clocks) use visible light and atoms such as strontium and ytterbium. Optical atomic clocks are so reliable that even after billions of years they would be out by just a few milliseconds.

Smaller is better

A large part of this performance is down to how the atoms are trapped and shielded from electromagnetic noise – which is a significant experimental challenge. In contrast, nuclei are much smaller than atoms, which means that they interact far less with electromagnetic noise. Indeed, instead of being isolated in a trap, clock nuclei could be embedded in a solid material. This would greatly simplify clock design.

In their experiment, the Austrian and German physicists doped calcium fluoride crystals with thorium-229 nuclei, which they obtained from a nuclear disarmament programme in the US. The thorium-doped crystals were only a few millimetres across. They then used a tabletop laser to excite the thorium-229 to the desired low-energy nuclear state. This excitation was confirmed using a technique called resonance fluorescence, which involves detecting the photons that are emitted when the excited nuclei decay back to the ground state.

“This research is a very important step in the development of a nuclear clock,” says Piet Van Duppen of KU Leuven in Belgium, who works on nuclear clocks. “It proves that this development is technically possible, also for solid-state clocks. We assumed that laser excitation of the nuclear transition would be detectable in optical traps, but until now there were doubts if this was also the case in solid-state crystals.”

Potential applications for nuclear clocks of the future lie mainly in the detection of tiny time variations that could point to new physics beyond the Standard Model. This could include variations in the fundamental forces and constants. In particular, the clocks could reveal new physics by looking for variations in the nuclear force, which binds nuclei together and ultimately defines the clock frequency. As a result, nuclear clocks could shed light on some of the big mysteries in physics, such as the nature of dark matter.

The clocks could also be used to measure time dilation due to differences in the gravitational pull of the Earth. This could be done using miniature and highly mobile nuclear clocks on chips that could be easily moved around to different locations. This would be very useful for doing geodesy and geological studies.

A paper describing the research has been accepted for publication in Physical Review Letters.

Peter Higgs: the story behind his interview with Physics World

I can’t really claim to have known the Nobel-prize-winning physicist Peter Higgs, but after the sad news emerged last week that he had died on 8 April at the age of 94, I was immediately reminded of my one brush with him.

An obituary in the Times described Higgs as “warm, funny and engaging” – and that was exactly the person I encountered when we met at the offices of IOP Publishing in Bristol, UK, in May 2012.

Higgs, then 82, had come to Bristol to speak at the city’s Festival of Ideas and to open the “Dirac–Higgs Science Centre” at Cotham School, where he spent five years as a child while his father was stationed in the city as an engineer for the BBC during the Second World War.

As the centre’s name suggests, Higgs wasn’t the only Nobel laureate to have studied at the school. In its earlier guise as the Merchant Venturers’ Technical College, it had also been attended by Paul Dirac, whose name the young Higgs used to see on Cotham’s honours boards.

“Concerning the proposed interview with Physics World, I would be happy to do this, but Dr Durrani should be warned that I am a complete layman with regard to the LHC experiments.”

Peter Higgs

Jo Allen, currently head of media at IOP Publishing, which publishes Physics World, had got wind of his impending visit and decided to ask Higgs if he wanted to visit our offices and be interviewed by Physics World.

Rumours were circulating at the time that physicists at the Large Hadron Collider (LHC) at CERN were about to announce the discovery of the boson that Higgs had predicted almost five decades earlier – and we knew that an interview with Higgs would be eagerly lapped up.

Higgs, who had long since retired, famously avoided e-mails and rarely – if ever – used his phone. So more in hope than expectation, Allen – who was then head of marketing at IOP Publishing – posted a letter to him at his flat in Edinburgh. A few weeks later, she was thrilled to receive a two-page hand-written letter from him in return. Dated 1 May 2012, it said he was “delighted to accept” our invitation.

“On May 16, I’m committed to being at the Bristol Festival of Ideas,” Higgs wrote. “However, I shall not be returning to Edinburgh until the evening of May 17, because the particle physicists at the University of Bristol have persuaded me to give a talk there (title “My Life as a Boson”) at 4 p.m.”

Higgs added that he was planning to have a coffee at 10 a.m. on 17 May with an old colleague Trevor Priest from Exeter University, who had also been a pupil at Cotham. “So I should be free from about 11 o’clock,” Higgs concluded.

While saying he was “happy” to do an interview with Physics World, he insisted with trademark modesty that “Dr Durrani should be warned that I am a complete layman with regard to the LHC experiments”. Higgs said he would therefore “stick to the CERN line on what constitutes a discovery!”

Two photos of a letter from Peter Higgs to Jo Allen dated 1 May 2012

In the interview, which you can listen to here, Higgs proved to be charming, open and friendly. He talked about how the Higgs boson came to be so named, what his research interests were, why he eschewed religion – and what he thought the best analogy anyone had ever made for the Higgs boson.

Keen to insist that others should get the credit for their theoretical contributions, he was constantly at pains to refer to the “so-called” Higgs boson. Not one to stand on ceremony, he even remained unfazed when IOP Publishing’s tea lady accidentally barged into the room with her trolley, unaware an interview was going on.

After the interview, we took some photographs with Higgs. He then accepted an offer from Allen to drive him up to Cotham School – on the proviso that no physics was to be discussed during the short journey through Bristol.

Less than two months later, CERN announced that the Higgs boson had been discovered. The following year, Higgs was awarded the Nobel Prize for Physics, jointly with François Englert.

I never saw Higgs again. Only now, following his death, do I realize how lucky I am to have met him, however brief that encounter was. And what a lovely hand-written letter to have as a reminder.

Shrimp-inspired nanoclusters enable multifunctional artificial vision systems

Mantis shrimp visual system and artificial nanocluster photoreceptor

Advances in artificial intelligence and autonomous systems have triggered increasing interest in artificial vision systems (AVSs) in recent years. Artificial vision allows machines to “see”, interpret and react to the world around them, much like humans do when we respond to a situation that we can see changing – a car braking in front of us when driving, for example.

These “machine eyes” capture images from the world around them using cameras and sensors. Complex computing algorithms then process these images, enabling the machines to analyse their surroundings in real time and provide a response to any changes or threats (depending upon their intended application).

AVSs have been used in many areas, including facial recognition, autonomous vehicles and visual prosthetics (artificial eyes). AVSs for autonomous vehicles and high-tech applications have become well established. However, the complex nature of the human body makes visual prosthetics more challenging, because state-of-the-art AVSs do not possess the same level of multifunctionality and self-regulation as the biological counterparts that they mimic.

Many AVSs in use today require several components to function – there aren’t any photoreceptive devices that can perform multiple functions on their own. This means that many designs are more complex than they need to be, making them less commercially feasible and harder to manufacture. Hanlin Wang, Yunqi Liu and colleagues at the Chinese Academy of Sciences are now using nanoclusters to create multifunctional photoreceptors for biological prosthetics, reporting their findings in Nature Communications.

Inspired by the mantis shrimp

The visual system of a mantis shrimp uses 16 photoreceptors to perform multiple tasks simultaneously, including colour recognition, adaptive vision and perception of circularly polarized light. With nature often able to do things that scientists could only dream of achieving on a synthetic level, biomimicry has become a popular approach. And as mantis shrimps have many desirable traits in their natural photoreceptors, researchers have attempted to mimic their properties artificially using nanoclusters.

Nanoclusters are tiny aggregates of metal atoms bound to protective ligands. This is a highly tailorable approach that gives rise to tuneable physical properties, such as discrete energy levels and sizable band gaps arising from quantum size effects. Nanoclusters also offer excellent photon-to-electron conversion, making them a promising route to artificial photoreceptor devices.

“Nanoclusters are considered to be the next-generation materials for the continuation of Moore’s Law,” Wang tells Physics World. “However, basic scientific issues such as reproducible fabrication of nanocluster-based devices and photoelectric behaviour have remained obscure and unexplored.”

An artificial nanocluster photoreceptor

Inspired by the mantis shrimp, Wang and colleagues created nanocluster photoreceptors and used them as compact, multi-task vision hardware for biological AVSs. “In this research, we present nanocluster-embedded artificial photoreceptors that combine the capability of photoadaptation and circular polarized light vision,” Wang explains.

To create the AVS, the team fabricated a wafer-scale nanocluster photoreceptor array based on a heterostructure of chiral silver nanoclusters and an organic semiconductor (pentacene). The core–shell nature of the nanoclusters allows them to act as an in-sensor charge reservoir to tune the conductance levels of the artificial photoreceptors through a light valve mechanism. This allows the photoreceptor system to determine both the wavelength and intensity of incident photons.

When interfaced with the organic semiconductor material on the array, a ligand-assisted charge transfer process takes place at the nanocluster interface. The protective ligands in the core–shell structure provide a transduction pathway that links the nanoclusters to the organic semiconductor. This femtosecond-scale process facilitates both spectral-dependent visual adaptation and circular polarization recognition.

“We have addressed the wafer-scale fabrication of a uniform interface between a nanocluster film and organic semiconductors, providing a foundation for high-density integration of artificial photoreceptors with nanoscale footprints,” says Wang.

The interface between the nanocluster and the organic semiconductor provides the adaptive vision, enabling multiple functions to be achieved with tuneable kinetics. Additionally, circular polarization information can be obtained due to the nanoclusters being chiral. As such, the team has developed nanoclusters that combine colour vision, photoadaptation and circular polarization vision into a single photodetector system.

This ability to combine multiple vision functions into a single system for biological recognition applications is a difficult feat to achieve, with previous approaches having had to rely on multiple components to do the same job as this single opto-electronic system. The team’s approach could help to build simpler and more robust vision hardware for neuromorphic devices and biological vision-related AI hardware.

“Artificial nanocluster photoreceptors perform all-in-one multiple visual functions into a single unit cell,” says Wang. “Among them, photoadaptation can be triggered and performed within 0.45 s, with its accuracy reaching 99.75%. This is the highest performance reported in the existing literature and outperforms human visual systems, which take about 1 min to adapt.”

Next, the researchers aim to push photoadaptation switching times below 0.45 s at the nanocluster/organic semiconductor interface. “In the future, we will investigate the characteristics of charge transfer dynamics and produce faster nanocluster-embedded neuromorphic systems,” Wang concludes.

Can thinking like a scientist help us tackle societal issues?

Illustration of four heads with scientific thoughts in bubbles

Everyone makes mistakes. Long before he shared the 2011 Nobel Prize for Physics for his role in demonstrating the accelerating expansion of the universe, Saul Perlmutter was a postdoc in a team of researchers who thought they had found the first evidence of an exoplanet. All their measurements pointed to a planet orbiting a pulsar – until they went back to the observatory the following year and realized that what they had measured was, in fact, background noise from another instrument in the building. They quickly retracted their original paper.

Perlmutter, now a researcher at the University of California, Berkeley, shares this anecdote in Third Millennium Thinking: Creating Sense in a World of Nonsense, which he co-wrote with the philosopher John Campbell, also at Berkeley, and psychologist Robert MacCoun, who is at Stanford University. The book describes the array of “thinking tools” that scientists use and encourages the reader to apply these in non-scientific contexts.


For example, one of the book’s five sections is devoted to the concept of probabilistic thinking: the way that scientists tend to be cautious about the statements they make because there is always a chance that they are incorrect. Perlmutter’s exoplanet-that-never-was is a case study of scientists getting it wrong and admitting their mistakes, but the authors emphasize that probabilistic thinking and other scientific thinking tools can be applied to many decisions in daily life. Throughout the book, they suggest ways in which you can evaluate facts to help you decide, say, which medical treatment to take or which local policies to support.

Many scientific standards and habits are covered, including correlation and causation; false positives and false negatives; and statistical and systematic uncertainty. Short examples apply these methods to day-to-day situations, from the mundane to the political. To illustrate statistical uncertainty, the authors use the example of a traveller weighing themselves every day in a different hotel. Each hotel’s bathroom scales will be a little bit off in a random direction (statistical uncertainty), but the average over many days will be close to the traveller’s real weight.
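A quick simulation makes the point (a toy example of my own, not one from the book – the 70 kg weight and 1.5 kg scale error are invented numbers):

```python
import random

random.seed(0)
TRUE_WEIGHT = 70.0   # the traveller's real weight in kg (invented)
SCALE_ERROR = 1.5    # typical random error of each hotel's scales in kg (invented)

# one reading per day, each from a different, slightly miscalibrated scale
readings = [TRUE_WEIGHT + random.gauss(0, SCALE_ERROR) for _ in range(30)]

print(f"single reading: {readings[0]:.1f} kg")
print(f"average of 30 readings: {sum(readings) / len(readings):.1f} kg")
# individual readings scatter by ~1.5 kg, but the average lands much closer to
# 70 kg; the statistical error of the mean shrinks roughly as 1/sqrt(N)
```

The caveat, which the book also stresses, is that averaging only washes out random errors: if every scale were miscalibrated in the same direction (a systematic uncertainty), no amount of averaging would fix it.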

The book also highlights biases that scientists and others may have and explains why it is important to validate statements. One example that illustrates this is the 1988 debunking of a French lab’s claim that water holds “memories” of molecules it has encountered before. John Maddox, the then editor of Nature, sent a team to the lab to rerun a double-anonymous experiment. The investigative team wasn’t able to repeat the findings and realized that the lab had ignored the many cases in which its experiments hadn’t worked. The debunking team also included the magician James Randi, which shows that you don’t have to be an expert in a field to see when someone has failed to account for bias and error.

Because there are many topics to cover, the examples are often short and some are never mentioned again after the narrative moves on to the next thinking tool. So while the book covers a broad range of ideas and applications, it can be difficult to stay focused. In the introduction, the authors say that you’re free to skim chapters about concepts that you already know, but where does that leave a reader for whom most of this is new?

Throughout Third Millennium Thinking, the authors say they want to give the reader the tools and confidence to ask the right questions and interpret claims. The book started as a successful course at Berkeley where Perlmutter, Campbell, MacCoun and others have been teaching students for the last decade how to apply scientific thinking to decisions about personal life or societal issues. But a reader may wonder how much of a difference they can make with these new ways of thinking.

It’s not until the last few chapters that this all falls into place. Here, the authors step back and acknowledge that we are part of a larger discussion. In this part of the book, we learn how people in groups can either combine forces or lead each other astray. The authors recount an example from their course in which they asked their students to guess how many people in their electoral district voted for a particular candidate. The average guess was close to the correct answer, but if the students discussed their guesses with each other, their estimates became less accurate.

I would have liked to be more aware of this bigger picture earlier in the book. The way that the content is organized reminded me of an academic paper, with methods in the middle and the discussion left until the end. However, it was certainly a much more entertaining read than most papers. Third Millennium Thinking is a kind of self-help book for society, and it will appeal to anyone who loves to think about thinking.

  • 2024 Hachette 320pp £22/$38.00hb

Adjuvant breast radiotherapy: KUH unlocks the clinical upsides of tangential VMAT

The radiation oncology department at Kuopio University Hospital (KUH) in eastern Finland has, for more than a decade, been treating the overwhelming majority (>98%) of its cancer patients, across diverse disease indications, using a proven combination of volumetric modulated-arc therapy (VMAT) plus daily low-dose cone-beam CT for image guidance. Zoom in a little further and it’s evident that an innovative variation on the VMAT theme – known as tangential VMAT (tVMAT) – is similarly established as the go-to treatment modality for adjuvant breast radiotherapy at KUH.

That reliance on tVMAT, which employs beam angles tangential (rather than perpendicular) to the curvature of the chest wall, is rooted in clinical benefits on several fronts. Those benefits include highly conformal dose distributions for enhanced coverage of the target volume; reduced collateral damage to normal healthy tissues and adjacent organs at risk (OARs); as well as improved treatment delivery efficiency – think streamlined treatment times and lower integral dose to the rest of the body – compared with fixed-gantry intensity-modulated radiotherapy (IMRT).

Enabling technologies, clinical efficacy

If that’s the headline, what of the back-story? The pivot to a tVMAT workflow for breast radiotherapy began in 2013, when the KUH radiation oncology team took delivery of three Elekta Infinity linacs, simultaneously installing Elekta’s Monaco treatment planning system (six workstations). The KUH treatment suite also includes an Accuray CyberKnife machine (for stereotactic radiosurgery and stereotactic body radiotherapy) and a Flexitron brachytherapy unit (used mainly for gynaecological cancers).

With an addressable regional population of 250,000, the KUH radiotherapy programme sees around 1500 new patients each year, with adjuvant radiotherapy for breast cancer comprising around one-fifth of the departmental caseload. Prior to the roll-out of the Elekta linac portfolio, KUH performed breast irradiation using a 3D conformal radiotherapy (3D CRT) field-in-field technique (with planar MV imaging for image guidance integrated on the treatment machine).

The use of 3D CRT, however, is not without its problems when it comes to whole-breast irradiation (WBI). “With the field-in-field technique, there were planning limitations for WBI related to hot and cold spots in the planning target volume [PTV],” explains Jan Seppälä, chief physicist at KUH, where he heads up a team of six medical physicists. “In some cases,” he adds, “target coverage was also compromised due to heart or lung dose constraints.”

Fast-forward and it’s clear that the wholesale shift to tVMAT with daily cone-beam CT imaging has been a game-changer for adjuvant breast radiotherapy at KUH. Although the clinical and workflow benefits of conventional VMAT techniques also accrue across prostate, head-and-neck, lung and other common disease indications, Seppälä and colleagues have made breast-cancer treatment a long-term area of study when building the evidence base for VMAT’s clinical efficacy.

“We have found that, with proper optimization constraints and beam set-up in the Monaco treatment planning system, tVMAT can reduce doses to the heart, coronary arteries and ipsilateral lung,” Seppälä explains. “The technique also enhances dose distributions greatly – reducing hotspots and improving target-volume dose coverage, while avoiding high-dose irradiation of healthy tissue as well as a low-dose bath.” All of which translates into fewer reported side-effects, including breast fibrosis, changes in breast appearance, and late pulmonary and cardiovascular complications.

Jan Seppälä

Operationally, the total treatment time for breast tVMAT – including patient set-up, cone-beam CT imaging, image matching and treatment delivery – is approximately 10 minutes without breath-hold and about 15 minutes with breath-hold. The average beam-on time is less than two minutes.

“We use daily cone-beam CT image guidance for every patient, with the imaging dose optimized to be as low as possible in each case,” notes Seppälä. The cone-beam CT highlights any breast deformations or anatomical changes during the treatment course, allowing the team to replan if there are large (>1 cm) systematic changes on the patient surface likely to affect the dose distributions.

It’s all about outcomes

Meanwhile, it’s clear that toxicity and cosmetic outcomes following breast radiotherapy have improved greatly at KUH over the past decade – evidenced in a small-scale study by Seppälä’s team and colleagues at the University of Eastern Finland. Their data, featured in a poster presentation at last year’s ESTRO Annual Meeting, provide a comparative toxicity analysis of 239 left- or right-sided breast-cancer patients, with one cohort treated with tVMAT (in 2018) and the other cohort treated with 3D CRT (in 2011).

In summary, the patients treated in 2018 with the tVMAT technique exhibited fewer acute toxicities – redness of skin, dermatitis and symptoms of hypoesthesia (numbness) – than the patients treated in 2011 with 3D CRT. Late overall toxicity was also lower, and the late cosmetic results better, in the 2018 patient group. “With tVMAT,” says Seppälä, “we have much less skin toxicity than we used to have with previous 3D CRT techniques. What we are still lacking, however, is the systematic and granular capture of patient-reported outcomes or daily images of the patient’s skin after each fraction.”

For Seppälä, comprehensive analysis of those patient-reported quality-of-life metrics is the “missing piece of the jigsaw” – and, ultimately, fundamental to continuous improvement of the tVMAT treatment programme at KUH. A case in point is the ongoing shift to ultra-hypofractionated treatment schemes in breast radiotherapy, with some KUH patients now receiving as few as five fractions of 5.2 Gy, rather than the 15 fractions of 2.67 Gy that have been the norm to date.
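For context (my arithmetic, not figures quoted by KUH), the two schedules deliver

\[
5 \times 5.2\ \text{Gy} = 26\ \text{Gy}
\qquad \text{versus} \qquad
15 \times 2.67\ \text{Gy} \approx 40\ \text{Gy}
\]

of physical dose. The shorter schedule can get away with a lower total because, under the standard linear-quadratic model of radiobiology, larger doses per fraction are biologically more potent, so the two regimens are designed to be roughly isoeffective. Whether such a large change in fraction size affects cosmetic outcomes is exactly the kind of question that patient-reported data could answer.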

To support this effort, work is under way to evaluate the clinical implementation of Elekta ONE Patient Companion, powered by Kaiku Health, a system providing patient-reported outcomes monitoring and intelligent symptom-tracking for cancer clinics.  “This software tool would enable us to capture real-world outcome data directly from patients,” says Seppälä. “Those data are key for quantifying success, such as the correlation of cosmetic outcomes with a change in fractionation scheme.”

Meanwhile, machine-learning innovation is another priority on KUH’s tVMAT development roadmap, with the medical physics team in the process of implementing AI-based dose predictions to inform treatment planning on an individualized patient basis. The driver here is the push for more unified, standardized dose distributions as well as workflow efficiencies to streamline patient throughput.

“We are doing some treatment planning automation – mainly on the optimization side,” Seppälä concludes. “The challenge is to push the optimization system to its limit to ensure low doses to critical structures like the heart and ipsilateral lung. By doing so, we can deliver at-scale enhancements to the overall quality and consistency of our treatment planning in Monaco.”


Mauro Paternostro: a vision of the quantum landscape

We are in the midst of a quantum renaissance, with researchers in academia and industry all vying to “win” the quantum computing race. The quantum marketplace is booming, with scores of companies, large and small alike, investing in this technology, backed by huge government funding across the globe.

Mauro Paternostro, a quantum physicist at the University of Palermo and Queen’s University Belfast, is an expert in quantum information processing and quantum technology. Working on the foundations of the subject, his team is doing pioneering research in cavity optomechanics, quantum communication and beyond. He is also editor in chief of the IOP Publishing journal Quantum Science and Technology.

In this wide-ranging interview, Paternostro talks to Tushna Commissariat about his views on the quantum landscape – from the “four pillars” of quantum technology and hybrid architectures to the promising marriage between quantum tech and artificial intelligence (AI). Paternostro also underlines the need for continued government funding to realize the true potential of this world-changing technology.

We’ve seen the quantum bubble blow up over the last decade, but what are the potential advantages and risks of the exponential expansion in quantum technology companies and funding around the world?

Overall, the picture is very positive. Quantum information processing needed a boost from industry, as firms can push for the more pragmatic developments that the field needs. The perspective that industry offers is helping to shape quantum technologies in a more focused manner, when it comes to overall goals. The budding, exploding market – be it in industry or academia – is great.

But, as you point out, there has been swift growth. And while that is mostly a good thing, there is also a little bit of worry that we might be creating a big bubble that will burst sooner rather than later. So I think it’s a matter of control – we do need to somewhat restrain ourselves, while allowing the research area to grow organically.

I am slightly concerned with the number of small companies that all seem to be developing their own quantum software. Their products have very little to do with true quantum algorithms and are typically classical optimization solutions – which have their own merits. But they are not necessarily what I would call a quantum framework.

On the other hand, some spin-off companies are more oriented towards the implementation of quantum processing platforms, such as quantum sensors. These are really interesting, as it’s not just quantum computation at play, but also other physical laws.

There are four pillars underpinning the developments of quantum technology: quantum computing; quantum simulation; quantum communication; and quantum sensing and metrology. And I would say that all four are developing in a very healthy way.

Quantum sensing seems to be one of the most advanced, together with communication, thanks to the maturity of the technologies they can leverage. While the involvement of industry is beneficial and promising, we should be wary of the wild speculation and “inflation” that comes from trying to jump onto a fast bus without having the full fare for the ride at hand.

And while I am often sceptical of smaller companies, you also sometimes get concerning news from the big players. For example, Chinese tech firm Alibaba had an interest in developing quantum computing platforms and solutions, until it suddenly decided to close its in-house quantum team at the end of last year, stating it would rather focus on being a leader in AI research.

Was this simply a business decision, or is Alibaba smelling something that we have not yet smelled? I guess we will have to wait and see. Overall, I think the future is bright and the involvement of industry is very good news.

There are a number of different quantum-computing technologies vying for top spot – from trapped ions and quantum dots to superconducting and photonic qubits. Which do you think is most likely to succeed?

I’m sort of an agnostic, in that I don’t believe that the first quantum device we build will be fully quantum. I know for some this is a controversial take, but it’s an opinion shared by many others in my field. What I think we will end up with is a hybrid architecture, where the best of high-performance computing (HPC) will interface with quantum-computing architectures.

Maybe these noisy intermediate-scale quantum (NISQ) architectures will be joined by a full-fledged HPC architecture that will boost their performance, or vice versa. The quantum resources put on the table by this sort of hybrid device will enhance the performance that current classical HPC can produce. I strongly believe in the feasibility of that sort of hybrid architecture – a fully quantum solution is still a long way from where we are now.

A silicon wafer covered with microchips

Also, I’m not entirely convinced that we will have the ability to manage the massive resources that would be needed to make full use of the leap in computational power that a quantum computer would offer. A medium-term goal of a hybrid HPC–quantum architecture is a much more realistic – and potentially very fruitful – target to pursue. I’m mildly optimistic that something will come up in my lifetime.

You mentioned that quantum sensors are already being developed for a wide variety of applications including healthcare, construction and even gravity measurement. What’s new and exciting in that area?

Quantum sensors are developing amazing capabilities to investigate mechanisms that so far have been elusive. Essentially, these sensors help us to better detect the potential quantum effects of forces like gravity, which many researchers in the UK have an interest in pursuing. A substantial fraction of the experimental community is pursuing these goals – with the University of Birmingham’s quantum hub leading on this front.

I don’t think anyone claims that there is a single winning experimental platform to pursue – cold atoms and optomechanics are among the most promising in that respect. But the theoretical and experimental progress that this area has achieved is very interesting.

Sensors that can probe the fundamental nature of elusive physical mechanisms will, I believe, be a key development. And then there are other sensing devices, such as accelerometers or imagers that are already pretty well established. The UK’s National Quantum Technologies Programme has already made significant advances in that regard, and the technology is available and mature enough to have a real impact.

I think industries should heavily invest in this area because, alongside communication, sensing is at the forefront of the implementations of quantum technologies at this stage.

And what about quantum communication?

Quantum communication is probably the most concrete example where academic progress has been put to work, to the benefit of industry-led targets. It’s been an absolutely superb example of what we can achieve when these two components work together.

While the progress has been fantastic, there are also controversial aspects, especially when we consider the larger geopolitical implications of a global quantum network. The issue of communication and data security will become significant, so we must carefully consider the wider implications of these technological developments. Geopolitical boundaries are continually changing, and national aims are not always aligned with scientific goals.

What are some key areas where AI and quantum technologies intersect? Where do they best help one another, and what are potential issues?

This is a very important question. Needless to say, the holy grails of the two areas are closely related – both AI and quantum computation are built on the development of new algorithms. One hears people talking about quantum machine learning (ML), or quantum AI, but that’s not what they really mean. They are not referring to specifically designed quantum algorithms for AI or ML problems. What they mean is the hybridization of classical machine learning or classical AI with quantum problems.

These solutions will depend on the field and the problem we are trying to tackle. But in general we are looking at classical techniques for processing data sets; optimizing problems; solving cost functions; and controlling, optimizing and manipulating quantum problems.

It’s very promising, as you’re putting together the best of the two worlds. From a theoretical point of view, the aim is to tackle questions at the general quantum-mechanical level that need to be addressed, and perhaps the larger and more complicated problems in terms of scale. We want to build tools at the algorithmic level that allow you to cope with the complexity of those problems in a certifiable and consolidated manner.

And the interesting thing is that experiments have started catching up with the theoretical developments. We already have a number of solutions, approaches and methodologies that have been developed in this hybrid scenario where ML and quantum information processing come together.

I hope these experiments are fully investigated in the next few years, and don’t get caught up if the AI and quantum bubble does burst. I doubt that would be the case though, because AI is here to stay, while ML is now an unmissable tool used by data analysts worldwide. If we have any ambition to scale up the complexity of the problems that we can and should tackle, then we must focus on developing these tools.

What new initiatives are going on in this area?

Earlier this year, UK Research and Innovation (UKRI) announced that it is funding nine new research hubs to “deliver revolutionary AI technologies” to tackle complex problems from healthcare to energy as well as 10 other studies to define “responsible AI”. I know that a number of these have a quantum component – especially in healthcare, where AI-based solutions are absolutely fundamental, but there may be quantum solutions as well.

So I’m very optimistic when it comes to the merger of AI and quantum tech, as long as the development of an AI framework is regulated. Right now, the European Commission is formulating the legal framework for its AI Act, which will address the risks that AI might pose, and the global role the EU hopes to play in regulating the technology. Both the UK and the US have been working on similar frameworks for a while already, so we should have some global policy and regulation formulated sooner rather than later.

As long as this development follows a regulated policy with a solid framework, AI’s interactions with quantum technologies should create a useful two-way feedback mechanism that will help both fields grow significantly.

When it comes to quantum-technology funding by governments across the global stage, what specific areas would you like to see further investment in?

My grants! But on a more serious note, government-level investment has been widespread and substantial for what is essentially still an emerging scientific field. Compared with some other areas that receive science funding, such as military or medical research, the amount of money that has been put on the plate is almost ridiculous – but it’s a very good thing for us of course. A benefit of this kind of government spending is that it forces us to form a community and come up with shared goals.

If we refer to the aforementioned four pillars, there is an underlying connection of fundamental physics and theoretical developments. Different countries have chosen one or more pillars to focus on, depending on their expertise and resources. The US is very focused on computation. The EU is more widespread and so the situation is more complex, but there is major investment in communications, as well as a growing interest in simulation, while a number of EU national strategies are also focused on sensing.

A computing laboratory with a quantum computer hanging from a metal frame and a scientist adjusting something at its base

The UK is also trying to cover the whole spectrum, while identifying some very well-defined topics, from imaging to computation, and from communication to sensing. There are countries like Finland that have a more experimental approach and are focused on superconducting architectures, as they already have huge facilities available. Singapore, on the other hand, is developing a very strong line of research in satellite-based quantum communication. For a small country, it has huge potential, in terms of both talent and resources.

So different countries have developed their own area of expertise, in an organic manner. And by doing so, we are all winning as a community – we are all benefiting from all the progress that has been made. Some baby steps, some more incremental steps, some huge quantum leaps.

I think it will be really important that governments, national and super national, realize that investment in quantum technologies should be sustained. It’s an area that needs continuous, unbroken support to deliver its lofty goals. And we, as the scientific community, must project a coherent picture with the very same set of goals, despite any differences we have. Only then will we be best placed to translate quantum technologies to life-changing realities.

As the new editor-in-chief of Quantum Science and Technology (QST), what’s your vision for the journal?

It’s a big honour, and I’m absolutely flattered, but it’s also a big endeavour, given the evolving landscape of quantum-related journals. What I want for the journal is to make sure that QST remains one of the preferred avenues for the submission of top-notch contributions. But I also want to help shape the journal’s manifesto and its goals.

My first priority as editor-in-chief has therefore been to set up an executive board that, with the support of the editorial board, will shape the scope and mission of the journal in a clear manner. That will then inform the way the journal develops over the next few years, guided by the quantum research community. In terms of the scope, I would like to see more high-quality experimental updates that push the envelope of the implementation of quantum technologies.

IOP Publishing has a transformative agreement (TA) with your institution, in terms of open-access publishing. Can you tell me about that?

I think it has been a game-changing agreement as far as the publication of our output is concerned. With the stringent criteria that the research councils have put on outputs supported by grants – from the Engineering and Physical Sciences Research Council (EPSRC) for instance – and the need for them to be fully accessible, and data to be fully available to the community, having a TA that guarantees open access is what we need. It’s great to have the peace of mind that IOP Publishing is a viable avenue for where my EPSRC-compliant outputs can be published.

Apart from funding compliance, the IOPP agreement removes the administrative burden of dealing with invoices for article publication charges (APCs), which is a big relief for scientists. I have been advocating for broadening the initiative – by establishing similar agreements with other publishing companies – but also for making sure that this is not a one-off experiment that fades away in the next year or so. It should be made systemic to the way institutions operate, not only across the UK but, as far as I’m concerned, across Europe too, and built in right from the start to the way higher-education institutions and research institutes work. Making sure there is a synergy between publishing companies and universities or research institutes is crucial.
