
Flat bands appear in buckled graphene superlattices

Simulated mountain and valley landscape

An international team led by researchers at Rutgers University in the US has found a way to create “flat” electronic bands – that is, electron states in which there is no relationship between the electrons’ energy and velocity – in graphene simply by causing the material to buckle. This new strategy could be used to produce so-called “superlattice” systems that serve as platforms for exploring the collective behaviour of electrons in strongly interacting quantum systems. Such behaviour is known to be linked to high-temperature superconductivity, but a complete understanding is still lacking.

Flat bands are especially interesting for physicists because electrons become “dispersionless” in these bands – that is, their kinetic energy is suppressed. As the electrons slow down almost to a halt, their effective mass approaches infinity, leading to exotic topological phenomena as well as strongly correlated states of matter associated with high-temperature superconductivity, magnetism and other quantum properties of solids.

A fine-tuning challenge

Flat bands are, however, difficult to engineer, and researchers have only observed them in a handful of physical systems. An example is twisted bilayer graphene, which is created by placing two sheets of graphene on top of each other and slightly misaligning them. Under these conditions, the atoms in the graphene sheets form a quasi-periodic moiré pattern with a period that is determined by the relative twist between the sheets’ crystallographic axes, rather than the spacing between individual atoms.
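For readers who want the geometry, the moiré period follows from a standard small-angle relation between the graphene lattice constant and the twist angle; the relation and the number below are a back-of-envelope illustration, not figures quoted in the article:

```latex
% Moire period of twisted bilayer graphene (standard small-angle relation).
% a ~ 0.246 nm is the graphene lattice constant, theta the relative twist angle.
\lambda_{\mathrm{moir\acute{e}}} \;=\; \frac{a}{2\sin(\theta/2)}
\;\approx\; \frac{0.246\ \mathrm{nm}}{2\sin(0.55^{\circ})}
\;\approx\; 13\ \mathrm{nm}
\qquad \text{at } \theta = 1.1^{\circ}
```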

The result is a “superlattice” in which the material’s unit cell (the repeating arrangement of carbon atoms in its crystal structure) expands to a huge extent – as if the 2D crystal were being stretched 100 times in all directions. This stretching dramatically changes the material’s interactions and properties. Notably, it undergoes a transition from an insulator to a superconductor at a “magic” twist angle of 1.1° and a temperature of 1.7 K.

“Magic angle” graphene has been studied extensively since its discovery in 2018. However, because the “magic” effect disappears at slightly larger or smaller twists, very accurate fine-tuning of the material is required to achieve the desired electronic band structure.

Pseudo-magnetic fields

A team led by Eva Andrei of the Department of Physics and Astronomy at Rutgers has now developed an alternative means of producing flat electronic bands. She and her colleagues began by placing graphene on an atomically flat substrate of niobium diselenide or hexagonal boron nitride. Using scanning tunnelling microscope topography and computer simulations, they found that the graphene sheet buckled when cooled to 4 degrees above absolute zero. This buckling is driven by compressive strain within the graphene sample, which develops when ridges that formed during the sample’s fabrication collapse as it cools.

As the graphene buckles, a “mountain and valley” landscape forms that electrons in the material experience as pseudo-magnetic fields. “These fields are an electronic illusion, but they act as real magnetic fields,” Andrei explains. The result, she says, is a dramatic change in the material’s electronic properties – including the emergence of flat bands.
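The link between a strain field and a pseudo-magnetic field can be written down explicitly. The standard tight-binding expression below is not given in the article and is quoted up to convention-dependent signs and prefactors; it shows how a non-uniform strain acts on graphene’s electrons like a vector potential:

```latex
% Strain-induced pseudo-gauge field in graphene (standard tight-binding result).
% epsilon_ij is the strain tensor, a the carbon-carbon distance (~0.142 nm) and
% beta = -d(ln t)/d(ln a) ~ 2-3 the hopping-strain coupling; the resulting
% pseudo-field has opposite sign in the two valleys.
\mathbf{A} \;=\; \frac{\hbar\beta}{2ea}
\begin{pmatrix} \varepsilon_{xx}-\varepsilon_{yy} \\ -2\,\varepsilon_{xy} \end{pmatrix},
\qquad
B_{\mathrm{pseudo}} \;=\; \partial_x A_y - \partial_y A_x
```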

Unlike earlier realizations of pseudo-magnetic fields, which were mostly local in extent, the buckling transition observed in this work produces a global change in the electronic structure of graphene, with a sequence of flat bands spread throughout the material.

According to the team, the new technique could thus become a general strategy for creating other superlattice systems and using them to explore interaction phenomena characteristic of flat bands.

The researchers, who report their work in Nature, say they would now like to develop ways of engineering buckled 2D materials with novel electronic and mechanical properties for use in applications such as nanorobotics and quantum computing.

Molecule’s electronic structure is simulated on a quantum computer

Simulating chemical processes is one of the most promising applications of quantum computers, but problems with noise have prevented nascent quantum systems from outperforming conventional computers on such tasks. Now, researchers at Google have taken a major step towards this goal by using the most powerful quantum computer yet built to successfully implement a protocol for calculating the electronic structure of a molecule. The results may form a blueprint for complex, useful calculations on quantum computers affected by noise.

In October 2019, Google announced to great fanfare that its 53-qubit Sycamore computer had achieved quantum advantage – meaning it had solved at least one problem much faster than any conventional supercomputer could. However, Google researchers openly acknowledged that the problem Sycamore solved (sampling the outcome of a random quantum circuit) is easy for a quantum computer but difficult for a conventional supercomputer – and has little practical use.

What researchers would really like to do is use quantum computers to solve useful problems more effectively than is possible with conventional computers: “Sycamore is extremely programmable and, in principle, you really can run any algorithm on it…In this sense, it’s a universal quantum computer,” explains team member Ryan Babbush of Google Research. “However, there’s a heavy caveat: there’s still noise affecting the device and as a result we’re still limited in the size of circuit we can implement.” Such noise, which results from classical sources such as thermal interference, can destroy the fragile superpositions crucial to quantum computation: “We can implement a completely universal circuit before the noise catches up and eventually destroys the computation,” says Babbush.

Hartree-Fock procedure

In the new research, the team used Sycamore to implement the Hartree-Fock procedure – a well-established method for calculating the electronic structure of molecular systems – and applied it to the isomerization of diazene. This is significantly more complex than previous simulations on quantum computers, increasing the maximum number of qubits used from six to 12.

Each successive qubit brings additional potential for noise: “You need to perform some type of error mitigation,” says Google Research’s Nick Rubin. “The extreme way – and this is something that the Google team is building towards – is to build an error-corrected quantum computer, which involves turning a quantum computation into a digital quantum computation and correcting any errors that occur along the way.”

In the absence of this, however, Rubin developed a mathematical error mitigation technique that allowed noise to be identified and discarded. Using this technique and others, Rubin explains, the researchers could “lower the error rates of Sycamore through calibration and then apply algorithmic error mitigation to see the – in this case – chemistry at high fidelity.” Their results matched those from a conventional computer.
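As a flavour of how symmetry-based error mitigation works in practice, the minimal sketch below post-selects simulated measurement shots on electron-number conservation and discards the rest. It is a generic Python illustration with made-up noise parameters, not the purification procedure the Google team actually used:

```python
import numpy as np

rng = np.random.default_rng(0)

n_qubits = 4      # one qubit per spin-orbital (illustrative, not Sycamore's layout)
n_electrons = 2   # particle number the ideal state conserves
n_shots = 10_000

# Toy "ideal" Hartree-Fock occupation |1100>: the two lowest orbitals filled.
ideal = np.array([1, 1, 0, 0])

# Simulate readout noise: each bit flips independently with probability p_flip.
p_flip = 0.03
flips = rng.random((n_shots, n_qubits)) < p_flip
noisy_shots = np.logical_xor(ideal, flips).astype(int)

# Post-select on particle number: keep only shots with the right electron count.
keep = noisy_shots.sum(axis=1) == n_electrons
clean_shots = noisy_shots[keep]

print(f"kept {keep.mean():.1%} of shots")
print("raw  occupation of orbital 0:", noisy_shots[:, 0].mean())
print("kept occupation of orbital 0:", clean_shots[:, 0].mean())  # closer to the ideal value 1
```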

More complex simulations

Ironically, the structures predicted by the simple Hartree-Fock procedure for this molecule do not agree with measurements in the lab, but Babbush explains that the “notable deficiencies” of the Hartree-Fock procedure for structure prediction are irrelevant. “We certainly do want to build more complex simulations on top of it,” he says, “but even those will use the sub-routines we’ve developed here as a stepping stone.” Whether or not, with further improvements, it will prove possible to solve classically intractable problems in quantum chemistry on a so-called “noisy intermediate-scale quantum computer” (NISQ) using quantum error mitigation remains unknown: “Error mitigation will only take you so far,” says Babbush. “At some point all you’re going to be getting is noise, so it doesn’t matter whether you can tell whether it’s noise or not. To get past that point, you really need error correction.”

“This work pushes the needle in quantum computing for chemistry,” says quantum chemist and computer scientist Alán Aspuru-Guzik of the University of Toronto in Canada. “It shows in an honest way what quantum computers can do today in a device. Compare this to what they could do a couple of years ago, and the progress is extraordinary. It is also worth saying that there is still a large open space for hardware improvements and clever algorithmic tricks.”

“We’re in this era of NISQ,” says quantum information scientist Barry Sanders at Canada’s University of Calgary. “I appreciate what they’re doing now, which is to say ‘let’s forget about universal quantum computing and let’s just push our technology to answer relevant problems’.”

The research is described in Science.

 

Dust buster for the Moon, curious pattern appears on Canadian beach, borderline collider

The Moon is a dusty place, and this could be a real problem for future colonists. “Lunar dust sticks to all kinds of surfaces — spacesuits, solar panels, helmets — and it can damage equipment,” explains Xu Wang, who is a research associate in the Laboratory for Atmospheric and Space Physics at the University of Colorado Boulder.

To solve this problem, Wang and colleagues in Boulder developed an electron gun that could be used to disperse lunar dust. Why not simply use a feather duster? Moon dust is sticky because it acquires electric charge by being bombarded by radiation from the Sun. Firing electrons at the dust particles gives them even more charge, causing the particles to repel each other and disperse.

The team tested their lunar “dust buster” in a vacuum chamber using dust particles similar to those found on the Moon. “It literally jumps off,” says Benjamin Farr, who completed the work as an undergraduate student in physics at Boulder. You can read more about the research in CU Boulder Today.

Folks on Lina Island off the coast of British Columbia have been scratching their heads over a strange pattern of crushed seashells that has appeared on a local beach (see above figure). The white shells form a rectangular lattice on the beach and local councillor Billy Yovanovich says that the pattern is a natural phenomenon. However, government scientist Richard Thomson disagrees and says that the pattern was made by humans – possibly as a prank.

So, is this a practical joke, or an example of an emergent phenomenon driven by the action of waves and currents? I’m no expert, but I’m with Yovanovich. You can read more about these patterns on the CBC website.

In 1977 the Nobel laureate Leon Lederman published a tongue-in-cheek proposal to build a collider using existing subway tunnels in New York City. The city was suffering a financial crisis and Lederman reckoned physicists could acquire the tunnels for a knock-down price.

Hot political issue

Lederman’s proposal has inspired Caltech physicist David Hitlin to propose building another collider to address a hot political issue of today – building a wall on the US–Mexican border. In “The Very Big ILC”, Hitlin describes how long, straight sections of the border between the states of Sonora and Arizona could be blocked by a huge linear particle collider.

Hitlin’s collider would be 300 km long and could achieve a centre-of-mass energy of 5 TeV. In contrast, the proposed International Linear Collider in Japan is a mere 31 km long with an initial energy of 250 GeV. What’s more, with the addition of a bit of razor wire on top, Hitlin says the structure would meet Donald Trump’s specifications for a border wall.

And what would Hitlin call the facility? The TrumpILC, of course.

Residual gas analysers: it’s the small details that deliver success in UHV/XHV systems  

Big science, it seems, is often about the small details – and doubly so when it comes to the ultrahigh-vacuum (UHV) and extreme-high-vacuum (XHV) systems that play a core enabling role in large-scale research facilities such as the Large Hadron Collider (LHC) at CERN, the European Spallation Source (ESS) in Sweden, and the ITER nuclear fusion reactor in France.

Achieving and maintaining UHV/XHV conditions at scale – broadly the pressure regime from 10⁻⁷ mbar down to 10⁻¹² mbar and lower – is a complex engineering challenge that would simply not be possible without the online, real-time diagnostic capabilities of compact and robust quadrupole mass spectrometers known as residual gas analysers (RGAs).

These workhorse instruments effectively “police” the UHV/XHV environment at a granular level – ensuring safe and reliable operation of large-scale research facilities by monitoring vacuum quality (detecting impurities at the sub-ppm level), providing in-situ leak detection and checking the integrity of vacuum seals and feed-throughs.

Vacuum versatility

One of the leading suppliers of RGAs to the big-science community is UK-based manufacturer Hiden Analytical which, as well as the aforementioned facilities, services a posse of high-profile global customers with its RGA offering – among them Brookhaven National Laboratory (BNL), SpaceX and NASA in the US; the European Space Agency and the European Gravitational Observatory; as well as the Diamond Light Source and the Culham Centre for Fusion Energy (CCFE) in the UK.

“We have RGAs deployed at all these big-science sites and more,” says Peter Hatton, managing director of Hiden. “Our instruments are used not only for routine UHV/XHV monitoring, but also to support the advanced research projects that underpin all large-scale science facilities – whether that’s surface analysis via secondary-ion mass spectrometry, UHV thermally programmed desorption studies or all manner of gas analysis applications.”

Peter Hatton

Hiden, for its part, has more than 35 years’ experience as a supplier of application-specific quadrupole mass spectrometers, including RGAs. As such, the company’s in-house manufacturing model combines state-of-the-art cleaning, metrology and assembly techniques with the firmware and software needed to give its analysers the sensitivity, stability and dynamic range for diverse research and industry applications.

“The collaborative nature of our business is a big differentiator,” Hatton explains. “We listen and learn from all our customers, who gain from having personal support from the engineers directly involved in the development, manufacture and test of their product.”

A case study in this regard is the National Synchrotron Light Source II (NSLS-II) at BNL in Upton, New York. Hiden was selected as a preferred supplier for NSLS-II in 2011 after extensive evaluation of its RGA product line for UHV/XHV applications – specifically the HAL 201 RC, which offers a minimum detectable partial pressure of 5×10⁻¹⁴ mbar. Since then, it’s been a productive relationship in terms of sales and product innovation. For starters, the scale and complexity of projects under way at NSLS-II necessitate a networked vacuum diagnostics capability and, all told, there are now more than 100 Hiden RGAs integrated with the laboratory’s central control systems via EPICS software drivers.
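To connect that detection limit to the “sub-ppm” impurity monitoring mentioned earlier, a quick back-of-envelope ratio can be sketched as below; the 10⁻⁷ mbar operating pressure is my assumption (the top of the UHV range quoted above), not a figure from Hiden:

```python
# Back-of-envelope check: a 5e-14 mbar detection limit expressed as an impurity
# fraction at an assumed operating pressure of 1e-7 mbar (upper end of UHV).
total_pressure = 1e-7     # mbar, assumed operating pressure
min_detectable = 5e-14    # mbar, quoted minimum detectable partial pressure (HAL 201 RC)
fraction_ppm = min_detectable / total_pressure * 1e6
print(f"detection limit ~ {fraction_ppm:.1f} ppm")   # ~0.5 ppm, i.e. sub-ppm
```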

“Supplying projects like NSLS-II also requires full cognisance of some pretty harsh operating environments,” adds Hatton. “With this in mind, we have developed a ‘radiation-hard’ RGA (the HAL 101X) that can operate with no smart electronics within 100 m of the analyser location – a tough ask given that everything is micro-controlled these days.”

Collaboration equals innovation

In Europe, meanwhile, Hiden’s eagerness to learn from its customers is equally prominent – perhaps most notably CERN’s vacuum, surfaces and coatings group. With operational responsibility for the particle physics laboratory’s extensive vacuum infrastructure, this team of more than 60 scientists and engineers also manages a network of 200+ RGAs across the CERN site in Geneva – approximately a quarter of those analysers being supplied by Hiden.

There are three main applications for RGAs at CERN: commissioning of UHV/XHV systems in the laboratory’s particle accelerators and detectors – monitoring of possible contamination or leaks, for example, between experimental runs of the LHC; pass/fail acceptance testing of vacuum components and subsystems – collimators, magnets, pumps and the like – prior to deployment in the accelerators and detectors; and a range of offline R&D activities, including low-temperature UHV/XHV characterization and desorption studies of advanced engineering materials.

“What we appreciate from Hiden is their responsiveness – we always get a quick answer on any after-sales issues regarding hardware or software,” explains Sophie Meunier, senior vacuum engineer with responsibility for RGA technologies at CERN. “They know their products inside out,” she adds, “because they handle all aspects of the manufacturing and software development in-house.”

That forensic product know-how and attention to detail proved to be essential in addressing CERN’s stringent outgassing requirements for its UHV/XHV systems – and in particular the hydrogen outgassing rate of the RGA ion source (which must be less than 1×10mbar·l/s two hours after switch-on). “The outgassing rate of the ion source is a critical success factor in RGAs destined for UHV/XHV applications,” explains Meunier. “In simple terms, we want to measure the partial pressure of our UHV/XHV systems – not the outgassing of the RGA ion source.”

Achieving this figure of merit – and delivering an ion-source solution that meets CERN’s RGA specifications for the long term – was very much a collaborative endeavour between customer and vendor. The joint testing and optimization effort covered the ion-source components, enhanced source geometries and the evaluation of materials compatible with vacuum-firing to 900 °C. All of which ultimately enabled Hiden to implement its own custom manufacturing set-up, pretreating RGAs with specialist cleaning, vacuum-firing and bakeout procedures prior to deployment at CERN.

That customer-centric approach also informs Hiden’s software development. Consider the RGA user interface and data visualization. Meunier and her colleagues at CERN have a requirement for the laboratory’s RGAs to provide a read-out of ion current as the primary measurand (rather than pressure, which requires a conversion factor). “When we asked, Hiden delivered, incorporating our request into the latest version of its RGA software MASsoft Professional,” Meunier notes. What’s more, MASsoft provides additional flexibility for big-science end-users by allowing instrument control via USB 2.0, RS232 or Ethernet data protocols.

Hardware and software innovation notwithstanding, Hiden’s RGAs must also measure up against another unforgiving benchmark – reliability – if they are to enable big-science facilities to minimize vacuum downtime and ultimately reduce their operating costs. “Reliability and longevity are non-negotiable,” Hatton concludes. “That’s why, as well as a three-year warranty, all our products are supported by a lifetime application support guarantee. Worth noting also that we continue to support the first instruments that we manufactured over 35 years ago.”

Small details, it seems, really do go a long way in big science.

Hacking a path to innovation

Academics are often derided for their lack of entrepreneurial skills. Partly that’s because the curricula and training for undergraduate and postgraduate degrees are generally designed for those continuing in academia. Most students, however, prefer to explore other, non-academic career paths, in which the learning curve can be steep. Even students who do PhDs with an industry focus, for example at the UK’s centres for doctoral training, will face unusual challenges.

With such limited exposure outside the academic bubble, what else can be done for students who are interested in what industry may have to offer? What opportunities are there for students to meet and talk with people from industry and other backgrounds? One exciting way to address such issues, I’ve discovered, is for institutions to run “hackathons”. These were traditionally challenges in which small teams of computer programmers and software developers raced to build a working piece of software. Those kinds of hackathons are still going strong, but they are now beginning to be used elsewhere with broader remits.

These hackathons usually run non-stop over a couple of days and can bring together people from a variety of backgrounds including science, arts, business and design to form interdisciplinary teams. In exposing researchers to an entrepreneurial, commercial and business environment in a very short space of time, they allow scientists to apply the skills they have developed during their degree in a different environment. Hackathons also highlight the importance of teamwork, resilience and communication skills.

Late last year, a group of PhD students from the University of Manchester hosted the first Graphene Hackathon at the Graphene Engineering Innovation Centre. It was a 24-hour event where 10 interdisciplinary teams, each of between four and six people, designed, prototyped and pitched a commercial product using conductive graphene inks in front of a panel for the chance to win investment and cash prizes. The event required not only prototyping a technology that was worthy of investment but also developing business cases to outline the value of the products.

In developing the graphene-ink products, the teams faced some unexpected challenges including malfunctioning Raspberry Pis, screen-printing problems and flimsy final prototypes. During the event, industry experts such as strategy consultants, business development managers and patent attorneys were on hand to advise the teams as they worked throughout the night.

The hackathon was a fantastic way for participants to apply what they had learnt and put it into practice. I was involved with a team that created “BackUP” – an array of thin graphene strain sensors that can be printed onto a fabric seat cover for lorry, bus and truck drivers. It worked by monitoring the pressure on different parts of the seat in real time to indicate bad posture. If a driver was leaning on one side, for example, then it would send real-time feedback to remind them to improve their posture. The product, which came second in the competition, was designed to increase the comfort and wellbeing of drivers and reduce the number of days they are forced to take off work due to debilitating back pain.

Innovation and skills

Events like the Graphene Hackathon foster innovation by challenging participants in a competitive environment, boosting the likelihood of conceptualizing potentially disruptive technologies. For me, the experience highlighted the importance of a “fail-fast” approach that differs from the slower pace of academic research, in which projects often run for months and even years. Fail-fast is often used as a mantra within start-ups because it emphasizes determining the long-term viability of a product or strategy as early as possible. If something is predicted to fail, it’s important to turn to a new idea without wasting more precious time and resources.

Indeed, the experience allowed me to apply the skills I developed from my physics degree to a commercial setting and made me realize the importance of thinking about fundamental research with a broader horizon. PhD students are in a unique place to develop their research and find commercial avenues for its applications. Hackathons can help widen the perspectives of participants, especially scientists who are looking to start spin-out companies. The skills needed to build a successful business are not too dissimilar from the attributes needed to become a successful scientist as both stand on the foundations of perseverance, problem solving and presentation skills.

I believe hackathons could be run in many other areas of research such as energy harvesting from renewable resources, applications using recycled materials, and waste recovery. A hackathon that marries software and hardware is a particularly innovative way for early-career scientists who want to break out into industry-based roles to gain valuable first-hand experience in an emulated start-up environment. Here’s to more hacks in the future.

Moth-eye nanostructures make good anti-icing coatings

Researchers in Vietnam have developed a transparent nanostructure with anti-icing properties that could keep objects such as aircraft wings and wind turbines ice-free in cold, damp conditions. The material, which is inspired by the structure of moth eyes, consists of a quartz substrate coated with a monolayer of nano-sized polystyrene beads. The ensemble is then covered with a flat, insulating layer of paraffin.

On cold days and at high altitudes, water vapour in the air transforms directly into solid ice, forming a thin coating on exposed surfaces. This coating reduces the lift of aircraft wings, blocks moving parts in ships and turbines, and sometimes causes serious motor vehicle accidents as well as damage to infrastructure such as electricity transmission systems.

There are two main approaches to improving the anti-icing properties of surfaces in these conditions. The first, active approach is to remove the ice using an external source of energy, such as heat. The second, passive approach uses physicochemical methods to modify the surface – with superhydrophobic materials, for example – so that it repels water.

SLIPS: an advanced anti-icing strategy

More recently, a new variant of the passive strategy has emerged: applying a coating to icing-prone objects that forms a defect-free liquid interface with the ice. Such coatings are known as slippery liquid-infused porous surfaces (SLIPS), and one way of making them is to cover a porous structure with a low-surface-tension lubricant that is immiscible in water, resists humidity and self-heals after ice treatment.

While the SLIPS studied to date have produced some good anti-icing results, none of them can prevent icing permanently because their lubricant layer degrades through evaporation and during de-icing. Physicists Nguyen Ba Duc of Tan Trao University and Nguyen Thanh Binh of Thai Nguyen University of Education sought to avoid this problem by creating a SLIPS based on a nanostructure that mimics the structure of moth eyes, which are inherently ice-phobic.

In their experiments, Ba Duc and Thanh Binh deposited polystyrene nanobeads onto a quartz substrate and shaped them using a plasma etching process. This produced a uniform structure of protrusions shaped like truncated cones with heights of 500 nm and top diameters of around 70 nm, as revealed by scanning electron microscopy measurements. The researchers then added paraffin wax to an n-hexane solution and coated the top of their nanostructure with the mixture. As a control experiment, they also applied the same thickness of paraffin wax/n-hexane coating to a bare quartz substrate.

Measuring adhesion forces

Next, the researchers attached their samples to a thermoelectric cooling module and gently placed a 5 μl droplet of deionized water onto the sample surface. After cooling the system down to -20°C, they used a load cell to measure how strongly an ice drop adhered to the freezing surface. They did this by moving the cell at a speed of 50 μm/s, which slowly pushed the ice droplet sideways until it detached completely. The force exerted on the cell could then be computed, and the researchers took the maximum force recorded to be the droplet’s adhesive strength.
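To give a rough idea of how such a load-cell measurement is turned into an adhesion figure, the sketch below takes the peak of a force trace and, as an extra illustrative step, divides it by an assumed droplet contact area. The numbers are invented and the contact radius is my assumption; this is not the authors’ analysis code:

```python
import numpy as np

# Hypothetical load-cell readings (N) recorded while the cell pushes the frozen
# 5 ul droplet sideways at 50 um/s; the real trace is not given in the article.
force_trace = np.array([0.00, 0.02, 0.05, 0.09, 0.12, 0.10, 0.01, 0.00])

adhesion_force = force_trace.max()        # peak force taken as the adhesion force (N)

# Assumed contact radius of the frozen droplet (~1 mm), for illustration only.
contact_radius = 1.0e-3                   # m
contact_area = np.pi * contact_radius**2  # m^2

adhesion_strength = adhesion_force / contact_area   # Pa
print(f"adhesion strength ~ {adhesion_strength / 1e3:.0f} kPa")   # ~38 kPa with these inputs
```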

Ba Duc and Thanh Binh also used a high-speed camera to record the icing process and determine the time it took for the entire water droplet to freeze solid. Another camera monitored changes to the interface between the water droplet and the surface. Finally, the researchers performed a “freezing rain” test in which they sprayed cold water droplets (maintained at temperatures of 0.5°C) ranging from 5 μl to 50 μl in size onto surfaces at 0°C, -5°C, -10°C and -15°C.

The results for the nanostructured surface confirmed its outstanding anti-icing performance relative to the control surface. The researchers also say that the hydrophobic nature of the paraffin layer proved key to the performance of the new structure in both the static (water droplet) and dynamic (freezing rain) experiments.

Delayed heat transfer

The researchers chose paraffin as their coating material because it is water-repellent and has a low thermal conductivity. When the paraffin coats the top of the nanostructured quartz substrate, the substrate becomes isolated from its environment, preventing heat from being transferred away. Air pockets trapped inside the nanostructure contribute to delayed heat transfer too, Ba Duc and Thanh Binh explain, adding that this extra insulation also increases the freezing time of any attached water droplets.

As well as anti-icing applications in industry and transport, the superhydrophobic, paraffin-coated nanostructured material might be suitable for applications such as eyeglasses, Ba Duc says. This is because it is highly transparent and has anti-reflective properties, just like moth eyes. The researchers also report that the material is mechanically stable, and that the paraffin-coated layer can easily be recovered after tests simply by heating it.

The new anti-icing structure is detailed in AIP Advances.

Why that massive black-hole merger is important, battling quantum decoherence on two fronts

This week the LIGO-Virgo collaboration announced the detection of gravitational waves from the most massive black-hole merger ever seen.  This podcast features an interview with LIGO–Virgo member and University of Portsmouth astrophysicist Laura Nuttall, who explains why scientists are puzzling over the origin of one of the black holes involved in the merger. She also looks forward to the next observing run of the LIGO–Virgo detectors and what astrophysical events they might capture.

The podcast also includes a round-up of what is new in physics this week, including two very different ways of dealing with decoherence in quantum computers, a new way of using X-rays to personalize the treatment of a serious eye disease and a breakthrough in our understanding of how bubbles pop.

A brief history of the Doomsday Clock: from nuclear risk to pandemics and climate change

Video transcript

We’re just 100 seconds from midnight on the Doomsday Clock – a metaphor created by the Bulletin of the Atomic Scientists in 1947.

The clock indicates how near we are to a humanity-ending catastrophe. We’ve never been this close to midnight before.

What’s more, the clock was last set on 23 January 2020, before the COVID-19 outbreak became a global pandemic. With the year we’ve had so far, who would bet against the clock ticking even closer to midnight in 2021?

But what is the Doomsday Clock and how is its time determined?

The clock emerged from the concerns of the physics community immediately after the Second World War.

Many scientists and engineers had taken part in the Manhattan Project, which developed the atomic bombs that the United States dropped on Hiroshima and Nagasaki in August 1945.

Just a few months after the war finished, two University of Chicago physicists – Eugene Rabinowitch and Hyman Goldsmith – launched the Bulletin of the Atomic Scientists.

This journal aimed to encourage scientists to engage in political issues. The war had made it painfully clear that even theoretical physics was no longer an abstract intellectual exercise, somehow divorced from the real world.

Part of the journal’s remit was to consider future dangers. Or as Rabinowitch poetically put it: “to manage the dangerous presents of Pandora’s box of modern science”.

It was a desire to communicate these risks to the public that led to the Doomsday Clock being set up in 1947.

The idea emerged from the cover of the June edition that year, an artwork created by Martyl Langsdorf. Langsdorf placed the first clock at seven minutes to midnight for purely aesthetic reasons, but its subsequent positions were set by Rabinowitch.

When he died in 1973, a science and security board took over that responsibility, in consultation with the journal’s board of sponsors.

Over the years, the clock hands ticked further or closer to midnight depending on prevailing nuclear concerns.

Prior to this year, the closest the Doomsday Clock had been to midnight was in 1953. Then, it was set to 11:58 after both the US and the Soviet Union carried out hydrogen-bomb tests the previous year.

Its furthest distance from midnight came in 1991 when the clock was moved back to 11:43. That optimism followed the end of the Cold War and the signing of the Strategic Arms Reduction Treaty, which led to deep cuts in US and Soviet nuclear-weapon arsenals.

Another key change occurred in 2007 when the Doomsday Clock started factoring in the risk of climate change. Since then it has also started considering new disruptive technologies, including artificial intelligence and gene editing.

Each year, the Bulletin’s panel of experts meet in Chicago in November to vigorously debate whether the time should be reset – with the decision confirmed and announced in January.

The decision to set the 2020 clock so close to midnight was based on a combination of factors. They included the continuing existence of nuclear arsenals coupled with the lapse of several major arms-control treaties, and America’s decision to quit the Iran nuclear deal.

Given the clock’s gloomy connotations, it’s not surprising that it has attracted some criticism over the years.

Commentators have accused the Bulletin of everything from inconsistency and historical pessimism to a lack of clarity around its methodology and political bias. Though it should be noted that the clock has moved forward and back during both Democratic and Republican administrations in the US.

Whatever people think of the clock, it has consistently met its goal of triggering public debate about the role of science in society.

There’s no doubt the COVID-19 pandemic will play into the 2021 decision. Progress towards vaccines will surely be crucial, as might the tensions surrounding the US presidential election in November.

Find out more about the Doomsday clock in the September 2020 issue of Physics World.

Tinted solar panels allow plants to grow efficiently on ‘agrivoltaic’ farms

Tinted solar panels could allow land to be used to grow crops and generate electricity simultaneously, with financial gains, according to researchers in the UK and Italy. The orange solar panels absorb some wavelengths of light, while allowing those that are best for plant growth to pass through. The team even claim that their setup can produce crops offering superior nutrition.

Agrivoltaics uses land to simultaneously grow crops and produce electricity from solar panels. Usually opaque or neutral semi-transparent solar panels are used. Now, Paolo Bombelli, a biochemist at the University of Cambridge, and his colleagues used orange-tinted, semi-transparent solar panels to see if selective use of different wavelengths of light for plant growth and electricity production could offer additional benefits. The solar panels allow orange and red light to pass through, as these wavelengths are the most suitable for plant growth, while absorbing blue and green light to generate electricity.

The researchers grew basil and spinach in greenhouses in northern Italy with the glass roofs replaced with semi-transparent, orange-tinted solar panels. Even though the yield of both crops was reduced compared with plants grown in standard greenhouses, the agrivoltaic system offered a financial advantage over standard growing conditions, they report in the journal Advanced Energy Materials.

Financial gains

Overall, the spinach and the electricity produced in the agrivoltaic greenhouses were worth about 35% more than a spinach crop grown in a standard greenhouse, while basil and the electricity generated offered a financial gain of around 2.5%. This was based on the wholesale global market price of the crops and the local feed-in tariff for selling electricity to the Italian national grid. According to the researchers, the substantial difference in financial gains occurs because basil sells for about five times the price of spinach. In other words, such an agrivoltaic system offers more financial reward when used with lower-value crops.
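The comparison boils down to simple arithmetic: the value of the reduced crop plus the electricity sold, relative to the value of the full crop alone. The sketch below illustrates that calculation with placeholder prices, yields and generation figures, since the article does not reproduce the study’s actual inputs:

```python
def relative_gain(yield_fraction, crop_price, crop_yield_kg,
                  electricity_kwh, feed_in_tariff):
    """Value of (reduced crop + electricity) relative to the full crop alone."""
    baseline = crop_yield_kg * crop_price
    agrivoltaic = (yield_fraction * crop_yield_kg * crop_price
                   + electricity_kwh * feed_in_tariff)
    return agrivoltaic / baseline - 1.0

# Placeholder numbers for a spinach-like case (26% yield loss, as reported above);
# the prices, yield and electricity output are illustrative, not the study's data.
gain = relative_gain(yield_fraction=0.74, crop_price=1.0, crop_yield_kg=1000,
                     electricity_kwh=6000, feed_in_tariff=0.10)
print(f"relative gain ~ {gain:+.0%}")   # ~ +34% with these made-up inputs
```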

The yields of basil grown under the orange-tinted solar panels dropped by 15% compared to those in the standard greenhouses, while spinach yields fell by 26%. However, the researchers noticed some interesting differences between the agrivoltaic and traditionally grown plants. The plants grown beneath the solar panels demonstrated a more efficient photosynthetic use of light, and they produced more tissue above ground and less below ground. This resulted in differences in plant morphology, with the basil producing larger leaves and the spinach longer stems.

Additionally, laboratory tests found that both basil and spinach plants grown under the solar panels contained more protein than those grown in standard greenhouses. The researchers suggest that the changes in morphology, redirection of above and below ground metabolic energy, and the increases in protein could be adaptations to improve photosynthesis under reduced light conditions. They add that the accumulation of more protein is interesting “in view of the need for alternative sustainable protein sources to substitute animal proteins, for example, in plant-based artificial meats”.

More experiments needed

Bombelli told Physics World that the technique might be applicable in locations besides Italy’s Mediterranean climate, depending on the chosen conditions. He says, “It depends on the percentage of the land cover by solar panel and the type of crop chosen”, adding that the only way to know for sure is to conduct additional experimental work. Indeed, the team is now hoping to run a trial in the UK.

Earlier this year Brendan O’Connor and colleagues at North Carolina State University published a modelling study in the journal Joule looking at how much energy could be produced with the addition of the solar cells on greenhouses. Like Bombelli’s work, the study analysed solar panels that harvest energy from the wavelengths of light that plants do not use for photosynthesis.

O’Connor explains: “We found that there are greater opportunities in hot and moderate climates. Yet, the heating energy requirements in colder climates result in a significant cost to greenhouse growers, and offsetting those energy costs is critical. If the solar cells can be designed to minimize losses in plant yield, there should be benefits across different climate zones.”

O’Connor says that the latest study is impressive. “While there were some losses observed in crop biomass, they demonstrate a net economic benefit of the system, which is a very exciting result for the concept,” he explains. O’Connor adds that research on integrating solar cells with greenhouse structures is growing rapidly as “there is a need to reimagine producing food to meet human needs in the most environmentally friendly and sustainable manner possible”.

Upright treatment could increase patient comfort, reduce proton therapy costs

As new and improved radiotherapy technologies emerge, treatment conformality – the level of dose delivered to the tumour target and not to the rest of the body – increases alongside. In photon-based radiotherapy, for example, the development of intensity-modulated radiotherapy (IMRT) and the recent introduction of MR-guided systems have ramped up this conformality. In proton therapy, meanwhile, the shift to pencil-beam scanning had a similar beneficial effect.

But since the early 2000s, progress in proton therapy has stalled somewhat. In particular, there’s still a general lack of the high-quality image guidance that’s available for photons. “Protons are up to three times more expensive in terms of cost, time and manpower, but reimbursement is only 1.3 times; and that is the reason that the proton space has flatlined,” says Niek Schreuder, president of proton therapy at Leo Cancer Care. “There’s just no R&D money in existing proton therapy facilities.”

Schreuder points out that various technologies with the potential to enhance proton and other particle therapies are under development and have been shown to be feasible. These include, for example, prompt gamma measurements for range verification, proton radiography, dual-energy CT and optical guidance. But the large size of particle therapy gantries and the lack of space around the isocentre make installing image-guidance technologies extremely difficult.

Upright approach

The solution, says Schreuder, is upright radiotherapy, where the patient is treated in an upright position and rotated in front of a static treatment beam. Upright treatment systems have lower installation costs and space requirements, freeing up resources for technology developments. “It also provides more comfortable treatment positions for patients with pulmonary problems, pain or claustrophobia issues,” he adds.

As such, Leo Cancer Care is developing a range of products to enable this novel treatment approach, including an automated patient positioning system. “All radiation treatments include two parallel processes – patient positioning and beam delivery,” explains Schreuder. “95% of the treatment time involves patient positioning, but up to now, in proton therapy, less than 10% of the system cost goes into patient positioning.”

To address this shortfall, the company is developing software that incorporates all of the imaging technologies used to position the patient into one user interface, with a treatment room control system focused on the systems needed to position the patient. “Of course, beam control is super important,” Schreuder adds, “but we argue that beam control has been developed to such a level of maturity that, as a company, we don’t have to worry much about this part.”

The positioning system can place the patient in appropriate postures according to the cancer type being treated. This includes, for example, seated vertical for head-and-neck treatments, with gravity pulling the shoulders naturally downward to better expose the head nodes; leaning slightly backward for lung and liver radiotherapy and slightly forward for breast cases; or perched, supported by the back of the thigh and a knee rest, for prostate and pelvic treatments.

Previously, all of these cases were treated the same, with the patient lying down, explains CEO Stephen Towe. “What we’re doing, for the first time, is using patient posture as a degree-of-freedom to truly optimize treatment delivery,” he says. “This is the first time that the radiation therapy problem is being addressed by starting at what the patient needs to be treated optimally and more comfortably.”

The company has also developed a vertical dual-energy CT scanner coordinated with the upright positioning system that scans the patient in any of these orientations and can perform a dual-energy scan in less than one minute. The idea is that the two devices will be placed together in a fixed-beam treatment room, within an existing proton or carbon-ion therapy facility, for example. Treatment is delivered by rotating the patient in the beam, rather than rotating a large gantry.

A major advantage of this approach is the reduced shielding requirements. If radiation is being delivered through 360°, this requires 360° of shielding. If the radiation is only sent in one direction, it’s possible to significantly reduce the room size and costs. It also reduces design complexity, which is particularly vital for developing markets, where expertise to design radiation shielding rooms may not be available.

Schreuder points out that, compared with a classical rotating proton gantry, an upright fixed-beam setup requires around 19 times less shielded volume. Even with a superconducting gantry, the shielding requirements are about 10 times less for the fixed proton beam. Ultimately, this could allow installation of a proton therapy system in an existing linac vault.

“The weight of a radiotherapy system ranges from six up to 600 tonnes for a carbon ion facility,” adds Towe. “Trying to rotate that mass around the patient with an accuracy of less than 1 mm makes no sense. It’s like changing a lightbulb by rotating your house.”

Clinical benefits

Rock Mackie, board chairman and co-founder of Leo Cancer Care, says that it’s not just a matter of cost, but that upright radiotherapy also confers some important clinical benefits. A study looking at thoracic cancer patients, for instance, showed that lung volume was on average 25% greater with upright rather than horizontal positioning, which reduces breathing motion. This is because the lung is more inflated as gravity pulls the diaphragm down. This could prove particularly useful for scanned proton treatments where motion can cause interplay effects, explains Mackie.

Reducing patient motion could also remove the need for motion management, making the treatment much faster. And where breath holds are required, these are easier for patients to perform when upright. A more inflated lung also reduces the dose to normal lung for proton therapy.

The team is also studying the impact of positioning on prostate motion. They saw that when a patient is upright, the level of bladder filling does not impact prostate position as it does when they are lying down, which is a huge benefit. “We started by focusing on making patients more comfortable, and this resulted in something that’s really clinically significant, the reduction in motion that comes from being positioned upright,” says Towe.

Leo Cancer Care is now working with proton therapy vendors to integrate the Leo technologies into existing fixed beam rooms and with one vendor to install its first system in a new proton therapy facility that will start construction at the end of this year. The company is also building a compact self-shielded linac for upright photon-based radiotherapy, which incorporates the positioning device and CT scanner. All systems should achieve FDA approval and CE marking by the end of 2021.

“There’s a problem that needs to be solved right now in the proton therapy space by reducing cost,” adds Schreuder. “Our approach could lead to more proton therapy systems across the world. And as a result of the huge reduction in the total facility costs, we believe that these facilities will have the financial means to support R&D projects to further develop many other aspects of proton beam delivery with endless benefits to patients.”
