
Autonomous discovery of battery electrolytes with robotic experimentation and machine learning


Innovations in batteries take years to formulate and commercialize, requiring extensive experimentation during the design and optimization phases. We approached the design and selection of a battery electrolyte through a black-box optimization algorithm integrated directly into a robotic test stand. Here we report the discovery of a novel battery electrolyte in an experiment guided entirely by machine-learning software, without human intervention.

Motivated by the recent trend toward super-concentrated aqueous electrolytes for high-performance batteries, we utilize Dragonfly – a Bayesian machine-learning software package – to search mixtures of commonly used lithium and sodium salts for super-concentrated aqueous electrolytes with wide electrochemical stability windows.
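Dragonfly implements this kind of black-box search with Gaussian-process-based Bayesian optimization. The toy loop below sketches the same closed-loop idea in plain numpy, with a mock "measurement" standing in for the robotic test stand – the objective function, kernel settings and acquisition rule are all illustrative assumptions, not Dragonfly's actual internals:

```python
import numpy as np

def measure_stability_window(x):
    """Mock stand-in for the robotic test stand: returns an electrochemical
    stability window (in volts) for a salt-mixture parameter x in [0, 1]."""
    return 2.0 + 0.8 * np.sin(6 * x) * np.exp(-2 * (x - 0.6) ** 2)

def gp_posterior(X, y, Xs, length=0.15, noise=1e-4):
    """Posterior mean and std of a Gaussian process with an RBF kernel."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(X, Xs)
    alpha = np.linalg.solve(K, y - y.mean())
    mu = y.mean() + Ks.T @ alpha
    v = np.linalg.solve(K, Ks)
    var = 1.0 - np.einsum('ij,ij->j', Ks, v)
    return mu, np.sqrt(np.clip(var, 0.0, None))

grid = np.linspace(0.0, 1.0, 201)
X = np.array([0.1, 0.9])                         # two seed experiments
y = np.array([measure_stability_window(x) for x in X])

for _ in range(10):                              # ten automated "experiments"
    mu, sd = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(mu + 2.0 * sd)]      # upper-confidence-bound rule
    X = np.append(X, x_next)
    y = np.append(y, measure_stability_window(x_next))

best_x = grid[np.argmax(gp_posterior(X, y, grid)[0])]
```

Each pass through the loop mirrors one robot cycle: fit a surrogate model to all measurements so far, pick the mixture that best balances predicted performance against uncertainty, then run that experiment and feed the result back in.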

This webinar, presented by Venkat Viswanathan, will help the audience to:

  • Learn about the importance of robotic experimentation
  • Learn about machine-learning guided design of experiments
  • Learn about the frontier of remote experimentation


Venkat Viswanathan is an Associate Professor of Mechanical Engineering at Carnegie Mellon University. He received his PhD from Stanford University working on lithium-air batteries. His current research focus is on understanding and developing novel electrochemical devices for energy storage and utilization.


Solar flares are predicted by new model

A model that predicts when and where large solar flares will occur has been developed by a team led by Kanya Kusano at Japan’s Nagoya University. Their technique works by monitoring regions of high magnetic activity on the Sun’s surface and focusses on the instabilities triggered by reconnecting magnetic fields. Called the “κ-scheme”, their model could soon be part of an early warning system for incoming solar storms.

Solar flares are bright flashes on the Sun’s surface and are among the most dramatic and fascinating events in the solar system. Although the conditions that trigger them are still unknown, flares are often associated with the “active regions” close to visible sunspots. These regions contain strong magnetic fields that store vast amounts of energy. When the topologies of these fields change suddenly, this energy is violently released, often resulting in powerful bursts of X-rays, plasma and energetic particles.

Known as coronal mass ejections, these bursts can trigger powerful solar storms if they interact with Earth’s upper atmosphere – risking the safety of astronauts, spacecraft and satellites, as well as electrical grids and radio communications on Earth. It is critical, therefore, that we can predict precisely when and where solar flares will occur. Currently, however, early warning systems are limited in their efficacy because they rely on empirical models that cannot fully capture the complex, multi-scale processes associated with solar flare formation.

Double-arc loop

Kusano’s team has taken a new approach based around the process of “double-arc instability”. In their κ-scheme model, two surface regions with opposite magnetic flux are connected by two current-carrying loops of magnetic field lines. Due to shearing, these loops become crossed and reconnect with each other, forming a single, double-arc loop. This field line then moves upwards as the instability grows, allowing further, smaller pairs of loops to reconnect underneath it. Over time, this creates a positive feedback loop that ultimately releases vast amounts of energy.

To test the κ-scheme, Kusano and colleagues used the model to analyse 205 active regions on the Sun that were monitored by NASA’s Solar Dynamics Observatory between 2006 and 2019. Overall, seven of these regions were responsible for solar flares powerful enough to trigger long-lasting storms on Earth. By monitoring the location and time evolution of each region, the κ-scheme accurately predicted when most of these flares would occur, up to 24 h in advance.

The model failed to predict just two flares, which came from one specific active region that produced large flares without any accompanying mass ejections. Kusano’s team now hopes to improve the κ-scheme’s predictions using upcoming observations from the 4 m Daniel K Inouye Solar Telescope, which began operation in December 2019. The instrument will measure the Sun’s magnetic field structures and dynamics with unprecedented resolution, potentially allowing the team to produce far better forecasts of when and where flares will occur.

The research is described in Science.

Novel X-ray technology depicts real-time airflow through the lungs

Freda Werdiger

The ability to quantify and localize spatial variations in lung function could help doctors more effectively diagnose, monitor and treat many respiratory diseases. A multidisciplinary team of researchers in Australia has developed a novel tool for measuring regional lung function and used it to detect and characterize a disease in mice similar to cystic fibrosis (CF).

CF is a hereditary disease that causes the body to make abnormally thick and sticky mucus. This mucus can hinder breathing and lead to severe lung infections and permanent damage. CF progressively reduces quality-of-life and ultimately leads to premature death. Lung damage caused by CF is often heterogeneous, or non-uniform, so local treatments targeting patches of damaged tissue can slow the progression of the disease to preserve quality-of-life and lengthen overall survival.

The best method for assessing lung function consists of measuring how much air a patient can inhale and exhale, in addition to how quickly this volume of air is exhaled – a technique called spirometry. This information is then used to identify breathing patterns commonly associated with respiratory conditions, like CF. Critically, spirometry only assesses the global health of the entire lung and cannot localize subtle, non-uniform deficiencies.
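Spirometry results are usually reduced to a few global indices, such as FEV1 (the volume exhaled in the first second) and FVC (the total forced vital capacity); a low FEV1/FVC ratio flags an obstructive breathing pattern. The sketch below uses the common ~0.70 screening threshold purely for illustration – it is not clinical guidance:

```python
def interpret_spirometry(fev1_l, fvc_l, ratio_threshold=0.70):
    """Classify a spirometry result from the FEV1/FVC ratio.

    A ratio below ~0.70 is a widely used screen for an obstructive
    pattern; the threshold here is illustrative, not clinical guidance.
    """
    ratio = fev1_l / fvc_l
    pattern = "obstructive" if ratio < ratio_threshold else "non-obstructive"
    return ratio, pattern

ratio, pattern = interpret_spirometry(2.1, 4.0)   # low ratio -> obstructive
```

Note that this single number summarizes the whole lung at once – it is exactly the kind of global measure that cannot say *where* in the lung the deficit lies.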

Recently, Freda Werdiger from Monash University’s Department of Mechanical and Aerospace Engineering led a team of biomedical engineers, physicists, mechanical engineers and respiratory physicians from Adelaide Women’s and Children’s Hospital and the University of Adelaide to overcome these limitations. This collaborative team applied X-ray velocimetry (XV) to non-invasively generate high-definition and sensitive images of the real-time airflow through the lungs of mice.

X-ray velocimetry

XV is a combination of high-speed imaging and post-processing analysis that produces a detailed ventilation map of the lungs.

To achieve high-speed image acquisition, Werdiger and her team utilized propagation-based phase-contrast X-ray imaging (PCXI). Unlike conventional radiography, which relies solely on X-ray absorption, PCXI additionally leverages the diffraction of X-rays at material interfaces.

Propagation-based imaging

Placing the detector further from the sample than is typical for radiography allows diffracted X-rays to interfere with the un-diffracted beam. This produces an interference pattern that highlights structural boundaries, enhancing image contrast and enabling high-resolution images to be acquired in a very short time.

When combined with tomography methods, PCXI can rapidly generate detailed, three-dimensional images of fine lung structures.

The post-processing analysis for XV uses the multiple tomographic PCXI images acquired throughout the breathing cycle. The researchers used a computational method known as particle image velocimetry to calculate displacement vectors between consecutive images and determine the three-dimensional speed and direction of lung motion. This provided an entirely non-invasive way to calculate the volume of air that flows through the individual airways of the lungs.
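Particle image velocimetry finds each displacement vector by cross-correlating corresponding interrogation windows in successive frames and locating the correlation peak. A minimal two-dimensional sketch on synthetic data (not the team’s actual pipeline) illustrates the idea:

```python
import numpy as np

rng = np.random.default_rng(0)
frame1 = rng.random((64, 64))                 # synthetic speckle pattern
shift = (3, -2)                               # known displacement (rows, cols)
frame2 = np.roll(frame1, shift, axis=(0, 1))  # second frame: pattern moved

# Circular cross-correlation via FFT (fine for this wraparound demo)
f1 = np.fft.fft2(frame1 - frame1.mean())
f2 = np.fft.fft2(frame2 - frame2.mean())
corr = np.fft.ifft2(f1.conj() * f2).real

# The correlation peak sits at the displacement (modulo the frame size)
peak = np.unravel_index(np.argmax(corr), corr.shape)
dy = peak[0] if peak[0] <= 32 else peak[0] - 64
dx = peak[1] if peak[1] <= 32 else peak[1] - 64
```

In a real XV analysis this is done for many small windows across many frames, yielding a dense field of local motion vectors rather than a single displacement.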

Characterization of CF-like disease in mice

The researchers tested the feasibility of using XV to acquire reliable and quantitative measures of respiratory function in a cohort of mice with CF-like lung disease and their healthy littermates. Their results were recently published in Scientific Reports.

First, they generated maps of regional lung expansion for both healthy and diseased mice. These maps clearly showed the presence and location of confined areas of airflow deficits in the diseased animals, a characteristic of the patchy nature of CF.

XV imaging

Next, the researchers developed a method to quantify the distribution of disease in individual mice. Their novel quantification correlated well with measurements by the conventional forced oscillation technique, while offering a higher level of specificity.

“This study shows how researchers from many different backgrounds can come together in collaboration and work together to improve the lives of people living with this terrible disease,” says Werdiger. “I hope to see XV being used to diagnose and monitor many different diseases,” she added, a wish that is likely to be realised. With its recent commercialization by Andreas Fouras of 4DMedical, this technology is well positioned to assist in global research and clinical care that could improve the length and quality-of-lives of people with CF and other respiratory diseases.

Superconductivity theory comes a step closer

How do materials that are normally insulators manage to conduct electricity without resistance? Physicists have been debating this basic question about cuprates – ceramic compounds made up of layers of copper and oxygen atoms, interleaved with atoms of other elements – for more than three decades and have yet to come up with a satisfactory answer. Now, however, researchers in the US have put forward a model that at least provides clues as to what such an answer might look like, based on studies of so-called “overdoped” cuprates.

In 1986, Georg Bednorz and Alex Müller discovered that, by tweaking cuprates’ chemical composition, they could make these normally insulating materials superconduct. What is more, the superconducting transition temperatures of the doped cuprates far exceeded those of previously recognized metallic superconductors – meaning that the established Bardeen–Cooper–Schrieffer (BCS) theory of low-temperature superconductors must be incomplete.

This finding led to much excitement, and subsequent studies succeeded in pushing the cuprates’ superconducting transition temperature up to around 150K. More recently, other materials – hydrides subject to very high pressure – have been found to superconduct at even higher temperatures. Yet the dream of room-temperature superconductivity remains elusive, and – frustratingly – physicists still haven’t agreed on a theory to explain the source of cuprates’ superconductivity.

Jumping electrons

Multiple studies have shown that cuprates superconduct when their electrons jump from one copper atom to another. Because electrons strongly repel each other, however, they usually exist in a static formation, with one electron sitting immobile on each atom. Undoped, therefore, cuprates remain in an insulating, antiferromagnetic state known as a Mott insulator, with neighbouring electrons having opposite magnetic spins.

When a cuprate is doped with atoms of other elements such as lanthanum, strontium or yttrium, some of its electrons are replaced with holes, freeing the remaining electrons to flow without resistance. Too much doping, though, and the effect reverses: beyond a specific concentration of dopant atoms, the cuprate’s transition temperature drops until it eventually becomes a metal. The latest research focuses on these “overdoped” cuprates, putting forward a model that aims to reconcile some apparently contradictory experimental results.

Contradictory behaviour

According to BCS theory, metals become superconducting when electrons partially overcome their mutual repulsion by creating what are known as Cooper pairs. These pairs form thanks to vibrations in the metallic crystal lattice: when one electron draws slower-moving ions towards it as it passes through the lattice, it generates a small concentration of positive charge that then pulls another electron in behind it.

Since electrons are fermions, they obey the Pauli exclusion principle, forming a Fermi gas in which the system’s energy levels fill from the bottom up with one electron per lattice site and spin. The boundary between the highest occupied energy level and the next is known as the Fermi surface, and the electrons that form Cooper pairs are those nearest this surface.

In a metallic superconductor, all electrons contribute to superconductivity, not just those bound up as Cooper pairs. In practice, this means that the density of superconducting electrons should be similar to the overall electron density. However, in 2016 Ivan Božović and colleagues at the Brookhaven National Laboratory in the US showed that this is not the case for overdoped cuprates. Instead, after devising a new way of preparing overdoped copper oxides (which are chemically unstable), they found that the density of superconducting electrons in these materials is actually far lower than would be expected from BCS theory.

Explaining the disparity

The new model seeks to explain this disparity. In doing so, it takes inspiration from work carried out by Japanese physicists Yasuhiro Hatsugai and Mahito Kohmoto. In the early 1990s, Hatsugai and Kohmoto proposed a way of overcoming the apparent contradiction between the statistics of a Fermi gas (which stipulate one electron per lattice site and spin) and a Mott insulator (which allow just one electron at each site). Their solution was to introduce what is, in effect, a new repulsive interaction that prohibits two electrons with opposite spins from occupying the same momentum state.

Philip Phillips and colleagues at the University of Illinois at Urbana-Champaign have now revised this model by adding a further interaction that draws electrons together to create Cooper pairs. As they report in Nature Physics, this model includes a Fermi surface, but also a much-reduced density of superfluid electrons compared with a normal Fermi gas – bringing it more in line with the observations made by Božović and colleagues at Brookhaven.

Jan Zaanen of the University of Leiden in the Netherlands, who was not involved in the present work, points out that the Urbana-Champaign team’s modelled density falls off more gradually with increased doping than was observed in the Brookhaven experiment. Nevertheless, he believes that the new scheme provides a “proof of principle” for showing how a non-Fermi liquid could have a Fermi surface.

Since both Hatsugai and Kohmoto’s model and the Phillips group’s adaptation posit new interactions between electrons, while also describing those electrons as waves rather than particles, Zaanen acknowledges that it is hard to envisage them as real physical descriptions of what is taking place. However, he points out that a similar problem occurs in the well-established theory of Lev Landau that describes helium-3 and other systems in terms of strongly interacting fermions.

As Zaanen explains, Landau’s scheme relies on emergent physics involving highly collective quasiparticles. A similar logic, he concludes, may be needed to understand overdoped cuprates. “Although nonsense on the microscopic scale,” he says, “the Hatsugai-Kohmoto interactions may spring into existence in the collectivization process to become eventually literal on the macroscopic scale”.

NASCAR: the science of racing safely

Ryan Newman must have been feeling pretty good as he rounded turn four on the last lap of the Daytona 500 on 16 February this year. The 42-year-old NASCAR driver was not only poised to break a 104-race winless streak, he was about to do it at the season’s most prestigious race.

Then everything changed.

Newman’s car was hit from behind and turned. It slammed into the outside track wall so hard that it flipped over, then slid – upside down – back into traffic, where an oncoming car smashed it into the air. After landing, Newman’s battered #6 car slid a couple of hundred metres on its roof before finally coming to a stop.

No NASCAR driver has died or been permanently injured in any of its top three series since 2001. As Newman was helicoptered off by emergency personnel, I couldn’t help but fear that this streak was about to end. But 42 hours later, Ryan Newman walked out of the hospital. It wasn’t until NASCAR cleared him to race again at the end of April that he revealed he remembers almost nothing about the crash. Three months and one day after his accident, Newman finished 15th in the first race since the coronavirus shut down NASCAR in March.

The risk in racing

The National Association for Stock Car Auto Racing – NASCAR – grew out of the illegal liquor trade in the south-east US. Moonshine runners who had souped up their “stock” cars to outrun the law started racing each other. Today, NASCAR’s top level, the NASCAR Cup Series, runs 36 races each season at ovals and a few road courses. The shortest tracks are about 800 m and the longest 4.1 km, with race lengths ranging from 500 to 965 km. The racecars themselves have gone through six generations of development, each featuring major changes and improvements from its predecessor. However, since its founding in 1947, some 32 NASCAR drivers have died during racing, qualifying, practice and testing in the top three series – 28 in the Cup Series alone.

In the past, NASCAR was mostly reactive when it came to safety, and some of the most vocal opposition to safety innovations came from drivers themselves. That changed after four deaths over a 10-month period in 2000–2001. The string of tragedies started with Adam Petty, the fourth generation of a legendary NASCAR family, and ended with Dale Earnhardt’s death at the 2001 Daytona 500. As NASCAR then started developing its fifth-generation racecar following Earnhardt’s death, the newly created NASCAR R&D Center was charged with making safety a top priority.

Nowadays, Newman – a Purdue University-educated engineer – is one of NASCAR’s most vocal safety advocates, especially when it comes to superspeedways like Daytona. The 4 km tri-oval (a cross between a triangle and an oval) has long straightaways and 31° banked turns that make it one of the fastest tracks in NASCAR. In fact, it’s a little too fast. At high speed and high yaw angle (the angle between the front of the car and the direction the car is heading), the laws of aerodynamics make stock cars behave more like aeroplanes than automobiles. Over the years, eight NASCAR Cup drivers have died at Daytona, including Earnhardt – more than at any other track.

After Bobby Allison’s car became airborne and almost went over the catchfence at Daytona’s sister track, Talladega Superspeedway, in 1987, NASCAR mandated “restrictor plates” at both tracks. These limit airflow into the engine, which in turn restricts how much fuel can combust, therefore capping the car’s speed. But even with the engine throttled back to about 410 kW (550 horsepower), the cars routinely reach speeds of 320 km/h (~200 mph).

When packs of cars travel centimetres from each other at speeds above 300 km/h, even one tiny misjudgement can set off a massive chain-reaction crash

Yet, this is still not fast enough for racers. The only way to get more speed out of a car with a restricted engine is to follow another so closely that air flows over both cars as if they were one. The decrease in net drag from this “drafting” lets two cars move 5–8 km/h (3–5 mph) faster than either car could alone. But when packs of cars travel centimetres from each other at speeds high enough to traverse a football field in the blink of an eye, even one tiny misjudgement can set off a massive chain-reaction crash.

Racecar crashes are more dangerous than street-car accidents because racecars have so much more kinetic energy. A typical passenger car going 110 km/h (~70 mph) has 0.5 MJ of kinetic energy. A NASCAR racecar at top speed carries 12 times that – about the energy stored in 1.4 kg of TNT.

When a racecar stops, all this kinetic energy must be converted to other forms of energy. This happens over a timescale of seconds when a car comes in for a pitstop. Kinetic energy changes into heat (e.g. in tyres and brakes), sound (squealing brakes and screeching tyres) and light (glowing brake rotors). In a crash, energy is converted over a much shorter period of time, which leads to higher peak forces. In addition to heat, light and sound, energy may go into deforming the car, or producing rotational motion like spinning or flipping. Managing that energy is the key to keeping drivers safe.
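These figures follow directly from KE = ½mv². A quick check in Python – the masses (1100 kg for the street car, 1600 kg for the racecar) and the ~4.6 MJ/kg energy density of TNT are assumed round numbers for illustration:

```python
def kinetic_energy_mj(mass_kg, speed_kmh):
    """Kinetic energy in megajoules for a vehicle of given mass and speed."""
    v = speed_kmh / 3.6                  # convert km/h to m/s
    return 0.5 * mass_kg * v ** 2 / 1e6

street = kinetic_energy_mj(1100, 110)    # light passenger car at ~70 mph
race = kinetic_energy_mj(1600, 320)      # NASCAR racecar near top speed
tnt_kg = race / 4.6                      # TNT releases roughly 4.6 MJ/kg
```

With these assumptions the street car carries about 0.5 MJ, the racecar roughly 12 times as much, equivalent to around 1.4 kg of TNT – matching the comparison above.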

A science experiment on wheels

When NASCAR started, racecars were made by putting roll cages inside ordinary road vehicles. As speeds increased and aerodynamics became more important, it was easier (and safer) to build the car around the roll cage. The fifth-generation racecar marked the first time NASCAR sent a computer-aided design file of the chassis to teams. While each team was allowed to individually develop features such as suspension set-up, NASCAR specified the dimensions, positions and wall thicknesses of every single tube in the chassis to ensure safety standards were kept across all teams. Each of the couple of dozen chassis a team makes over the course of a season is inspected by NASCAR using digital indexing arms and 3D laser scanning. Those that pass are certified with tamperproof radio-frequency identification (RFID) tags that are checked each time the car is presented for competition.

The strongest tubes in the chassis are closest to the driver, allowing the parts of the car furthest away to crush first. The front section is designed to push the engine downward instead of into the driver’s compartment in the event of a collision. The four horizontal doorbars on each side are staggered so that the top bar crushes first, then the next one down and so on, and they’re covered by a metal anti-intrusion plate.

The car’s “greenhouse” (the windscreen, side and rear windows, roof, and the structure supporting it all) poses a more formidable challenge. The driver needs a clear field of vision and an ability to escape the car quickly in the event of fire. That requires a design providing maximum strength with a minimum number of structural components. The windows themselves are made of high-strength laminated polycarbonate. While they can fracture, they are very hard to break, and because there is a layer of polymer film between the two sections, and adhesive mylar tear-offs on the front, any broken pieces can’t go flying too far.

Ryan Newman 2020 crash and the Newman bar

An “Earnhardt Bar” was mandated after a 1996 Talladega crash in which another car’s nose crashed through Earnhardt’s windscreen and broke his sternum. This bar runs vertically down the centre of the windscreen, preventing anything large – like the nose of another car – from getting into the driver’s cockpit. The current Gen-6 car, introduced in 2013, also features a “Newman Bar” that provides extra roof reinforcement. The bar was named after Newman, who was vocal in calling for stronger roofs after going airborne twice in one year at Talladega. It then helped save his life in the 2020 Daytona crash. The Gen-6 design also enlarged the greenhouse to put more distance between the roof and the driver’s head, and the driver’s seat was moved 5 cm away from the door for extra protection against T-bone crashes, in which the nose of one car perpendicularly impacts the driver’s-side door of the other.

NASCAR chassis are built from magnetic steel rather than materials like titanium aluminides, to keep costs down, but there are plenty of hi-tech materials protecting the driver. The thermoplastic composite Tegris (made by the US materials manufacturer Milliken & Co.) was first used for the newly introduced “splitter” on the fifth-generation car. Jutting out from the bottom of the front bumper, the device “splits” the air, creating higher pressure on top than on the bottom. The increased force on the car’s front end provides more friction – or “grip” – between front tyres and the track. A spoiler accomplishes the same task in the rear.

Tegris starts out as a cold-drawn polypropylene (PP) tape yarn that is then woven into sheets. Unlike carbon-fibre composites, Tegris doesn’t require resin. The drawing process produces a highly oriented PP core surrounded by an amorphous coating. Because the amorphous material melts at a lower temperature than crystalline PP, the amorphous sections fuse together when the fabric is heated under pressure. Although Tegris has about 70% the strength of carbon fibre, it is more impact-resistant, lighter and about a tenth of the price. A sheet of Tegris on the driver’s-side door over the chassis frame prevents anything from getting through the doorbars and into the cockpit.

Before the exterior sheet metal goes on, blocks of energy-absorbing IMPAXX – a highly engineered, extruded thermoplastic foam – are fitted over the doors. In contrast to squishy foams that extend the time of collision and decrease the peak force, IMPAXX is rigid. Rather than temporarily storing energy, the stiff foam deforms, absorbing energy equal to the applied force multiplied by the foam’s displacement. The energy used to crush the foam is energy that never reaches the driver.

The final layer of protection

Everything inside the car is also optimized to manage kinetic energy. In fact, today’s aluminium or carbon-fibre driver-containment seats look more like something you’d find on a rocket than in a car. Custom-poured resilient-foam inserts conform to the driver’s body and padded arms wrap around the driver’s shoulders, pelvis and head to minimize side-to-side and backward motion.

One thing seats can’t protect against, however, is forward motion. NASCAR requires a six-point driver restraint harness, with wider belts than on a street car to distribute force better. One polyester belt runs over each shoulder, another two hold the pelvis, while two further belts go around the legs to prevent “submarining” – the driver slipping under the belt in an impact. Racing restraint belts are sturdier than passenger-car belts because they must tolerate higher forces, but they still stretch a little in an impact to lengthen the time over which the driver comes to a stop.

safety seat

In the 1990s restraint belts had improved to the point where they held drivers’ bodies tightly to their seats during a crash, but nothing held their heads. A typical human head weighs around 4.5 kg. Combine that with a 1.3 kg helmet, and in a 40g collision (a deceleration of 392 m/s²) that head can snap forward at an acceleration of 107g (almost 1050 m/s²) – more than enough to break the driver’s neck. Basilar skull fractures – caused by a driver’s head whipping forward – became one of the most common causes of on-track deaths, including that of Earnhardt.
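Those accelerations can be sanity-checked with g = 9.8 m/s² and Newton’s second law. The neck-load estimate below is an illustrative calculation, not a figure from crash testing:

```python
g = 9.8                                  # m/s^2

car_decel = 40 * g                       # the quoted 40g crash pulse
head_accel = 107 * g                     # quoted peak head acceleration
head_helmet_kg = 4.5 + 1.3               # head plus helmet mass

# Force (kN) needed to decelerate the head and helmet at that rate
neck_force_kn = head_helmet_kg * head_accel / 1000
```

The 40g pulse works out to the quoted 392 m/s², the 107g peak to almost 1050 m/s², and the implied load on the neck is around 6 kN – far beyond what the cervical spine can safely carry.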

Since 2001 NASCAR has mandated the use of head-and-neck restraints. These devices consist of a harness worn around the chest or on the shoulders, with tethers that connect to the helmet. When the car stops suddenly, the tethers stop the head from snapping forward and instead slow it down gradually.

Solving one problem, however, often uncovers others. In the last few years, Dale Earnhardt Jr – the late Earnhardt’s son, who retired from NASCAR racing in 2017 – has raised awareness of crash-induced concussions, which has caused many drivers to re-examine their helmet choice. In fact, the 2020 Daytona 500 was only the second race in which Ryan Newman was using a new type of helmet. His head gear came from Arai, a company specializing in the more demanding needs of motorcycle racers, who have only their helmet between them and the track.

Helmets must serve a range of purposes: they need to be fireproof, cushioned to minimize forces on the head, and hard enough to prevent anything from piercing or crushing them. They also need to be as light as possible. Arai uses a proprietary combination of fibres and resin to create a fibreglass that is 30% stronger than traditional fibreglass yet weighs just 0.5–0.6 kg, rather than the 1.3–1.5 kg of a typical motorsports helmet.

Newman’s helmet also incorporated a layer of Zylon (a thermoset liquid-crystalline polyoxazole) around the crown area. Zylon, which has a tensile strength 1.6 times that of Kevlar, is just starting to appear in NASCAR, although Formula 1 has used Zylon tethers since 2001 to prevent wheels that come off cars in crashes from becoming projectiles. Indeed, Formula 1 also requires Zylon reinforcement in all helmets after a 700 g spring came off another car and hit driver Felipe Massa’s helmet in 2009, penetrating the headgear and fracturing his skull.

Newman and his team made his car as safe as possible. But there’s one more element that played a big role in helping him survive the accident.

SAFER tracks

Originally, concrete racetrack walls and the wire catchfences that followed were designed to keep cars out of the grandstands rather than to protect drivers. But by 1998, a staggering 41 drivers had died at Indianapolis Motor Speedway (which is not just used for NASCAR) through collisions with walls and other cars – some of the latter caused by track walls redirecting the car back into the traffic. Tony George, then-president of IndyCar (another US racing series) and owner of the speedway, set out to make the track safer.

The probability of two cars crashing into the same spot within moments of each other on a motorway is low, so barriers that cushion an impact by breaking or permanently deforming are sufficient. But they won’t work on a racetrack, where a dozen cars may be involved in a single accident. In addition, a racetrack barrier must be quickly repairable (and quickly cleaned up) so the race can continue.

The first instinct was to make the walls softer. Many tracks still use bundled-up old tyres, which work well for lower-energy impacts but not for the high-speed collisions at Indianapolis. In 1998 IndyCar tested 40 cm-diameter, high-density polyethylene (HDPE) tubes that were placed in front of the speedway’s existing concrete walls and covered by inch-thick overlapping HDPE plates. Known as the Polyethylene Energy Dissipation System (PEDS), it allowed the car’s kinetic energy to move the plates and deform the cylinders. Unfortunately, although the PEDS successfully decreased the accelerations drivers experienced when hitting the wall, it scattered debris all over the track. The HDPE also tended to “grab” a car when it hit, stopping it instead of slowing it down.

The SAFER barrier

IndyCar engaged Dean Sicking (an engineer and inventor who was then at the University of Nebraska) to redesign the PEDS, but he realized the problem wasn’t the concept, it was the material. So instead of using plastic, Sicking proposed steel. After overcoming plenty of scepticism that a harder material would provide better protection – and with funding from IndyCar – he started work on the Steel And Foam Energy Reducing (SAFER) barrier. When NASCAR joined the project a few years later, he had to modify the design because the two types of cars carried different amounts of kinetic energy. Any barrier had to work for both.

The track-facing part of a SAFER barrier has five hollow steel tubes, each with a square cross section 20 cm on a side and a 0.5 cm-thick wall, stacked one atop another and stitch-welded together. High-strength nylon straps connect the impact surface to the track’s existing concrete wall. The space between the two walls is filled with 50 cm-thick wedges of foam, with the wedges’ narrow ends closest to the track.

When a car hits the SAFER barrier, its kinetic energy is used to move the massive steel wall and crush the foam wedges. The wedge shape helps the wall respond to the different kinetic energy scales of different cars. Indy officials routinely saw 100g and greater peak accelerations when cars hit concrete walls. The SAFER barrier brought most hits down to the 60–65g range. Today, SAFER barriers are mandatory on all IndyCar and NASCAR tracks (except Eldora Speedway, a small dirt track). The walls aren’t exactly soft, but they do save lives.
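Those accelerometer numbers can be read as a stopping-distance trade-off: at a constant deceleration a, a car stops in a distance d = v²/2a. Assuming a 20 m/s wall-normal velocity component (an illustrative figure, not one from the article), dropping the peak from 100g to about 60g means the barrier must supply several extra centimetres of controlled “give”:

```python
g = 9.8                                    # m/s^2
v_normal = 20.0                            # assumed wall-normal speed, m/s

def stopping_distance_m(peak_g):
    """Distance needed to stop from v_normal at a constant deceleration."""
    return v_normal ** 2 / (2 * peak_g * g)

concrete = stopping_distance_m(100)        # rigid concrete wall
safer = stopping_distance_m(60)            # SAFER barrier range
extra_give_cm = (safer - concrete) * 100   # additional deformation required
```

Under these assumptions the wall needs to yield roughly 14 cm more than rigid concrete, which is exactly the job done by the moving steel face and crushable foam wedges.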

I would never deny that there was an element of luck involved in Ryan Newman being able to walk away from a spectacular crash with minimal injury. But most of the credit goes to the scientists and engineers – and the drivers – who refuse to accept that death is an inevitable part of racing.

Colliding star may have switched off black hole’s X-ray corona

Over the course of just one year, the bright X-ray corona surrounding a supermassive black hole dipped dramatically in brightness, before steadily recovering its initial luminosity. The event was observed by an international team of astronomers, led by Claudio Ricci at Diego Portales University in Chile, who suggest that the dimming could have been caused by a wayward star being torn apart by tidal forces. The team’s findings could lead to a better understanding of how X-ray coronas form in the first place.

Most normal galaxies are centred on a supermassive black hole that creates an active galactic nucleus (AGN) – an extremely bright source of radiation created as matter accelerates into the black hole. AGNs are known to host bright coronas of X-rays within their inner accretion disks. While the processes by which these structures form are not fully understood, they are thought to involve tangled magnetic fields situated close to the black hole. As these fields interact with hot electrons in the accretion disk, energy is transferred to surrounding photons to create high-energy X-rays.

In March 2018, Ohio State University’s All-Sky Automated Survey for Supernovae programme detected that an AGN 100 million light-years away called 1ES 1927+654 had surged to around 40 times its usual brightness at optical and ultraviolet wavelengths. This prompted Ricci’s team to closely monitor the object’s X-ray corona for any subsequent changes. To do this, they took frequent measurements of the AGN using NASA’s NICER telescope, located aboard the International Space Station.

Dramatic drop

Following the initial brightening event, NICER’s measurements showed that the AGN’s corona plummeted in brightness by a factor of 10,000 in less than two months. In one instance, its brightness dropped by a factor of 100 in just 8 h. Then, after reaching its dimmest point at about 200 days after the first optical observation in March, the corona steadily increased in brightness; almost reaching pre-outburst levels around 300 days after the beginning of the event – signalling that the structure had re-formed.

Until now, astronomers had thought that such dramatic variations could only play out over thousands, or even millions of years – leading Ricci’s team to propose an updated theory. Their calculations suggest that the event could have been triggered by a star that wandered too close to the black hole and was ripped apart by gravity.

Debris from the star would then have knocked away material in the AGN’s inner accretion disk, causing much of it to suddenly fall into the black hole. This would destroy the tangled magnetic field lines, abruptly switching off the power supply that drives the X-ray corona. Over time, material in the disk would be replenished, allowing the X-ray corona to return.

If this theory is correct, it would have important implications for astronomers’ understanding of how AGN coronas form; suggesting that only magnetic fields within their tidal disruption radii could be responsible for creating the structures. Ricci’s team will now continue to monitor 1ES 1927+654 for any further variations in brightness, which will enable them to constrain their predictions further.

The research is described in The Astrophysical Journal Letters.

Acoustic tweezers move objects in the body remotely

Mohamed Ghanem et al

Acoustic tweezers that allow the remote manipulation of internal objects from outside the body could one day be used to expel urinary stones or control ingestible cameras. Researchers at the University of Washington and Moscow State University demonstrated the technique by trapping 3-mm glass beads in vortex-shaped beams of ultrasound. By steering the beams electronically, or by simply moving the ultrasound transducer, they directed the beads along complex 3D paths within a water tank and in the bladders of live pigs.

Optical tweezers have been used by physicists for decades to confine and manipulate microscopic objects. The technique is based on the fact that light refracted by a dielectric particle imparts a sideways kick, pushing the particle back when it deviates from the axis of a laser beam. This force keeps the particle at the centre of the beam where the light intensity is greatest. Acoustic tweezers work differently in that the trapped particle can also scatter the ultrasound beam. The particle therefore becomes trapped when the centre of the beam hosts a region of low intensity.

To create an ultrasound beam with such a configuration, Mohamed Ghanem, at the University of Washington, and colleagues configured a circular, 256-element ultrasound transducer such that the phase of each element varied according to its angular position on the array. This arrangement caused the sound waves to interfere destructively along the axis of the beam and constructively at the edge, generating a hollow, hourglass-shaped intensity pattern at the beam’s focus.
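The phase arrangement can be sketched numerically. In this hypothetical snippet (the paper’s actual drive electronics are not described here), each of 256 ring elements is assigned a phase proportional to its angular position – the standard recipe for a vortex beam – with an integer `charge` setting how many full 2π phase cycles wrap around the ring:

```python
import numpy as np

# Sketch of the drive phases for an acoustic vortex beam.  The geometry
# is hypothetical: n_elements transducer elements evenly spaced on a
# ring, each driven with a phase proportional to its angular position.
# The integer `charge` sets how many full 2*pi phase cycles wrap around
# the ring; its sign sets the helicity of the wavefront.

def vortex_phases(n_elements: int = 256, charge: int = 1) -> np.ndarray:
    theta = 2 * np.pi * np.arange(n_elements) / n_elements  # angular positions
    return (charge * theta) % (2 * np.pi)                   # element phases

phases = vortex_phases()
# On the beam axis, contributions from elements on opposite sides of the
# ring arrive pi out of phase and cancel, leaving the hollow,
# low-intensity core in which a sound-scattering particle can be trapped.
```

With this parametrization, increasing `charge` corresponds to the faster phase variation the researchers describe, and flipping its sign reverses the helicity of the wavefront.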


The researchers used this intensity pattern to capture a glass bead resting on a membrane within a tank of water. Once captured, they moved the bead in the xy plane (perpendicular to the array axis) by either moving the transducer or – at the risk of deforming the potential well and losing the bead – by steering the beam electronically.

As the bead was trapped just beyond the waist of the hourglass, the team could push it away from the transducer by extending the focus of the beam. By arranging the experiment so that the beam was directed upwards into the water tank, they used gravity as the returning force, allowing full control over the bead’s motion in the z direction.

In their in vitro setup, the researchers also experimented with different phase patterns and beam powers. They found that increasing the rate of phase variation around the transducer array produced a larger beam diameter and a deeper potential well for a given power output. Reversing the direction of this phase change shifted the helicity of the beam’s wavefront from clockwise to anti-clockwise or vice versa, arresting any rotational motion that could otherwise lead to the particle escaping from the trap.

To test the technique in vivo, Ghanem and colleagues implanted beads in the urinary bladders of three anaesthetized pigs. Directing the beam upwards into the pigs’ abdomens while the animals lay on their sides, the researchers moved the beads along complex paths several millimetres long. They point out that such distances are shorter than the 1–3 cm typically required to clear kidney stones or kidney stone fragments, but add that they are working on a system capable of steering particles along longer paths.

After the procedure, none of the animals showed any signs of injury caused by the ultrasound beam, though the researchers note that the rate of energy absorption – specifically, the ISPTA (spatial-peak temporal average intensity) – exceeded the safety limits defined for diagnostic ultrasound imaging.

“The ISPTA is conservatively set based on thermal tissue damage for developing embryos,” says Ghanem. “It is quite difficult for therapeutic techniques to come under such a threshold, but our tissue evaluation shows no tissue damage due to the ultrasound exposure.”

Apart from conducting further investigations into the technique’s safety, Ghanem and colleagues intend to test it under a range of conditions on randomly shaped targets with different acoustic properties. They also foresee it being used beyond the medical realm, with possible applications in zero-contamination manufacturing or laboratory environments.

Full details of the research are published in Proceedings of the National Academy of Sciences.

Physics and cars: an evolving journey

From combustion to aerodynamics, a deep understanding of physics has always been at the heart of car design. Today, physics-based technologies are playing an even greater role, as the electric car market expands and vehicles become increasingly autonomous. This video introduces some of the key themes of the August 2020 issue of Physics World – a special issue on physics and cars.

Physics in China attempts to rise from the pandemic

Researchers at the Wuhan University of Technology

A few days after the Chinese New Year in late January, Huiqian Luo – a physicist from the Institute of Physics at the Chinese Academy of Sciences in Beijing – flew to Australia. He was travelling to a neutron source in Sydney where he was set to study a novel iron-based superconductor. Luo planned to arrive early so that he could prepare the experiment at the facility before his students joined.

Luo’s travel occurred just as a previously unknown and highly contagious coronavirus was spreading through the Chinese city of Wuhan. Almost 1700 people had already died and hundreds of new cases were being confirmed every day in already overwhelmed local hospitals. Despite a strict lockdown in Wuhan, the infections worsened so quickly that many countries started to issue travel bans against visitors from mainland China.

Luo arrived in Australia on 30 January – two days before the country imposed a travel ban – meaning he had to complete the two-week-long experiment on his own. When his return flight was cancelled, Luo was forced to scramble to get back home, arranging a detour via Guangzhou, a city near Hong Kong. He finally made it back to Beijing on 15 February, just as the COVID-19 outbreak was nearing its peak in China.

Luo’s story typified the many disruptions that COVID-19 has caused to the lives of physicists around the world – and to experimentalists who work at large facilities in particular. After COVID-19 was declared a global pandemic in mid-March, many facilities in Asia, Europe and the US shut down entirely, with only a handful remaining open, mostly to focus on coronavirus-related research. Many experiments, which often take years to plan, may now never happen.

Many countries are still struggling to battle the pandemic, which has swept the world and claimed hundreds of thousands of lives. China – where the virus first broke out – struggled too. But as in other countries, researchers quickly adapted and found ways to remain productive during the lockdown.

Luo, for example, found time during the lockdown to pause, ponder and draw new inspiration from past research, leaving him relatively optimistic for the future. “I don’t think there will be long-term impacts on the discipline of physics as a whole, because the cycle of physics research is very long,” says Luo. “Scientists around the world are adapting well to the new reality of working remotely.”

Indeed, Luo and his colleagues were able to finish a few manuscripts during the lockdown, including a paper based on the February experiment in Australia. He also gave several online talks watched by more than 500 people – far more than ever saw the in-person talks he had hosted at the institute. And despite a few smaller virus outbreaks since the first wave passed, researchers in China are now starting to get back to the “new normal”.


However, many scientists still face uncertain funding delays as the country recovers, while for other researchers the problems caused by the virus are more tangible. One such scientist is Shaolin Xiong – an astrophysicist at the Institute of High Energy Physics (IHEP), Chinese Academy of Sciences – who has been working around the clock with his team since officials lifted the original lockdown order in Beijing in late April. Xiong is principal investigator of a space science mission called the Gravitational-wave high-energy Electromagnetic Counterpart All-sky Monitor (GECAM), which is scheduled for launch by the end of the year.

The GECAM mission was proposed following the first detection of gravitational waves by the US-based Laser Interferometer Gravitational-Wave Observatory (LIGO). It is designed to catch gamma-ray bursts associated with gravitational-wave events – such as the merger of two neutron stars – and help locate the sources with high precision. GECAM consists of two small satellites, each weighing about 150 kg, which will orbit on opposite sides of the Earth to monitor the entire sky for these extremely energetic, short-lived bursts.

Most teams in China may face tighter budgets for the rest of 2020 and possibly beyond, but the entire physics community believes the difficulties are temporary

Yifang Wang

Xiong says that the temporary closure of electronics factories during the worst phase of the outbreak in China set the team back by a month. “We’ve been working very hard to deliver our commitment to the scientific community by sending GECAM into orbit by the end of 2020,” says Xiong, adding that they plan to deliver the assembled detectors to the satellite platform by mid-July. Once the satellites are in orbit, they will work together with ground-based detectors in the US, Europe and Japan to study gamma-ray bursts associated with gravitational waves. Physicists say that GECAM might spot several associated events each year. “GECAM will represent one of the first new ‘all sky’ devices that will be able to look for associated signals, wherever a gravitational wave event occurs in the sky,” says Nobel laureate Barry Barish, who is a former LIGO director.

Challenges in sight

Compared to GECAM, some projects are facing financial and logistical uncertainties as they navigate longer timelines. Funding delays have already impacted some major physics research infrastructure under construction in China. In May, the National Development and Reform Commission notified IHEP director Yifang Wang that about half of this year’s grant for the planned High Energy Photon Source will not be given out until next year or later. That will leave a hole of 300 million yuan ($42m) in the contract for construction of the facility that is being built on the outskirts of Beijing.

With a total investment of 4.8 billion yuan (nearly $700m), the High Energy Photon Source is one of the most expensive research facilities China has approved and the delay will increase the project’s cost. When it opens in 2025, it will be one of the world’s most powerful multi-purpose machines to probe the inner structures of materials. “It’s totally understandable if the government feels financially strained under such circumstances,” says Wang.

Wang suggests that one possible solution is to borrow “idle money” from other projects and return it later. However, the team must first secure a set of government permissions. “If we can get a green light on this, the photon source project will probably pull through,” says Wang. He adds that most teams in China may face tighter budgets for the rest of 2020 and possibly beyond, but that the “entire physics community believes the difficulties are temporary and will be overcome”.

With scientific evidence, we answered the question of how the world will see us – and how we should see ourselves – when the pandemic is over

Tao Wang

Other concerns face Chinese projects that involve international contributions. The Jiangmen Underground Neutrino Observatory (JUNO) that is being built in southern China will aim to measure the mass hierarchies of neutrinos when it comes online by the end of 2021. Researchers expect to receive a liquid scintillator purification system from Italian partners this September, but that could now be delayed. “If the international travel bans remain in place by that time, our team will not be able to go to China and install those plants,” says Gioacchino Ranucci from the National Institute for Nuclear Physics in Milan, Italy, who is the European co-ordinator of the JUNO consortium. Ranucci notes that its supplier, an Italian company called Polaris, has worked hard to keep up with the schedule even during the lockdown. Indeed, Ranucci and his coworkers are now looking for ways to travel to the JUNO site. “We are considering applying for a special permit [from the government],” he says.

But many international projects involving China will almost certainly experience delays. The Space Variable Objects Monitor (SVOM) – a telescope jointly developed by France and China to detect gamma-ray bursts from the most distant explosions of stars – is now running late by at least five months. In January scientists from the two countries completed the integration and environmental testing of the prototype satellite in Shanghai. They were supposed to meet in Chengdu after that and finish the review for the test. Instead, the team finished the review online during the lockdown.

The launch date has slipped from late 2021 to the first half of 2022, says Bertrand Cordier of the Saclay Nuclear Research Centre in France, who co-ordinated French involvement in SVOM. Now the challenge is building the satellite and the team has encountered delays from suppliers and from their inability to go back to the lab to assemble those parts. “For the first part of the flight model test, I’m afraid our Chinese colleagues may need to do it on their own with remote support from us here in France,” says Cordier.

The new “normal”

On 8 April materials scientist Tao Wang was back on the campus of the Wuhan University of Technology for the first time since the coronavirus hit his city. After 76 days of lockdown, citizens of Wuhan were allowed to leave their homes and Wang made a quick trip to the office. His building was quiet and empty, and his desk and laboratory equipment were covered in dust. Only two pot plants had managed to survive. Wang and his colleagues were not ready to restart experiments, so they used April and May to catch up with university paperwork due at this time every year. They also finished online interviews with nearly 1000 incoming graduate students.

Wang was relieved that Wuhan, the first and one of the hardest-hit epicentres of the pandemic, seemed to be bouncing back quickly. Citywide testing in the second half of May revealed 300 infections – all asymptomatic – among its 10 million residents. “With scientific evidence, we answered the question of how the world will see us – and how we should see ourselves – when the pandemic is over,” says Wang.

On 8 June five of Wang’s graduate students were back in his lab. “It’s been five months … we are all thrilled,” says Hui Wang, a PhD student from Wuhan. “The campus is clean and calm as ever, as if we had never been away.” Hui Wang plans to work at the lab throughout the summer and is eager to carry out experiments on perovskite solar cells to test numerical simulations she did during the lockdown. She also says that the lockdown gave her time to think about new research ideas.

Back in Beijing, Huiqian Luo is still waiting for an official permit for his students to return to the lab – a step that might face further delay because of the June cluster of cases that were linked with a wholesale food market in the south of the city. For now, his group can remotely conduct experiments at neutron sources in Japan, France and the UK. “It’s a good way to engage with students when travel is not free,” says Luo. “But I really look forward to seeing them soon.”

COVID-19 pandemic has made weather forecasts less reliable

The grounding of commercial air flights during the COVID-19 pandemic has made weather forecasts less reliable. As well as affecting short-term forecasts, the reduction in aircraft weather observations has impacted longer-term forecasts and could handicap early warnings of extreme weather, warns Ying Chen, an environmental scientist at Lancaster University in the UK. This could affect predictions of monsoons and hurricanes later this year, Chen says, leading to additional economic damage from the coronavirus pandemic.

Commercial aircraft are a critical component of meteorological observations, collecting data on wind, air pressure, temperature and humidity as they traverse the globe. But the availability of these observations has reduced dramatically due to the global coronavirus lockdown. More than 20 commercial airlines had grounded all flights by the end of March, and around 12 had stopped all international flights. According to the World Meteorological Organization this led to a loss of 50–75% of aircraft weather observations between March and May.

When Chen compared weather forecasts from March, April and May 2020 with those from the same period in the previous three years, he found a significant drop in the accuracy of predictions of surface temperature, relative humidity, air pressure and wind speed. The study, described in Geophysical Research Letters, reveals a fall in the accuracy of short-term, 1–3 day forecasts, particularly over remote areas and regions that usually see heavy flight traffic, and a larger deterioration in longer-term, 4–8 day forecasts.
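The year-on-year comparison behind such a study can be illustrated with a toy forecast-scoring sketch. All numbers here are invented; forecast skill is commonly summarized as a root-mean-square error (RMSE) against observations, then compared between years for the same season:

```python
import math

# Toy illustration of year-on-year forecast verification (all values
# invented): score each forecast against observations with the
# root-mean-square error (RMSE), then compare the scores.

def rmse(forecast, observed):
    return math.sqrt(
        sum((f - o) ** 2 for f, o in zip(forecast, observed)) / len(observed)
    )

obs     = [12.0, 14.5, 13.2, 15.1]  # observed surface temperature, deg C
fc_full = [12.3, 14.1, 13.5, 15.0]  # forecast made with full aircraft data
fc_gap  = [13.1, 15.6, 12.0, 16.4]  # forecast made with aircraft data missing

print(rmse(fc_full, obs))  # smaller error
print(rmse(fc_gap, obs))   # larger error: forecast accuracy has dropped
```

A higher RMSE for the 2020 season, relative to the same months in earlier years, is the kind of signal the study reports.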

Significant drop in accuracy

In February 2020 weather forecasts were more accurate than in 2017, 2018 and 2019, and that improvement could have continued if aircraft observations had carried on as usual. Instead, the accuracy of surface temperature forecasts over Greenland and Siberia reduced by up to 2 °C, and predictions of surface wind speed and pressure deteriorated as forecasts extended. Forecasts in North America, southeast China and Australia have also been greatly affected. However, western Europe’s dense network of around 1500 meteorological stations appears to have lessened the impact in the region, despite a dramatic reduction in normally heavy flight traffic.

Forecasts in the northern hemisphere have been more strongly affected, as air traffic there is usually heavier than in the southern hemisphere.

Chen warns that as the pandemic develops the lack of aircraft observations may become more severe, leading to a further drop in forecast accuracy. The error could become very large for longer-term forecasts, and Chen told Physics World that tropical storms need to be watched closely. “My study shows less reliable pressure forecasts; this could impact the forecast of monsoon and hurricane development,” he explains.

The loss in meteorological data could affect climate change monitoring. “The question is how large the impact on future climate analysis will be,” Chen says. “This I cannot tell, it will depend on how much longer we have to wait to return to normal.”

To tackle the dearth of measurements, Chen says more ground-based and balloon observations need to be introduced, as “these can buffer the impacts to some extent”. He adds that Europe has already started to do this.

Copyright © 2025 by IOP Publishing Ltd and individual contributors