The search for the missing plastic

The problem with marine plastic pollution is that there’s just not enough of it. We obviously don’t want there to be more; it’s just that the quantities of plastic we know to be floating in the world’s oceans fall far short of the amount that scientists say should be there.

Take the world’s best-known concentration of ocean plastic, the Great Pacific Garbage Patch. This is an accumulation of floating debris encircled by the North Pacific Gyre (a system of ocean currents that circulates between North America and Asia). It’s estimated to contain a staggering 80,000 tonnes – or 1.8 trillion pieces – of plastic. Similar gyres hold smaller collections in the South Pacific, the Indian Ocean, and the North and South Atlantic too. All in all, the total mass of known plastic floating on or near the surface of our oceans exceeds 250,000 tonnes (figure 1). But that’s not enough.

Since we started releasing plastic into the sea in the 1950s, the amount emitted annually has risen to something like 10 million tonnes. Most of the material entering the oceans nowadays is less dense than water, which means that if it survives for longer than a few years, we should expect to find tens of millions of tonnes of plastic floating at the surface. That’s orders of magnitude more than we actually see – so where’s it all gone?

In the Great Pacific Garbage Patch many pieces are old, and items made recently are poorly represented. Floating objects typically seem to take several years to make their way from a coast or river where they enter the sea, to one of the subtropical gyres where they linger long-term. Most pieces never finish that journey (or at least, they haven’t yet), but where they end up and how they get there are still unknown.

figure 1

“It’s not missing like dark matter is missing; it’s a different kind of missing,” says Erik van Sebille, an oceanographer and climate scientist at Utrecht University in the Netherlands. “Nobody knows how much is at the seafloor versus how much is on beaches versus how much is ingested by animals versus how much of it is already degraded by bacteria. And that is the puzzle.”

Wherever we do find plastic pollution in the environment, we also see its effect on wildlife, from turtles entangled in discarded fishing nets, to starving sea birds with stomachs full of bottle caps and other indigestible detritus. To quantify how widespread such effects are – an essential step if we want to mitigate them – we need a much better picture of how marine plastic is distributed, and how it moves around in the years and decades after its release. This is also necessary if we’re to have any success in cleaning up the oceans, as collection efforts that focus exclusively on the most prominent 1% of material might have little effect.

Modelling the journey

Many oceanographers think the best way to produce a detailed map of marine plastic is through modelling. The Tracking of Plastic in Our Seas (TOPIOS) project – funded by the European Research Council and led by van Sebille – is an example of such an effort. Currently, the TOPIOS model integrates tides, ocean currents and the wind to predict the paths taken by objects floating at the surface. When plastic sinks down into the ocean, it’s removed from the model and is no longer tracked. Over the course of its five-year programme, which started in April 2017, the TOPIOS researchers expect to broaden the model by building a comprehensive, high-resolution version that captures every relevant process in all three dimensions. When complete, the simulation will take into account not only the obvious oceanographic influences, but more subtle considerations too.

One such factor is the role played by marine microbes. The longer a piece of plastic remains in the ocean, the more living organisms it accumulates. This “biofouling” process gradually increases the overall density of the object-plus-hangers-on, until eventually the whole thing sinks into the depths. But different microbes populate different parts of the ocean, and not all affect floating plastic equally. The difference between an object reaching a subtropical gyre or being deposited on the seabed a thousand kilometres away could come down to which species set up home on its surface.
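The biofouling argument can be sketched with a minimal density calculation. The densities below are assumed round numbers for illustration (seawater roughly 1025 kg/m³, a buoyant polymer like polyethylene roughly 950 kg/m³, biofilm somewhat denser than seawater), not measured values:

```python
SEAWATER = 1025.0  # kg/m3, assumed
PLASTIC = 950.0    # kg/m3, e.g. polyethylene, which floats; assumed
BIOFILM = 1100.0   # kg/m3, denser than seawater; assumed

def combined_density(plastic_vol, biofilm_vol):
    """Bulk density of a plastic object plus its biofilm coating."""
    mass = PLASTIC * plastic_vol + BIOFILM * biofilm_vol
    return mass / (plastic_vol + biofilm_vol)

def critical_biofilm_volume(plastic_vol=1.0):
    """Biofilm volume at which the combined density matches seawater,
    found by setting combined_density equal to SEAWATER and solving."""
    return plastic_vol * (SEAWATER - PLASTIC) / (BIOFILM - SEAWATER)

print(f"object sinks once biofilm volume exceeds "
      f"{critical_biofilm_volume():.1f}x the plastic's own volume")
```

With these assumed numbers the coating must roughly match the object's own volume before the whole thing sinks; a denser biofilm, or a denser polymer, shifts that threshold, which is why the resident microbial community matters.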

Another important parameter that varies from place to place is how bright the sunshine is. Exposed to ultraviolet light for long enough, plastic photo-oxidizes, eventually becoming brittle and breaking into fragments. The resulting microplastic particles can be dispersed throughout the water column, where they enter an entirely different transport regime.

Even with such a detailed model, however, the TOPIOS team will still need some empirical data of its own to feed into the system. The ultimate goal is to power the model with a Bayesian inference algorithm, using one set of observations of plastic to train it and another set to validate it.

Searching from space

Unfortunately, those observations are not easy to come by. For one thing, the sheer size of the oceans makes taking representative measurements difficult and expensive. Another problem is that the distribution of plastic is heterogeneous even on a small scale: samples a kilometre apart can differ in plastic concentration by an order of magnitude, says van Sebille. “It’s a bit like an astronomer pointing their telescope into the sky 100 times and then having to say what the structure of the universe is like.”

For some, satellites provide the obvious solution to this problem, since it’s only from space that you can see the big picture. However, no satellites currently in orbit are dedicated to the task of observing marine plastic – nor are any on the drawing board. The technology and techniques necessary for the remote sensing of plastic are still at the proof-of-concept stage.

One person studying the problem is Paolo Corradi, a systems engineer at the European Space Agency (ESA), based at its European Space Research and Technology Centre in the Netherlands. Together with a large, international collaboration, Corradi envisages how space-based measurements would fit into an integrated marine debris observing system (IMDOS) that also comprises airborne techniques and in situ measurements (Front. Mar. Sci. 10.3389/fmars.2019.00447).

Among the remote-sensing methods under consideration, the least well studied for marine-plastic detection is probably radar. At first glance, this technique has a lot going for it, as radar can be used day and night, whether the skies are clear or cloudy. Satellite-based radar systems can also exploit an effect by which the transceiver’s motion along its orbital path mimics a much larger static antenna. The size of this “synthetic aperture” is equal to the distance the satellite travels in the interval between the emission of a pulse and the detection of its echo. The size of the aperture determines the radar’s spatial resolution, allowing satellite-based systems to achieve resolutions on the order of a metre – fine enough to spot large individual pieces or dense accumulations of debris as long as they are sufficiently reflective. However, it’s not known whether synthetic-aperture radar (SAR) will work for plastic, as the signal from such a low-dielectric-constant material might well be swamped by backscatter from ocean waves. “This is a question that keeps buzzing in my head and doesn’t let me sleep some nights,” says Armando Marino of the University of Stirling, UK, where a forthcoming project will test the method using ground-based radar.
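Following the simplified description above, the size of the synthetic aperture is easy to estimate. The altitude and along-track speed here are assumed, typical low-Earth-orbit values, not figures for any particular mission:

```python
C = 3.0e8              # speed of light (m/s)
ALTITUDE = 700e3       # assumed satellite altitude (m)
ORBITAL_SPEED = 7.5e3  # assumed along-track speed (m/s)

def synthetic_aperture_length(altitude_m, speed_m_s):
    """Distance travelled between emitting a pulse and hearing its echo,
    taking the round trip as straight down and back."""
    round_trip_time = 2 * altitude_m / C
    return speed_m_s * round_trip_time

length = synthetic_aperture_length(ALTITUDE, ORBITAL_SPEED)
print(f"round trip: {2 * ALTITUDE / C * 1e3:.1f} ms, aperture: {length:.0f} m")
```

A few milliseconds of flight time is enough to synthesize an antenna tens of metres long, far larger than anything that could be launched as a physical dish.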

Where satellite radar systems might help the most is in locating plastic aggregations indirectly. A variation of the SAR technique uses interferometry to measure the way water moves at the ocean surface. This method compares the phase of radar signals measured at different points to retrieve information about wave height and velocity, as well as larger scale features like surface currents. As floating objects tend to concentrate at the boundaries between differently moving masses of water, such satellite-based radar could determine where plastic is likely to build up, providing key information for transport models.

And that’s not all that wave measurements might reveal. On SAR images of ocean gyres, Marino and colleagues have noticed areas of unusually smooth sea-surface texture, which they attributed to the presence of surfactants. Like oil poured on troubled waters, surfactants dampen waves at the surface, decreasing the strength of the radar signal. Knowing that such molecules are produced by microbial activity, and that marine plastic is colonized rapidly by microbes, the researchers propose that where such calm patches follow the pattern of ocean-circulation features, they could indicate the presence of high concentrations of microplastic pollution.

Another promising active-sensing technique is lidar. Laser range-finders – which determine distance by measuring the travel time of a laser pulse – are already commonplace on Earth-observation satellites, where they measure clouds and other atmospheric particles, wind speed, and the thickness of the ice caps. No sensors have been launched yet to explicitly study marine plastic, but some researchers have used a lidar system designed for atmospheric aerosols to look for plankton and other suspended particles (2013 Geophys. Res. Lett. 40 4355). Although the repurposed instrument offered only a coarse vertical resolution below the water’s surface, it was sensitive enough to detect particles floating in the top 20 m or so of the ocean. Since the concentration of plastic is thought to drop off exponentially with depth – becoming negligible more than 5 m below the surface – an appropriately optimized instrument could be an effective tool for measuring the quantity of microplastic fragments suspended in the water column.
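To see why an instrument that only probes the top few metres could still capture most of the signal, here is a toy calculation assuming an idealized exponential depth profile; the e-folding depth is an assumption chosen for illustration, not a measured value:

```python
import math

def fraction_above(depth_m, efolding_m):
    """Fraction of an exponentially distributed concentration profile
    C(z) = C0 * exp(-z / efolding_m) that lies shallower than depth_m."""
    return 1.0 - math.exp(-depth_m / efolding_m)

# Assumed e-folding depth of 2 m (illustrative only).
for depth in (1, 5, 20):
    print(f"top {depth:>2} m holds {fraction_above(depth, 2.0):.1%} of the plastic")
```

With a 2 m e-folding depth, more than nine-tenths of the material sits in the top 5 m – which is why a lidar that reaches only 20 m down would miss very little of a surface-concentrated distribution.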

Corradi suggests that one day lidar techniques might even be able to discriminate between plastic and other particles by detecting inelastic processes such as fluorescence or Raman scattering. He cautions, however, that the signal from the latter would be so faint that current instruments would struggle to detect it from orbit.

Shining a light on plastic

In the quest to pinpoint ocean plastic, the techniques that have been most thoroughly investigated so far are passive optical remote-sensing methods that measure reflected sunlight. Like every material, plastic imprints a characteristic spectral profile on light that scatters from its surface. Cutting-edge recycling centres already use this effect, identifying different types of plastic based on how bright they appear at certain frequencies in the infrared part of the spectrum.

An orbital application of a similar technique was demonstrated recently by Lauren Biermann, an Earth-observation scientist at Plymouth Marine Laboratory (PML) in the UK. Together with colleagues at PML and the University of the Aegean in Mytilene, Greece, Biermann used multispectral images captured by ESA’s Sentinel-2 mission – a twin-satellite system that measures vegetation and land use at high resolution across 13 spectral bands from a mean altitude of 786 km (figure 2).

figure 2

The sensors on these satellites don’t have the spectral resolution necessary to discriminate between different types of plastic, as happens in recycling plants – and even if they did, the intervening atmosphere would mask some of the material’s narrow spectral features. Instead, Biermann has developed a way to identify plastic based on the material’s reflectance in three broad bands that are more easily measurable from orbit: one centred at the far-red end of the visible spectrum; one just beyond the visible range in the near-infrared (NIR); and one in the shortwave infrared (SWIR). While water absorbs strongly across all of these bands, plastic has a sharp reflectance peak in the NIR, making floating plastic objects stand out brightly against the dark ocean surface in NIR images.
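The idea of exploiting plastic's NIR peak can be caricatured with a toy "baseline-subtraction" index – an illustrative stand-in, not the actual index used by Biermann's team. Interpolate a straight baseline between the red and SWIR reflectances, then ask how far the measured NIR value rises above it; the band-centre wavelengths are representative Sentinel-2 values, and the reflectances are made up:

```python
def nir_peak_index(r_red, r_nir, r_swir,
                   wl_red=665.0, wl_nir=865.0, wl_swir=1610.0):
    """Toy index: height of the NIR reflectance above a straight line
    interpolated between the red and SWIR bands (wavelengths in nm)."""
    frac = (wl_nir - wl_red) / (wl_swir - wl_red)
    baseline = r_red + (r_swir - r_red) * frac
    return r_nir - baseline

# Hypothetical reflectances: open water is dark in all three bands,
# while a plastic-covered pixel shows a sharp NIR peak.
water = nir_peak_index(0.02, 0.01, 0.005)
plastic = nir_peak_index(0.05, 0.25, 0.04)
print(f"water: {water:+.3f}, plastic-like: {plastic:+.3f}")
```

Water comes out slightly negative (it absorbs ever more strongly into the infrared) while the plastic-like spectrum gives a clear positive spike – though, as the article notes, seaweed and driftwood would spike too, which is why further spectral discrimination is needed.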

Unfortunately, water’s strong absorption in the infrared also means that plastic becomes effectively invisible once it dips even a few millimetres below the waves. This passive technique can therefore only detect debris bobbing at the surface and cannot build up an inventory of plastic pollution suspended in the water column. Another challenge is that seaweed, driftwood and even sea foam all have the same sharp reflectance peak in the NIR that plastic exhibits – and, like floating plastic, they all tend to gather along ocean fronts and on beaches. So to distinguish plastic from other marine debris you have to look to less obvious spectral markers in other parts of the electromagnetic spectrum.

With this in mind, Biermann and colleagues have used satellite images to compile a spectral catalogue of material types. Recent well-documented blooms of sargassum in the Caribbean provided the spectral touchstone for floating vegetation, for example. A flood in Durban, South Africa, meanwhile, washed masses of bottles and other debris into the Indian Ocean, revealing the spectral properties of impure aggregations of plastic. “Because I’m South African, I just reached out to people I knew and asked for photographs of the harbour,” says Biermann. “I wanted to see how certain I was that this was plastic and – oh my God! – it was like a landfill had just been washed clean into the sea.”

When they had built up a library of spectral signatures, the team used it to train a machine-learning algorithm – based on Bayesian inference, like the TOPIOS model – to recognize plastic and other floating materials automatically. When tested on another set of aggregations whose compositions were verified independently, the algorithm identified plastic with an accuracy of 86%.

As promising as these results are, they were achieved with instruments that were not designed with this purpose in mind. Biermann says that the sensors used on the Sentinel-2 satellites are not quite as sensitive as she would like, and they lack bands at wavelengths where additional measurements would be useful. Most important, perhaps, is the fact that the satellites’ cameras have a spatial resolution of 10 m at best, meaning the only plastic items they can possibly observe are those that have coalesced into floating rafts large enough to fill a significant fraction of a 100 m² pixel – specifically, around 30% for plastic bottles or bags, and 50% for discarded fishing nets. (Based on ESA studies, Corradi reports that a 1% pixel coverage is probably the theoretical limit for sensors like those carried by Sentinel-2.) Whatever the exact figure, it’s clear that cameras currently in orbit are unlikely to be much use when it comes to more widely distributed collections of plastic such as those amassed by the subtropical gyres. Biermann hasn’t, for example, been able to use her method on the Great Pacific Garbage Patch because Sentinel-2 captures images of the land and coastal waters only, which underlines how existing satellites are not quite fit for purpose.
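The coverage thresholds quoted above translate directly into minimum raft sizes. This sketch simply turns the percentages from the text into areas for a 10 m pixel:

```python
def min_detectable_area(pixel_size_m, coverage_fraction):
    """Debris area needed in one pixel to reach a given coverage fraction."""
    return coverage_fraction * pixel_size_m ** 2

# Figures quoted in the text: Sentinel-2's 10 m pixels need ~30% coverage
# for bottles/bags, ~50% for nets; ~1% is the suggested theoretical limit.
for label, frac in (("bottles/bags", 0.30),
                    ("fishing nets", 0.50),
                    ("theoretical limit", 0.01)):
    area = min_detectable_area(10, frac)
    print(f"{label:>17}: {area:5.1f} m2 of plastic per pixel")
```

Even at the optimistic 1% limit, a square metre of plastic must sit inside a single pixel – a raft, not the scattered fragments typical of the open gyres.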

Closer to Earth

But if low-Earth orbit is simply too high for such instruments to spot individual plastic objects, why not bring the instruments lower – say, to 400 m? This is the altitude from which the non-profit organization Ocean Cleanup surveyed the Great Pacific Garbage Patch. Extending from the hold of a C-130 cargo aircraft, a hyperspectral imager acquired images in visible and infrared light while a laser was used to build up lidar profiles.

The high spectral resolution of the infrared sensor, along with the light’s shorter path through the atmosphere, meant that the Ocean Cleanup team had more freedom to choose which bands to use than Biermann did with her satellite-based technique. Furthermore, with a survey height of 400 m, each image pixel was around 1 m². Using two narrow spectral features in the SWIR, they found that floating plastic had to fill only about 5% of a pixel to be detectable, meaning the researchers could theoretically spot individual plastic items just a few centimetres across.

Aerial expedition

Being so low down doesn’t make the ocean any more transparent to infrared radiation, however, so the SWIR signal could still only detect debris bobbing at the surface. This is where the lidar came in, but this time not to look for suspended microplastics. Instead, the researchers used it to measure how far large aggregations – such as those that coalesce around drifting fishing nets – extend below the surface. “Ghost nets are like magnets for other debris, so they end up being almost a solid floating mass of debris,” says expedition member Jen Aitken, of Teledyne Optech in the US. “We managed to find some aggregations of plastic where the lidar was able to penetrate two or three metres, enough to make a rough model of what it looked like in 3D.”

Detailed observations like these are unlikely to be made as routinely as satellite measurements, however. Whereas a satellite passes over the same spot every few days, whether or not you decide to download its data, each airborne survey of the remote ocean takes huge effort. Aitken describes Ocean Cleanup flights – in aircraft packed with extra fuel tanks – as being dominated by the time taken to go from California out into the Pacific and back. Indeed, an excursion that might last 10 hours or more may spend only a couple of hours taking data.

Conducting the surveys with drones is an alternative, Aitken suggests, but as these only spend an hour or so in the air, they would still need to use a large and costly ship as a base of operations. That might be feasible given that Ocean Cleanup’s plans already call for ships to pick up recovered plastic pollution. The above-mentioned IMDOS concept also includes them in its mix of techniques. “Satellites will be just part of the monitoring solution,” says Corradi. “You need airborne, you need drones, you need in situ measurements and boat measurements.”

Until such a programme exists, what conclusions can researchers draw from the data currently available? Different models suggest different distributions. At the start of the TOPIOS project, van Sebille and colleagues proposed that the reason we see so little floating plastic is because it spends only a short time at the surface before fragmenting and sinking. In that case, the missing material is distributed throughout the depths and across the ocean floor. Researchers with Ocean Cleanup, on the other hand, say that the age of typical plastic fragments found in the subtropical gyres rules this out (2019 Sci. Rep. 9 12922). If they are right, plastic debris spends years cycling between beaches and coastal waters, only reaching the offshore environment long after its emission.

Whichever model is true, there are implications for mitigation and remediation efforts. If most plastic has already broken up and spread through the water column, we can forget about retrieving it. Instead, we should concentrate on stopping further emissions at their source, while also investigating the impact of plastic pollution on deep-water ecosystems. If plastic survives for years, however, the bulk of current microplastics are from objects released decades ago. In that case, today’s remediation programmes might prevent harm far in the future.

Defect spins slow diamond motion

The force from the spin of defects known as nitrogen vacancy (NV) centres can be used to cool down a macroscopic diamond particle. This “spin cooling” method, which has been demonstrated for the first time by a team of researchers from the Ecole Normale Supérieure in France, is conceptually similar to the laser cooling of atoms, in which the radiation pressure exerted by laser photons dramatically reduces the speed of trapped atoms and thus their temperature. The new technique might therefore be as promising for future applications as laser cooling was, says Gabriel Hétet, who led the study.

In their experiments, the researchers used a handful of nitrogen atoms to cool a trapped diamond crystal made up of billions of carbon atoms. The nitrogen atoms were present in the form of impurities that arise when adjacent carbon atoms in the diamond lattice are replaced by a nitrogen (N) atom and an empty lattice site, or vacancy (V). The resulting NV centre is an atom-like system with well-defined electronic spin properties, and it can be optically polarized so that one spin direction is more prevalent than the other. Another important property is that an NV centre’s spin is shielded from its surroundings for relatively long periods – meaning that it has good coherence.

Spin-mechanical coupling effect

Hétet and colleagues say they succeeded in using the spin of many NV centres to affect the orientation of their trapped diamond crystal. Since the torque applied by the NV centre’s electronic spin is delayed with respect to the motion of the diamond particle – something that is possible because of the long coherence lifetime of the NV centres – this torque can cool down the motion of the particle, and thus reduce its temperature. This spin-mechanical cooling effect is similar to the way in which electrons orbiting around atoms are used to cool the motion of the atoms themselves in some laser-cooling protocols.

Strong coupling between individual quantum systems (such as spins) and mechanical oscillators is currently a hot topic in quantum physics research, Hétet says. However, while the mechanical motion of these oscillators can be “read out” using spin systems and vice versa, no one had ever succeeded in cooling a macroscopic object using NV centres until now.

High-precision torque sensing and other applications

The diamond – which is levitated using electric field gradients in a vacuum – can operate as a “compass”, Hétet explains, and could thus be used in high-precision torque sensing applications. He adds that the group’s method could also make it possible to read out the spin of the NV centres in a non-destructive way under ambient conditions, and to engineer entanglement (or correlation) between several individual spins – a capability relevant to techniques like matter-wave interferometry.

The researchers, who report their work in Nature, say that they will now push the oscillation frequency of the diamond particle beyond the decoherence rate of the NV spins. “This strategy will enable us to reach the so-called resolved sideband regime at which the above-mentioned applications will be possible,” Hétet tells Physics World.

Astronomers discover nearest black hole to Earth – so far

Astronomers have found the nearest black hole to Earth known to date – situated just 1000 light-years away.

The newly-discovered black hole is part of a triple system that includes an inner binary – one star and its unseen companion in a circular orbit – plus another star in a wider orbit. Located in the constellation of Telescopium, this triple system is so close to us that its two stars can be viewed from the southern hemisphere with the naked eye.

The team originally thought that the system, known as HR 6819, only contained the two stars. However, observations from the MPG/ESO 2.2-metre telescope at the European Southern Observatory’s La Silla Observatory revealed evidence for a third, invisible, object: a black hole. The researchers detected the black hole and calculated its mass by studying the orbit of the star in the inner pair. They found that the objects in this inner pair have roughly the same mass, with the star orbiting the black hole every 40 days.

Unlike most other black holes found in our galaxy (a couple of dozen to date), the hidden black hole in HR 6819 is one of the very first stellar-mass black holes found that does not interact violently with its environment and, therefore, appears truly black.

Scientists estimate that, over the Milky Way’s lifetime, many more stars will have collapsed into black holes as they ended their lives. The discovery of this silent, invisible black hole in HR 6819 provides clues about where these other hidden black holes may be. The team say that, now they know what to look for, many more similar black holes could be found in the future.

The findings were published today in Astronomy & Astrophysics.

MRI reveals long-term effects of space travel on the brain

Space travel causes long-lasting changes in the brain’s white matter volume and the shape of the pituitary gland, US researchers report after a longitudinal study of 11 astronauts. The findings may have health implications for the crew of future, long-haul space missions – and may help shine light on related conditions back on Earth.

Following expeditions to the International Space Station, more than half of astronauts reported experiencing changes to their vision – most commonly in the development of farsightedness, but also manifesting as mild headaches and, in severe cases, as a loss in both near and distant acuity. Examination of patients with this “spaceflight associated neuro-ocular syndrome” has revealed a number of structural changes, including swelling of the optic nerve and retinal haemorrhage.

“When you’re in microgravity, fluid such as your venous blood no longer pools toward your lower extremities, but redistributes headward,” explains Larry Kramer, a radiologist from the University of Texas. “That movement of fluid toward your head may be one of the mechanisms causing changes we are observing in the eye and intracranial compartment.”

In their study, Kramer and colleagues conducted a series of brain MRI scans on 11 astronauts – 10 men and one woman – prior to and after missions to the International Space Station. The team found that significant time spent in the orbiting laboratory’s microgravity environment caused the crew’s brain and cerebrospinal fluid volumes to expand – an effect that was found to persist at least a year later.

MR images

“What we’ve identified that no one has really identified before is that there is a significant increase of volume in the brain’s white matter from pre-flight to post-flight,” Kramer says. “White matter expansion is in fact responsible for the largest increase in combined brain and cerebrospinal fluid volumes post-flight.”

The researchers also observed changes in the shape of the pituitary gland in many of the test subjects following their spaceflights – with the gland becoming smaller and flatter.

A further change was identified in the rate at which cerebrospinal fluid flows through a narrow channel in the brain called the cerebral aqueduct, which connects the third and fourth ventricles. A similar phenomenon is seen in a condition called normal pressure hydrocephalus – in which fluid builds up in the ventricles – which causes symptoms including dementia, difficulties walking and bladder control problems. The astronauts, however, are not exhibiting these symptoms – although the team suspect that both conditions may share a similar mechanism of injury.

Regardless, according to Kramer, it is not clear whether the astronauts’ intracranial changes might be reversible. “The best solution is probably prevention,” he concludes.

“Currently we do not understand the clinical significance, if any, of these changes in brain structure, which highlights the need for further research,” comments Donna Roberts, a radiologist from the Medical University of South Carolina, who was not involved in the present study. “Determining the cause of the spaceflight-associated neuro-ocular syndrome will be important in ensuring the health of our astronauts on the International Space Station and for planning longer-duration missions, such as a crewed mission to Mars.”

“Spaceflight-associated neuro-ocular syndrome remains among NASA’s top health risks for long-duration spaceflight,” agrees physiologist Damian Bailey of the University of South Wales. The study “lends additional support to the evolving albeit controversial hypothesis that [its] symptoms are related to chronic intracranial hypertension,” he adds, noting that the findings are not only relevant for astronauts, but may also help inform the clinical management of hospital patients suffering from related symptoms.

With their initial study complete, the researchers are now moving to investigate potential countermeasures that might be used to prevent intracranial changes during spaceflight – such as creating artificial gravity using a large centrifuge, or applying negative pressure to the lower extremities to counteract the headward shift of fluids under microgravity.

“We will be using head-down tilt bedrest studies as an analogue of fluid shift seen in microgravity,” Kramer says, explaining that these conditions can reproduce the same intracranial and orbital changes. “If countermeasures work in the analogue studies they may potentially work in microgravity.”

The research is described in the journal Radiology.

Ultracold atomic comagnetometer joins the search for dark matter

A new atomic comagnetometer that could be used to detect hypothetical dark matter particles called axions has been created by physicists in Spain. The sensor uses two different quantum states of ultracold rubidium atoms to cancel out the effect of ambient magnetic fields, allowing physicists to focus on exotic spin-dependent interactions that may involve axions.

Dark matter is a mysterious substance that appears to account for about 85% of the matter in the universe – the other 15% being normal matter such as atoms and molecules. While myriad astrophysical observations point to the existence of dark matter, physicists have very little understanding of its precise nature.

Some dark matter could comprise hypothetical particles called axions, which were first proposed in the 1970s to solve a problem in quantum chromodynamics. If dark matter axions do exist, they could mediate exotic interactions between quantum-mechanical spins – in analogy to how photons mediate conventional magnetic interactions between spins.

Two detectors

These exotic interactions would be weak, but in principle they could be measured using an atomic comagnetometer, which comprises two different magnetic-field detectors that are in the same place. The device is set so that the effects of ambient magnetic fields in the two detectors can be cancelled out. So, a residual signal in the comagnetometer could be the result of an exotic interaction between atomic spins within the detector itself.
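The common-mode cancellation at the heart of a comagnetometer can be sketched with a toy two-channel model; the gains, field strength and "exotic" signal below are invented numbers for illustration. Each detector responds to the ambient field with a different known gain, so a weighted difference removes the field and leaves any extra coupling behind:

```python
def residual(reading_1, reading_2, gain_1, gain_2):
    """Weighted difference of two co-located detectors.  Scaling channel 2
    by gain_1/gain_2 cancels any signal proportional to the ambient field."""
    return reading_1 - reading_2 * (gain_1 / gain_2)

# Toy numbers: both channels see the same ambient field B through their
# respective gains; only channel 1 also feels a small exotic coupling.
B, exotic = 2.7, 0.001
r1 = 1.0 * B + exotic  # channel 1, gain 1.0
r2 = 0.5 * B           # channel 2, gain 0.5
print(f"residual after cancellation: {residual(r1, r2, 1.0, 0.5):.4f}")
```

However large or noisy the ambient field B, it drops out of the weighted difference exactly, so anything left over – here the tiny "exotic" term – is a candidate signal.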

The new comagnetometer was created at the Institute of Photonic Sciences in Barcelona by Pau Gomez, Ferran Martin, Chiara Mazzinghi, Daniel Benedicto Orenes, Silvana Palacios and Morgan Mitchell. The two different detectors are rubidium-87 atoms that are in two different spin states that respond in different ways to magnetic fields.

Near absolute zero

The atoms are in a gas that is chilled to near absolute zero to create a Bose-Einstein condensate (BEC). In this state the atoms are relatively immune to being jostled about by thermal interactions. This means that for several seconds the spins can respond in a coherent way to spin interactions. The BEC is also very small – just 10 microns in diameter – which boosts its performance as a comagnetometer and means that short-range axion interactions can be probed.

The response of the spins to a magnetic field is measured by firing a polarized beam of light at the BEC and measuring how its polarization is rotated. By comparing measurements on the two different spin states, the effect of ambient magnetic fields can be removed, allowing the team to look for any exotic interactions that are affecting the spins.

Although no evidence of axions has been found by the device so far, the team has shown that the comagnetometer is highly immune to noise from ambient magnetic fields. They say that it could be run at a sensitivity on par with other types of comagnetometers that are currently looking for axions. The device has already been used to measure conventional spin interactions between the ultracold atoms and the team says that other potential applications include spin amplification, which could be used to study quantum fluctuations.

The comagnetometer is described in Physical Review Letters.

Physics in the pandemic: ‘A lack of childcare hugely reduces productivity’

In many ways, I’m fortunate. Most of my work is computational, and I have no teaching duties this semester, so I don’t have to access a lab or navigate the new online learning software for undergraduate students.

Instead, my problem comes from trying to complete any work at all while caring for an 11-month-old baby. A baby who is progressing quickly into toddlerhood and is into everything. Gone are the days when I could read a paper or run some code as he played (relatively) quietly with his toys. In their place are the days of “Oh God don’t touch that”, “Please come away from the socket”, “Don’t trap your fingers in that cupboard”, and “Excuse me, where are you going?” as he heads out the door. He also doesn’t sleep through the night – a torture that is, I imagine, painful for people of all professions, but one that has worn very thin indeed for someone who requires full brain function to read up on the mathematics behind stellar magnetic field generation, or analyse data on the mechanical equilibria locations within a potential field of a Sun-like star.

The combination of sleepless nights and running after an 11-month-old human whirlwind is exhausting in ways that seem obvious now, but that I didn't fully appreciate before. Before lockdown, my husband (who is also a PhD student) and I had childcare two days a week, courtesy of his mother. PhD students are not well paid, and a month of private childcare at four days a week would eat up my entire monthly income. So instead, granny came for two days a week, which gave my husband and me two high-intensity workdays plus three in which we crammed in work around parenting. It worked surprisingly well, thanks to our supportive supervisors, who have been great throughout our parenting journey. Many PhD students won't have that kind of support, and while I am under no illusions that this lack of support is exclusive to academia, it is nevertheless true that parents in other careers benefit from clearer and more universal protections.

A disrupted equilibrium

Two weeks before the UK’s official lockdown on 23 March, having closely followed news of the virus’ spread since mid-January, we had to suggest that my mother-in-law stop coming. This was hard – not just because we would be left without any childcare, but also because we would miss her. She isn’t particularly at risk, but she lives with someone who is older and another who is vulnerable, so we simply couldn’t ask her to continue.

Then the emails from the university started. Students were asked to leave and go home for the remainder of the semester – in fact, for the foreseeable future. St Andrews is a small town with a large elderly population. Even at the best of times, the healthcare system here can't support everyone, and the spread of COVID-19 would be completely disastrous. Fortunately, my own parents are not classed as "at risk", so – hoping to ease the pressure – we moved back to my family home. This brings several advantages: a variety of company, people to lend a hand with babysitting, delicious home-cooked meals like the ones I remember from my childhood, the knowledge that if we do become ill, someone will care for "the wee lad", and the fact that I've not looked at a washing machine in weeks (for which I feel very grateful, and also very guilty).

We’re now entering Week 6 of lockdown, and we’ve settled into a new routine. Despite the disruption my son causes, I have grown used to having him near me, and I worry how both of us will cope with the separation anxiety when this all changes. At the same time, I worry that things won’t change – that this is the “new normal”, and that it will be an incredibly long time before he returns to spending time with granny or playgroups. I worry I’m not doing enough work (because I’m not), yet I also worry that I’m not appreciating the time I have for watching my son grow up.

Stumbling blocks

As for my research, the body that funds my PhD recently informed the university that I and the other students it funds will be paid for an extra six months, to allow for the disruption the COVID-19 pandemic is having on our research. I’m still waiting for confirmation, but the prospect of getting some time back is encouraging. However, I think it’s important for academics to appreciate that being shut out of labs and other research facilities isn’t the only stumbling block. Isolation and loneliness are killers to motivation. Something as simple as not being able to find 50% of your shopping list in the supermarket can wear you down. A lack of childcare hugely reduces productivity and research output. Even those of us who can work entirely remotely can’t be expected to continue as normal, even if on the surface it seems like it ought to be possible.

Some days, when I get a quiet moment to engage brain cells for a purpose other than changing a nappy, singing nursery rhymes or navigating around a floor covered in plastic blocks, I wonder what this experience would have been like pre-child. I always enjoyed working from home; with fewer distractions, no commute time and minimal social contact, I was generally more productive. I sometimes miss the days when I could sit and work, uninterrupted, for four hours at a time and rack up a meaningful 10-hour day. Now I manage five hours on a great day, while on a good day I can do three and a half. Some days I manage a whopping 30 minutes. Of course, I love being a mother, but I think it’s natural in times of crisis and general madness to wonder about other potential timelines.

The main task

So, to parents out there: I see you. You are doing great. Survival is the main task; everything else is a bonus. To the PhD students out there, balancing research and parenting: I feel you. You are doing great. Survival is the main task; everything else is a bonus – and yes, that includes your research. Perhaps this is easier for me to say, since my partner is also working from home, I have a supportive manager and only one child, and I have extra hands to help when needed (I’m typing this from the garden to the sound of birdsong, the rustle of leaves, and my crying son being lulled to sleep by my mother). But being a PhD student and having kids is hard at the best of times, because the system isn’t yet set up for it.

Slowly, academia is becoming more inclusive, enabling people to have a career without putting their family life on hold. In the meantime, I’ve found that this difficult balancing act has helped put things into perspective. I’m a recovered perfectionist, someone who no longer wastes time on unnecessary details and has learnt to appreciate life as well as work. Now, with the pandemic, I’ve been trying to accept that whilst research is very important to me, so is my son and I’ll never get this time back to watch him grow up. So I’m trying to enjoy it as best I can – sleepless nights, joyful giggles, dirty nappies, developmental milestones, temper tantrums and all. I hope others can do the same.

Experiment probes landmines’ effects on cells

Fatal and non-fatal injuries caused by landmines and improvised explosive devices (IEDs) are appallingly common, afflicting as many as 20,000 people per year in current and historical conflict zones. Those who survive such blasts typically suffer blast-mediated amputations caused by the high-pressure shock wave and other explosive effects.

In addition to the immediate damage that it causes to the limb, the shock wave also triggers a long-lasting condition that emerges in the weeks and months after the blast. This condition – heterotopic ossification (HO) – is the anomalous formation of bone in soft tissues such as muscles and ligaments, usually in the parts of the limb closest to the site of the initial injury.

“HO is a major problem,” explains David Sory from Imperial College London. “It poses one of the most significant clinical problems to casualties suffering from blast-mediated amputation, as this pathology has a direct impact on rehabilitation and return to functional mobility.”

Despite HO’s prevalence among blast survivors, the details of how mechanical loading of tissues induces the condition are still not known. This is largely because experiments involving living cells have not yet encompassed the extreme stress regime associated with blast trauma. Sory and colleagues sought to fill that gap. By bringing together expertise from the fields of shock physics and stem-cell biology, the team developed a new experimental setup to study the cellular effects of extreme mechanical loading.

The researchers built three separate loading platforms that can subject in vitro cell and tissue samples to a range of stresses, from those caused by everyday activities at the lower end, to the kind of forces caused by landmines and other explosive devices at the higher end. In their first experiments using the new setup, which took place in the Centre for Blast Injury Studies at Imperial College, they found that gene activity denoting HO has a complex relationship with loading parameters.

The team adapted existing experimental apparatus so that they could accommodate the variety of cellular samples commonly used in in vitro investigations. These samples need to be maintained in a sterile and biocompatible environment for the duration of the experiment, and to be retrieved intact for study afterwards.

They fulfilled these requirements by encapsulating the samples in hermetically sealed pressure chambers formed from polydimethylsiloxane (PDMS). By controlling the ratio of its components, the researchers tuned the polymer’s acoustic impedance to match that of native tissues, enabling pressure waves to pass efficiently from the apparatus to the cells within the sample.
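The role of impedance matching can be made concrete: for a pressure wave at normal incidence on an interface between media with acoustic impedances Z1 and Z2 (where Z is density times sound speed), the pressure transmission coefficient is 2Z2/(Z1 + Z2), so matched impedances transmit without reflection. This sketch uses approximate, illustrative impedance values, not figures from the study:

```python
# Pressure transmission and reflection at an interface between media with
# acoustic impedances Z1 and Z2 (normal incidence, linear acoustics).
def transmission(Z1, Z2):
    """Pressure transmission coefficient from medium 1 into medium 2."""
    return 2 * Z2 / (Z1 + Z2)

def reflection(Z1, Z2):
    """Pressure reflection coefficient at the interface."""
    return (Z2 - Z1) / (Z1 + Z2)

Z_tissue = 1.6e6   # ~soft tissue, kg m^-2 s^-1 (approximate)
Z_pdms   = 1.1e6   # ~untuned PDMS (approximate)

print(transmission(Z_pdms, Z_tissue))    # mismatched: partial reflection
print(transmission(Z_tissue, Z_tissue))  # matched: full transmission
```

Tuning the PDMS mixing ratio moves its impedance toward the tissue value, driving the reflection coefficient toward zero, which is why the pressure wave passes efficiently into the encapsulated sample.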

To span a wide range of strains and strain rates, the researchers integrated the pressure chambers into three separate experimental platforms. The physiological strain-rate regime – representing stresses like those induced by walking, running and jumping – was handled by a modified universal testing machine. A drop-weight rig covered the intermediate regime, and a split-Hopkinson pressure bar provided extreme loading rates like those experienced by tissues near to an exploding landmine or IED.

Split-Hopkinson pressure bar

The team’s first test of their new setup involved mesenchymal stromal cells (MSCs) obtained from rat periosteum, the thin tissue membrane covering the outer surfaces of bones. In some experiments, the researchers suspended these cells in fluid; in others, the cells were held within 3D hydrogel scaffolds. Twenty-four hours after subjecting the samples to a range of strains and strain rates, they measured the expression of a transcription factor called Runx2, which indicates that the cells are primed to differentiate into bone tissue — as happens in HO.

While Runx2 expression was related to the mechanical loading – only intermediate and high strain rates produced an increase, for example – the pattern was not straightforward. Rather, the strain rate, peak strain, strain duration and the nature of the sample (fluid suspension or 3D scaffold) all combined to contribute to priming the cells in a complex way.

In future experiments, Sory and colleagues hope to explore the cellular mechanics of HO more thoroughly using human MSCs in samples that better represent in vivo tissues. Although their ultimate aim is to develop therapies that prevent the onset of HO in blast victims, there could be circumstances where it would be useful to trigger the condition deliberately.

“In fact,” says Sory, “understanding the mechanisms of HO formation could help develop novel therapeutic strategies for bone diseases and to repair bone loss.”

Full details of the research are reported in Physical Biology.

Learning from Black Swan events

In March 2010 Iceland's Eyjafjallajökull volcano began to erupt, and the following month an explosive phase sent a huge ash cloud over northern Europe. More than 100,000 flights were cancelled and 8 million air travellers were stranded across the northern hemisphere from Mexico to China. The International Air Transport Association estimated that airlines collectively lost $150m per day in revenues. With disruption persisting for several weeks, total losses ran into billions, with a short-term catastrophic effect on an industry that was already struggling. The chaos also caused problems for perishable goods sent by air.

Now we have another crisis on our hands. COVID-19 has already led to huge numbers of flights being cancelled. That has decimated international business travel, and the disruption to tourism is likely to be far longer and deeper than in 2010. The impact will be greater still on the transport of goods by road, rail and sea freight. With liquefied natural gas prices already falling on the back of rising supplies and mild winter weather, demand for shipping – which moves 80% of all the world's goods – has tanked.

It is likely COVID-19 will prove to be an unexpected “Black Swan” event that will prick what was an over-inflated stock market, with the economic consequences being more serious than after the 2008 economic crash. Black Swan theory was developed by Lebanese-American statistician Nassim Nicholas Taleb to explain high-profile, hard-to-predict and rare events that are beyond the realm of normal expectations in history, science, finance and technology.

Black Swan events might be rare, but they do happen – so why have we not learnt from previous events? After all, over the past few decades, China has experienced two major public-health crises caused by disease outbreaks. In 2003, severe acute respiratory syndrome (SARS) led to almost 800 deaths within six months. Flights were cancelled, schools closed and large mass-gathering events were banned. SARS’ global macroeconomic impact was estimated at $30–100bn – $3–10m per case – with an estimated 1% drop in China’s gross domestic product (GDP).

A decade later, influenza A virus subtype H7N9 hit China and by December 2013 a total of 148 cases of H7N9 avian influenza were confirmed, resulting in 48 deaths. Unlike SARS there was no social chaos and reliable information was released promptly. The biggest economic impact was in the Chinese poultry industry, which suffered a loss of over 40bn RMB ($5.5bn), but there was little economic impact on global maritime transportation. SARS, in particular, highlighted global connectivity and the threat future pandemics present, but COVID-19 is set to have a much greater impact. World stock markets are crashing – down some 30% at the time of writing.

While current efforts are under way to forecast potential domestic and international virus spread, satellite imagery has already shown the tremendous impact it is having on industrial output in China. Correlating industrial activity with gas emissions is a relatively new practice, but it can show how far levels of economic activity have dropped. Carbon-monoxide output over parts of northern China, for example, has decreased from a peak of 3790 parts per billion (ppb) in late 2019 to an average of 790 ppb in early March. NASA, the European Space Agency and the US Department of Defense all say that they will continue to monitor the impact of COVID-19 on industrial output.

Changes ahead

The lasting economic impact of COVID-19 will be much greater than the immediate loss of trade and tourism resulting from the Icelandic eruption and SARS, but lessons should have been learned from those events about how vulnerable world economies are to a breakdown in the global supply chain. For example, immediate travel restrictions should have been imposed at external and internal borders, with promptly recalled overseas citizens receiving mandatory quarantine on entry. Years should have been spent securing more local sourcing of goods where possible. This event may also finally force the Chinese government to ban the sale of wild animal meat in certain open markets, as this is a known source of trans-species viral infections; otherwise we may store up more of the same for the future, possibly even worse than the current outbreak.

A possible positive unintended consequence of the coronavirus could be the restoration of supply chains to national and regional manufacturing economies. However, since China’s huge population serves as the engine for our global oil-based economy, such a change of focus could prove significant and challenging to implement. Once COVID-19 is effectively defeated by lockdown, and ultimately by a vaccine, it will likely not be “business as usual”, with many changes ahead. The longer-term impacts on oil, trade, share prices and tourism remain hugely unclear.

Laser-cooled YbOH molecules could aid the hunt for new physics

Physicists at Harvard University and Arizona State University in the US have succeeded in laser-cooling YbOH molecules – a crucial first step towards using these molecules to make precision measurements of the electron’s electric dipole moment (eEDM). Their work was augmented by a related effort, carried out by researchers at the California Institute of Technology (Caltech) and Temple University, to enhance the brightness of a beam of cold YbOH. The results appear in separate papers in New Journal of Physics, which (like Physics World) is published by IOP Publishing. 

Here, two members of the PolyEDM collaboration, Harvard PhD student Ben Augenbraun and Caltech PhD student Arian Jadbabaie, describe the project’s goals and achievements.

What was the motivation for the research?

Despite its many successes, the Standard Model of particle physics is an incomplete theory. Among other deficiencies, it explains neither the nature of dark matter nor why our universe is full of matter but not antimatter. These mysteries could be solved by the existence of new particles and interactions beyond the Standard Model (BSM). Despite many ongoing searches, no sign of BSM physics has been observed in the laboratory. 

If studied at high enough precision, “common” particles, such as electrons, may exhibit minute – but measurable – effects from “new” interactions and particles (those that exist in nature but have not yet been seen in the lab). Some of the best constraints on BSM physics have come from measurements of electrons in molecules that are a few degrees above absolute zero. Extending the current state-of-the-art requires probing large numbers (millions) of molecules over very long times (several seconds) – a task that calls for trapped, ultracold molecules at microkelvin temperatures. Our research tackles the challenges associated with both producing these molecules in large quantities and using a laser to reduce their temperature by many orders of magnitude.

What did you do?

We demonstrated laser cooling of YbOH, a heavy, polyatomic molecule with high sensitivity to BSM physics. We also showed that laser light can be used to enhance the chemical reactions that produce YbOH. 

Using a combination of precisely tuned lasers and magnetic fields, we demonstrated laser cooling of a beam of gaseous YbOH in one dimension to temperatures as low as several microkelvin. Because of the complex vibrational motions in YbOH, we needed to apply many lasers in order to prevent loss of molecules to vibrational levels that did not "see" the other laser wavelengths. In this way, we forced the molecules to absorb and re-emit hundreds of photons, using the momentum of those photons to exert large cooling forces on the molecules.
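The size of each photon "kick" can be estimated from the recoil velocity h/(λm). This back-of-envelope sketch assumes a cooling transition near 577 nm and a YbOH mass of about 190 atomic mass units; both are approximate values used only for illustration:

```python
import math  # not strictly needed here, but handy for extending the estimate

# Back-of-envelope photon-recoil estimate for YbOH laser cooling.
h   = 6.62607015e-34      # Planck constant, J s
amu = 1.66053907e-27      # atomic mass unit, kg

wavelength = 577e-9       # m (assumed cooling-transition wavelength)
mass = 190 * amu          # kg (Yb + O + H, approximate)

# Velocity change per absorbed photon: v_recoil = h / (lambda * m)
v_recoil = h / (wavelength * mass)
print(f"recoil velocity per photon: {v_recoil * 1e3:.2f} mm/s")

# A few hundred kicks change the velocity by roughly a metre per second,
# which is the scale of force the 1D cooling exploits.
print(f"velocity change after 300 photons: {300 * v_recoil:.2f} m/s")
```

The recoil per photon is only a few millimetres per second, which is why hundreds of absorption and re-emission cycles are needed for the 1D cooling, and why the planned 3D trapping will require closer to 10,000.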

Before laser cooling, we produce cold YbOH in a beam using chemical reactions between atomic Yb and OH-containing molecules (for example water or methanol). We were able to make these chemical reactions 10 times more efficient by using laser light (at a different wavelength than is used for the cooling) to excite the Yb atoms to a long-lived state. Our corresponding theoretical computations show that this excited Yb state has enough energy to overcome reaction barriers. 

What was the most interesting or important finding?

YbOH is one of the most massive small molecules and has one of the most complex structures among laser-cooled molecules. Showing for the first time that simple techniques could be extended even to YbOH is an important step forward. The laser cooling is extremely rapid, taking a fraction of a millisecond and requiring the absorption and re-emission of only hundreds of laser photons. The efficiency of this cooling is a very promising sign for extending it in future experiments. 

When using excited-state Yb, the chemical reactions producing YbOH are exothermic, with extra energy available that can heat the molecules. However, we perform the excitation and reactions in a cold environment full of helium gas. We found that collisions with the helium effectively remove this extra energy, thermalizing the molecules formed by the exothermic reaction. This is particularly useful for laser cooling, which requires a starting point of cold, slow molecules.

Why is this research significant?

Laser cooling of YbOH demonstrates that increasingly more complex polyatomic molecules can be brought under control, including at the single quantum state level. At ultracold temperatures, their additional complexity then becomes a feature, allowing for novel applications in precision measurement and beyond. 

Furthermore, the laser-based chemical enhancement we demonstrate can be applied to other reactions producing a variety of interesting molecules at low temperatures. Corresponding computations could help identify the optimal reactants for molecular production. Chemical enhancement can be a significant advantage for many experiments requiring cold molecules. 

Together, the combination of chemical enhancement and laser cooling will make possible measurements on trapped molecules that would search for BSM physics at scales far beyond those achievable by current particle collider technology. We expect this to extend the current reach of such experiments by several orders of magnitude. 

What will you do next?

The chemical enhancement could be further increased by optimizing the reactants involved and completely filling the reaction volume with the “catalysing” laser light.  

For laser cooling, the next step is to extend the cooling to 3D, which means cooling and confining the molecules to a small spatial volume – holding them in a trap of laser light. This requires laser slowing and the application of more lasers to extend the cooling force to that of nearly 10,000 photon “kicks”, up from the several hundred demonstrated presently. Once trapped, the molecules can be studied over many seconds, allowing their internal properties to be controlled and tracked with exquisite precision.  

The full results of both studies are reported in New Journal of Physics.

Diamond nanothreads could beat batteries for energy storage, theoretical study suggests

Computational and theoretical studies of diamond-like carbon nanothreads suggest that they could provide an alternative to batteries by storing energy in a strained mechanical system. The team behind the research says that nanothread devices could power electronics and help with the shift towards renewable sources of energy.

The traditional go-to device for energy storage is the electrochemical battery, which predates even the widespread use of electricity. Despite centuries of technological progress and near ubiquitous use, batteries remain prone to the same inefficiencies and hazards as any device based on chemical reactions – sluggish reactions in the cold, the danger of explosion in the heat and the risk of toxic chemical leakages.

Another way of storing energy is to strain a material that then releases energy as it returns to its unstrained state. The strain could be linear, like stretching and then launching a rubber band from your finger; or twisted, like a wind-up clock or toy. Over a decade ago, theoretical work done by researchers at the Massachusetts Institute of Technology suggested that strained cords made from carbon nanotubes could achieve impressive energy-storage densities, on account of the material's unique mechanical properties.
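The appeal of stiff carbon materials for this kind of storage follows from the linear-elastic energy density, u = Eε²/2 per unit volume, divided by the material's density for a gravimetric figure. The numbers below are illustrative assumptions for a stiff, highly strainable carbon fibre, not values taken from the study:

```python
# Elastic energy stored in a stretched fibre (linear-elastic sketch):
# u = (1/2) * E * strain^2 per unit volume; divide by density for J/kg.
# All numbers are illustrative assumptions, not values from the paper.
E = 800e9          # Young's modulus, Pa (stiff carbon material, assumed)
strain = 0.10      # 10% elastic strain before failure (assumed)
rho = 1.5e3        # mass density, kg/m^3 (assumed)

u_vol = 0.5 * E * strain**2        # J/m^3
u_mass = u_vol / rho               # J/kg

li_ion = 0.9e6                     # ~250 Wh/kg lithium-ion cell, J/kg (typical)
print(f"elastic energy density: {u_mass / 1e6:.1f} MJ/kg")
print(f"ratio to Li-ion: {u_mass / li_ion:.1f}x")
```

With these assumed figures, the stored elastic energy per kilogram comes out a few times higher than a typical lithium-ion cell, which is the general scale of advantage the study's "up to three times" claim refers to.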

Outperforms carbon nanotubes

Now, a new theoretical study by a team including Haifei Zhan, Gang Zhang and Yuantong Gu at Queensland University of Technology in Australia and the Agency for Science, Technology and Research (A*STAR) in Singapore reveals there may be circumstances in which bundles of carbon nanothreads outperform carbon nanotube bundles in terms of energy storage.

“We expected a good mechanical energy storage capability [for carbon nanothreads],” says Zhang of their results. “But surprisingly we found its energy density can be up to three times the lithium-ion battery in theory.”

First described in 2015, nanothreads joined a catalogue of carbon nanomaterials that have emerged over the past four decades. Nanothreads are 1D structures with carbon atoms linked by single bonds (like those in diamond) to three other carbon atoms and a hydrogen atom. Where the hydrogen atom is missing, the carbon atom may bond to a fourth carbon atom in an adjacent thread. This bonding contrasts with the hexagonal carbon lattices found in buckyballs, carbon nanotubes and graphene. In these materials, electron orbitals from each carbon atom are shared between just three other carbon atoms.

Space elevator

Since 2015, studies have revealed several ways that carbon atoms can arrange themselves in a 1D carbon nanothread structure. Furthermore, several threads can be bundled together to create a cord with mechanical properties comparable to those of carbon nanotube bundles – which are so strong it was once proposed that they could tether objects in space to Earth to create a "space elevator".

“The excellent mechanical properties of fibres of carbon nanothreads make them appealing alternative building blocks for energy storage devices, to power advanced microdevices and systems,” explains Zhang.

The team used molecular dynamics simulations and theory to compare the maximum energy stored under tension, torsion and bending for a specific type of carbon nanotube bundle and two nanothread bundle conformations – one straight and one helical.

Armchair nanotubes

Carbon nanotubes come in different widths and chiralities – the angle at which the carbon sheet is rolled with respect to its honeycomb lattice. All these variations affect the nanotube's properties. The team made its comparison using (10,10) "armchair" carbon nanotubes, which are rolled perpendicular to the lattice lines so that the open end of the tube is ringed by ten half-hexagons. These are wide for carbon nanotubes, but they are also among the most commonly synthesized.
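The chiral indices (n, m) fix the tube's diameter through the standard relation d = (a/π)√(n² + nm + m²), where a ≈ 0.246 nm is the graphene lattice constant. A quick sketch for the (10,10) tube used in the comparison:

```python
import math

# Diameter of an (n, m) carbon nanotube from its chiral indices:
# d = (a / pi) * sqrt(n^2 + n*m + m^2), with graphene lattice constant a.
A_GRAPHENE = 0.246  # nm

def nanotube_diameter(n, m):
    """Diameter in nanometres of a single-walled (n, m) nanotube."""
    return A_GRAPHENE * math.sqrt(n**2 + n * m + m**2) / math.pi

# Armchair tubes have n == m; the (10,10) tube comes out at roughly 1.4 nm.
print(f"(10,10) diameter: {nanotube_diameter(10, 10):.2f} nm")
```

Armchair tubes (n = m) and zigzag tubes (m = 0) are the two high-symmetry cases; the same formula covers every chirality in between.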

The research identified several drawbacks of using carbon nanotubes for energy storage – and several advantages of using carbon nanothreads. One problem with nanotubes is that they are prone to flatten when twisted or bent, which reduces the energy they can store under strain. In contrast, nanothreads retain their atomic conformation under strain. While individual carbon nanotubes have better properties for energy storage than single nanothreads, this advantage is not maintained when nanotubes are bundled. In contrast, bonds that can form between nanothreads where a hydrogen is missing make them better team players, so that a bundle with 19 nanothreads achieves 2.5 times the energy storage density of a bundle with three threads.

Another drawback of carbon nanotubes is that bundles can only sustain about a third of the torsion of a single nanotube, resulting in less favourable energy storage density in comparison with nanothreads.

“Carbon nanothread bundles could be made into twist-spun yarn-based artificial muscles that respond to electrical, chemical or photonic excitations,” suggests Zhang. “They could be a potential micro-scale power supply for anything from implanted biomedical sensing systems monitoring heart and brain functions, to small robotics and electronics.”

In pursuit of experimental validation of their results, the researchers are now working with scientists from Penn State University, where the world's first Center for Nanothread Chemistry has been established with funding from the US National Science Foundation. The collaborative team plans to focus its efforts over the next few years on building microscale power-supply systems based on the results of this research.

The research is described in Nature Communications.

Copyright © 2025 by IOP Publishing Ltd and individual contributors