Climate science and astronomy have much in common, and this has inspired the astrophysicist Travis Rector to call on astronomers to educate themselves, their students and the wider public about climate change. In this episode of the Physics World Weekly podcast, Rector explains why astronomers should listen to the concerns of the public when engaging about the science of global warming. And, he says the positive outlook of some of his students at the University of Alaska Anchorage makes him believe that a climate solution is possible.
Rector says that some astronomers are reluctant to talk to the public about climate change because they have not mastered the intricacies of the science. Indeed, one aspect of atmospheric physics that has challenged scientists is the role that clouds play in global warming. My second guest this week is the science journalist Michael Allen, who has written a feature article for Physics World called “Cloudy with a chance of warming: how physicists are studying the dynamical impact of clouds on climate change”. He talks about climate feedback mechanisms that involve clouds and how aerosols affect clouds and the climate.
A new algorithmic technique could enhance the output of fusion reactors by smoothing out the laser pulses used to compress hydrogen to fusion densities. Developed by physicists at the University of Bordeaux, France, a simulated version of the new technique has already been applied to conditions at the US National Ignition Facility (NIF) and could also prove useful at other laser fusion experiments.
A major challenge in fusion energy is keeping the fuel – a mixture of the hydrogen isotopes deuterium and tritium – hot and dense enough for fusion reactions to occur. The two main approaches to doing this confine the fuel with strong magnetic fields or intense laser light and are known respectively as magnetic confinement fusion and inertial confinement fusion (ICF). In either case, when the pressure and temperature become high enough, the hydrogen nuclei fuse into helium. Since the energy released in this fusion reaction is, in principle, greater than the energy needed to get it going, fusion has long been viewed as a promising future energy source.
In 2022, scientists at NIF became the first to demonstrate “energy gain” from fusion, meaning that the fusion reactions produced more energy than was delivered to the fuel target via the facility’s system of super-intense lasers. The method they used was somewhat indirect. Instead of compressing the fuel itself, NIF’s lasers heated a gold container known as a hohlraum with the fuel capsule inside. The appeal of this so-called indirect-drive ICF is that it is less sensitive to inhomogeneities in the laser’s illumination. These inhomogeneities arise from interactions between the laser beams and the highly compressed plasma produced during fusion, and they are hard to get rid of.
In principle, though, direct-drive ICF is a stronger candidate for a fusion reactor, explains Duncan Barlow, a postdoctoral researcher at Bordeaux who led the latest research effort. This is because it couples more energy into the target, meaning it can deliver more fusion energy per unit of laser energy.
Reducing computing cost and saving time
To work out which laser configurations are the most homogeneous, researchers typically use iterative radiation-hydrodynamic simulations. These are time-consuming and computationally expensive (requiring around 1 million CPU hours per evaluation). “This expense means that only a few evaluations were run, and each step was best performed by an expert who could use her or his experience and the data obtained to pick the next configurations of beams to test the illumination uniformity,” Barlow says.
The new approach, he explains, relies on approximating some of the laser beam-plasma interactions by considering isotropic plasma profiles. This means that each iteration uses fewer than 1000 CPU hours, so thousands of iterations can be run for the cost of a single simulation using the old method. Barlow and his colleagues also created an automated method to quantify improvements and select the most promising next step for the process.
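In outline, the workflow is an iterative optimization loop: run the cheap, approximate illumination calculation, score the uniformity of the drive on the capsule, and keep whichever candidate beam configuration improves that score. The sketch below illustrates the idea in Python; the one-dimensional "beam footprint" model, the RMS uniformity metric and the greedy random search are illustrative stand-ins, not the Bordeaux group's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

def illumination_rms(pointings):
    """Stand-in for the cheap, approximate illumination model (the step
    costing ~1000 CPU hours in the real workflow). Returns the RMS
    non-uniformity of the drive over the capsule surface."""
    theta = np.linspace(0, np.pi, 181)
    drive = np.zeros_like(theta)
    for centre in pointings:
        drive += np.exp(-((theta - centre) / 0.3) ** 2)  # one beam's footprint
    return float(np.std(drive) / np.mean(drive))

# Greedy loop: perturb the current best beam pointings and keep any
# configuration that improves the uniformity metric.
pointings = rng.uniform(0, np.pi, size=16)   # initial pointing angles
best = illumination_rms(pointings)
for _ in range(5000):
    candidate = np.clip(pointings + rng.normal(0, 0.05, size=pointings.size), 0, np.pi)
    score = illumination_rms(candidate)
    if score < best:
        pointings, best = candidate, score

print(f"optimized RMS non-uniformity: {best:.3%}")
```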
The researchers demonstrated their technique using simulations of a spherical target at NIF. These simulations showed that the optimized configuration should produce convergent shocks in the fuel target, resulting in pressures three times higher (and densities almost two times higher) than in the original experiment. Although their simulations focused on NIF, the researchers say the technique could also be applied to other pellet geometries and other facilities.
Developing tools
The study builds on work by Barlow’s supervisor, Arnaud Colaïtis, who developed a tool for simulating laser-plasma interactions that incorporates a phenomenon known as cross-beam energy transfer (CBET) that contributes to inhomogeneities. Even with this and other such tools, however, Barlow explains that fusion scientists have long struggled to define optimal illuminations when the system deviates from a simple mathematical description. “My supervisor recognized the need for a new solution, but it took us a year of further development to identify such a methodology,” he says. “Initially, we were hoping to apply neural networks – similar to image recognition – to speed up the technique, but we realized that this required prohibitively large training data.”
As well as working on this project, Barlow is also involved in a French project called Taranis that aims to use ICF to produce energy – an approach known as inertial fusion energy (IFE). “I am applying the methodology from my ICF work in a new way to ensure the robust, uniform drive of targets with the aim of creating a new IFE facility and eventually a power plant,” he tells Physics World.
A broader physics application, he adds, would be to incorporate more laser-plasma instabilities beyond CBET that are non-linear and normally too expensive to model accurately with radiation-hydrodynamic simulations. Some examples include stimulated Brillouin scattering, stimulated Raman scattering and two-plasmon decay. “The method presented in our work, which is detailed in Physical Review Letters, is a great accelerated scheme for better evaluating these laser-plasma instabilities, their impact for illumination configurations and post-shot analysis,” he says.
All eyes were on Donald Trump’s election as US president earlier this month, a win that overshadowed two big appointments in physics. First, the particle physicist Jun Cao took over as director of China’s Institute of High Energy Physics (IHEP) in October, succeeding Yifang Wang, who had held the job since 2011.
Over the last decade, IHEP has emerged as an important force in particle physics, with plans to build a huge 100 km-circumference machine called the Circular Electron Positron Collider (CEPC). Acting as a “Higgs factory”, such a machine would be hundreds of times bigger and pricier than any project IHEP has ever attempted.
But China is serious about its intentions, aiming to present a full CEPC proposal to the Chinese government next year, with construction starting two years later and the facility opening in 2035. If the CEPC opens on schedule, China could leapfrog the rest of the particle-physics community.
China’s intentions will be one pressing issue facing the British particle physicist Mark Thomson, 58, who was named as the 17th director-general at CERN earlier this month. He will take over in January 2026 from current CERN boss Fabiola Gianotti, who will finish her second term next year. Thomson will have a decisive hand in the question of what – and where – the next particle-physics facility should be.
CERN is currently backing the 91 km-circumference Future Circular Collider (FCC), several times bigger than the Large Hadron Collider (LHC). An electron–positron collider designed to study the Higgs boson in unprecedented detail, it could later be upgraded to a hadron collider, dubbed FCC-hh. But with Germany already objecting to the FCC’s steep £12bn price tag, Thomson will have a tough job eking out extra cash for it from CERN member states. He’ll also be busy ensuring the upgraded LHC, known as the High-Luminosity LHC, is ready as planned by 2030.
I wouldn’t dare tell Thomson how to do his job, but Physics World did once ask previous CERN directors-general what skills are needed as lab boss. Crucial, they said, were people management, delegation, communication and the ability to speak multiple languages. Physical stamina was deemed a vital attribute too, with extensive international travel and late-night working required.
One former CERN director-general even cited the need to “eat two lunches the same day to satisfy important visitors”. Squeezing double lunches in will probably be the least of Thomson’s worries.
Fortunately, I bumped into Thomson at an Institute of Physics meeting in London earlier this week, where he agreed to do an interview with Physics World. So you can be sure we’ll get Thomson to put his aims and priorities as the next CERN boss on record. Stay tuned…
A new imaging technique that takes standard two-dimensional (2D) radio images and reconstructs them as three-dimensional (3D) ones could tell us more about structures such as the jet-like features streaming out of galactic black holes. According to the technique’s developers, it could even call into question physical models of how radio galaxies formed in the first place.
“We will now be able to obtain information about the 3D structures in polarized radio sources whereas currently we only see their 2D structures as they appear in the plane of the sky,” explains Lawrence Rudnick, an observational astrophysicist at the University of Minnesota, US, who led the study. “The analysis technique we have developed can be performed not only on the many new maps to be made with powerful telescopes such as the Square Kilometre Array and its precursors, but also from decades of polarized maps in the literature.”
Analysis of data from the MeerKAT radio telescope array
In their new work, Rudnick and colleagues in Australia, Mexico, the UK and the US studied polarized light data from the MeerKAT radio telescope array at the South African Radio Astronomy Observatory. They exploited an effect called Faraday rotation, which rotates the angle of polarized radiation as it travels through a magnetized ionized region. By measuring the amount of rotation for each pixel in an image, they can determine how much material that radiation passed through.
In the simplest case of a uniform medium, says Rudnick, this information tells us the relative distance between us and the emitting region for that pixel. “This allows us to reconstruct the 3D structure of the radiating plasma,” he explains.
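The underlying relation is the standard Faraday-rotation law: the observed polarization angle χ varies with wavelength λ as χ(λ) = χ₀ + RM λ², where the rotation measure RM ≈ 0.81 ∫ nₑ B∥ dl rad m⁻² for an electron density nₑ in cm⁻³, a line-of-sight magnetic field B∥ in microgauss and a path length dl in parsecs. Fitting χ against λ² for each pixel therefore yields RM, which serves as the proxy for how much magnetized plasma the radiation has traversed.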
An indication of the position of the emitting region
The new study builds on a previous effort that focused on a specific cluster of galaxies for which the researchers already had cubes of data representing its 2D appearance in the sky, plus a third axis given by the amount of Faraday rotation. In the latest work, they decided to look at this data in a new way, viewing the cubes from different angles.
“We realized that the third axis was actually giving us an indication of the position of the emitting region,” Rudnick says. “We therefore extended the technique to situations where we didn’t have cubes to start with, but could re-create them from a pair of 2D images.”
There is a problem, however, in that polarization angle can also rotate as the radiation travels through regions of space that are anything but uniform, including our own Milky Way galaxy and other intervening media. “In that case, the amount of rotation doesn’t tell us anything about the actual 3D structure of the emitting source,” Rudnick adds. “Separating out this information from the rest of the data is perhaps the most difficult aspect of our work.”
Shapes of structures are very different in 3D
Using this technique, Rudnick and colleagues were able to determine the line-of-sight orientation of active galactic nuclei (AGN) jets as they are expelled from a massive black hole at the centre of the Fornax A galaxy. They were also able to observe how the materials in these jets interact with “cosmic winds” (essentially larger-scale versions of the magnetic solar wind streaming from our own Sun) and other space weather, and to analyse the structures of magnetic fields inside the jets from the M87 galaxy’s black hole.
The team found that the shapes of structures as inferred from 2D radio images were sometimes very different from those that appear in the 3D reconstructions. Rudnick notes that some of the mental “pictures” we have in our heads of the 3D structure of radio sources will likely turn out to be wrong after they are re-analysed using the new method. One good example in this study was a radio source that, in 2D, looks like a tangled string of filaments filling a large volume. When viewed in 3D, it turns out that these filamentary structures are in fact confined to a band on the surface of the source. “This could change the physical models of how radio galaxies are formed, basically how the jets from the black holes in their centres interact with the surrounding medium,” Rudnick tells Physics World.
A plan to use millions of smartphones to map out real-time variations in Earth’s ionosphere has been tested by researchers in the US. Developed by Brian Williams and colleagues at Google Research in California, the system could improve the accuracy of global navigation satellite systems (GNSSs) such as GPS and provide new insights into the ionosphere.
A GNSS uses a network of satellites to broadcast radio signals to ground-based receivers. Each receiver calculates its position based on the arrival times of signals from several satellites. These signals first pass through Earth’s ionosphere, which is a layer of weakly-ionized plasma about 50–1500 km above Earth’s surface. As a GNSS signal travels through the ionosphere, it interacts with free electrons and this slows down the signals slightly – an effect that depends on the frequency of the signal.
The problem is that the free electron density is not constant in either time or space. It can spike dramatically during solar storms and it can also be affected by geographical factors such as distance from the equator. The upshot is that variations in free electron density can lead to significant location errors if not accounted for properly.
To deal with this problem, navigation satellites send out two separate signals at different frequencies. These are received by dedicated monitoring stations on Earth’s surface, and the difference between the arrival times of the two frequencies is used to create real-time maps of the free electron density of the ionosphere. Such maps can then be used to correct location errors. However, these monitoring stations are expensive to install and tend to be concentrated in wealthier regions of the world. This results in large gaps in ionosphere maps.
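The correction works because the ionospheric delay is dispersive: to first order, the extra group delay at frequency f is 40.3·TEC/f² metres, where TEC is the total electron content along the signal path. Differencing measurements made at two frequencies therefore isolates TEC. The snippet below shows this textbook dual-frequency combination; the frequencies are the GPS L1 and L5 values, and the pseudorange numbers are made up purely for illustration.

```python
# Textbook dual-frequency ionospheric combination.
# Ionospheric group delay at frequency f:  d_iono ≈ 40.3 * TEC / f**2  (metres),
# with TEC (total electron content) in electrons per square metre.

F_L1 = 1575.42e6  # GPS L1 frequency (Hz)
F_L5 = 1176.45e6  # GPS L5 frequency (Hz)

def slant_tec(p1_m, p2_m, f1=F_L1, f2=F_L5):
    """Estimate slant TEC (electrons/m^2) from pseudoranges (metres)
    measured simultaneously on two frequencies f1 > f2."""
    return (p2_m - p1_m) * (f1**2 * f2**2) / (40.3 * (f1**2 - f2**2))

# Made-up example: the lower-frequency signal is delayed a few metres more.
p_l1, p_l5 = 22_000_000.0, 22_000_004.2
tec = slant_tec(p_l1, p_l5)
print(f"slant TEC ≈ {tec / 1e16:.1f} TECU")  # 1 TECU = 1e16 electrons/m^2
```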
Dual-frequency sensors
In their study, Williams’ team took advantage of the fact that many modern mobile phones have sensors that detect GNSS signals at two different frequencies. “Instead of thinking of the ionosphere as interfering with GPS positioning, we can flip this on its head and think of the GPS receiver as an instrument to measure the ionosphere,” Williams explains. “By combining the sensor measurements from millions of phones, we create a detailed view of the ionosphere that wouldn’t otherwise be possible.”
This is not a simple task, however, because individual smartphones are not designed for mapping the ionosphere. Their antennas are much less efficient than those of dedicated monitoring stations and the signals that smartphones receive are often distorted by surrounding buildings – and even users’ bodies. Also, these measurements are affected by the design of the phone and its GNSS hardware.
The big benefit of using smartphones is that their ownership is ubiquitous across the globe – including in developing regions such as India, Africa, and Southeast Asia. “In these parts of the world, there are still very few dedicated scientific monitoring stations that are being used by scientists to generate ionosphere maps,” says Williams. “Phone measurements provide a view of parts of the ionosphere that isn’t otherwise possible.”
The team’s proposal involves creating a worldwide network comprising millions of smartphones that will each carry out error correction measurements using the dual-frequency signals from GNSS satellites. Although each individual measurement will be relatively poor, the large number of measurements can be used to improve the overall accuracy of the map.
Simultaneous calibration
“By combining measurements from many phones, we can simultaneously calibrate the individual sensors and produce a map of ionosphere conditions, leading to improved location accuracy, and a better understanding of this important part of the Earth’s atmosphere,” Williams explains.
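One way to see how a joint fit can separate hardware bias from ionospheric signal is to model each phone’s measurement as the true electron content in its map cell plus a per-device offset, then solve for both together in a least-squares sense. The toy sketch below does exactly that with synthetic numbers; it is only a conceptual illustration, not Google’s actual estimation pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_phones, n_obs = 5, 50, 2000

true_tec = rng.uniform(10, 60, n_cells)    # TEC (in TECU) for each map cell
true_bias = rng.normal(0, 5, n_phones)     # per-phone hardware bias (TECU)

cell = rng.integers(0, n_cells, n_obs)     # which cell each observation sees
phone = rng.integers(0, n_phones, n_obs)   # which phone made it
obs = true_tec[cell] + true_bias[phone] + rng.normal(0, 3, n_obs)  # noisy data

# Design matrix: one unknown per cell TEC and one per phone bias.
A = np.zeros((n_obs, n_cells + n_phones))
A[np.arange(n_obs), cell] = 1.0
A[np.arange(n_obs), n_cells + phone] = 1.0

# Extra row pinning the mean phone bias to zero, which removes the
# degeneracy between a global TEC offset and a global bias offset.
constraint = np.zeros((1, n_cells + n_phones))
constraint[0, n_cells:] = 1.0

solution, *_ = np.linalg.lstsq(np.vstack([A, constraint]),
                               np.concatenate([obs, [0.0]]), rcond=None)
print("recovered TEC:", np.round(solution[:n_cells], 1))
print("true TEC:     ", np.round(true_tec, 1))
```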
In their initial tests of the system, the researchers aggregated ionosphere measurements from millions of Android devices around the world. Crucially, there was no need to identify individual devices contributing to the study – ensuring the privacy and security of users.
Williams’ team was able to map a diverse array of variations in Earth’s ionosphere. These included plasma bubbles over India and South America; the effects of a small solar storm over North America; and a depletion in free electron density over Europe. These observations doubled the coverage area of existing maps and boosted resolution compared with maps made using data from monitoring stations.
If such a smartphone-based network is rolled out, ionosphere-related location errors could be reduced by several metres – which would be a significant advantage to smartphone users.
“For example, devices could differentiate between a highway and a parallel rugged frontage road,” Williams predicts. “This could ensure that dispatchers send the appropriate first responders to the correct place and provide help more quickly.”
Waveguide-based structures can solve partial differential equations by mimicking elements in standard electronic circuits. This novel approach, developed by researchers at Newcastle University in the UK, could boost efforts to use analogue computers to investigate complex mathematical problems.
Many physical phenomena – including heat transfer, fluid flow and electromagnetic wave propagation, to name just three – can be described using partial differential equations (PDEs). Apart from a few simple cases, these equations are hard to solve analytically, and sometimes even impossible. Mathematicians have developed numerical techniques such as finite-difference or finite-element methods to solve more complex PDEs. However, these numerical techniques require a lot of conventional computing power, even after using methods such as mesh refinement and parallelization to reduce calculation time.
Alternatives to numerical computing
To address this, researchers have been investigating alternatives to numerical computing. One possibility is electromagnetic (EM)-based analogue computing, where calculations are performed by controlling the propagation of EM signals through a materials-based processor. These processors are typically made up of optical elements such as Bragg gratings, diffractive networks and interferometers as well as optical metamaterials, and the systems that use them are termed “metatronic” by analogy with more familiar electronic circuit elements.
The advantage of such systems is that because they use EM waves, computing can take place literally at light speeds within the processors. Systems of this type have previously been used to solve ordinary differential equations, and to perform operations such as integration, differentiation and matrix multiplication.
Some mathematical operations can also be computed with electronic systems – for example, with grid-like arrays of “lumped” circuit elements (that is, components such as resistors, inductors and capacitors that produce a predictable output from a given input). Importantly, these grids can emulate the mesh elements that feature in the finite-element method of solving various types of PDEs numerically.
Recently, researchers demonstrated that this emulation principle also applies to photonic computing systems. They did this using the splitting and superposition of EM signals within an engineered network of dielectric waveguide junctions known as photonic Kirchhoff nodes. At these nodes, a combination of photonics structures, such as ring resonators and X-junctions, can similarly imitate lumped circuit elements.
Interconnected metatronic elements
In the latest work, Victor Pacheco-Peña of Newcastle’s School of Mathematics, Statistics and Physics and colleagues showed that such waveguide-based structures can be used to calculate solutions to PDEs that take the form of the Helmholtz equation ∇²f(x,y) + k²f(x,y) = 0. This equation is used to model many physical processes, including the propagation, scattering and diffraction of light and sound as well as the interactions of light and sound with resonators.
Unlike in previous setups, however, Pacheco-Peña’s team exploited a grid-like network of parallel plate waveguides filled with dielectric materials. This structure behaves like a network of interconnected T-circuits, or metatronic elements, with the waveguide junctions acting as sampling points for the PDE solution, Pacheco-Peña explains. “By carefully manipulating the impedances of the metatronic circuits connecting these points, we can fully control the parameters of the PDE to be solved,” he says.
The researchers used this structure to solve various boundary value problems by inputting signals to the network edges. Such problems frequently crop up in situations where information from the edges of a structure is used to infer details of physical processes in other regions in it. For example, by measuring the electric potential at the edge of a semiconductor, one can calculate the distribution of electric potential near its centre.
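For comparison, the conventional numerical route that such analogue networks emulate is a mesh of discrete equations: replacing ∇² with finite differences turns the Helmholtz equation plus boundary data into a linear system, with one unknown per grid node playing the role of a waveguide junction. The sketch below solves a small Dirichlet boundary value problem this way; it is a generic finite-difference example, not a simulation of the Newcastle device.

```python
import numpy as np

def solve_helmholtz(n=41, k=2.0, boundary=lambda x, y: np.sin(np.pi * x)):
    """Solve ∇²f + k²f = 0 on the unit square, with f fixed to `boundary`
    on the edges, using a second-order finite-difference grid."""
    h = 1.0 / (n - 1)
    xs = np.linspace(0.0, 1.0, n)
    N = n * n
    A = np.zeros((N, N))
    b = np.zeros(N)
    idx = lambda i, j: i * n + j

    for i in range(n):
        for j in range(n):
            p = idx(i, j)
            if i in (0, n - 1) or j in (0, n - 1):
                A[p, p] = 1.0                      # boundary node: f = g(x, y)
                b[p] = boundary(xs[j], xs[i])
            else:
                A[p, p] = -4.0 / h**2 + k**2       # interior node of the mesh
                for q in (idx(i - 1, j), idx(i + 1, j),
                          idx(i, j - 1), idx(i, j + 1)):
                    A[p, q] = 1.0 / h**2

    return xs, np.linalg.solve(A, b).reshape(n, n)

xs, f = solve_helmholtz()
print("field at the centre of the domain:", round(float(f[len(xs) // 2, len(xs) // 2]), 4))
```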
Pacheco-Peña says the new technique can be applied to “open” boundary problems, such as calculating how light focuses and scatters, as well as “closed” ones, like sound waves reflecting within a room. However, he acknowledges that the method is not yet perfect because some undesired reflections at the boundary of the waveguide network distort the calculated PDE solution. “We have identified the origin of these reflections and proposed a method to reduce them,” he says.
In this work, which is detailed in Advanced Photonics Nexus, the researchers numerically simulated the PDE solving scheme at microwave frequencies. In the next stages of their work, they aim to extend their technique to higher frequency ranges. “Previous works have demonstrated metatronic elements working in these frequency ranges, so we believe this should be possible,” Pacheco-Peña tells Physics World. “This might also allow the waveguide-based structure to be integrated with silicon photonics or plasmonic devices.”
UK physics “deep tech” could be missing out on almost £1bn of investment each year. That is according to a new report by the Institute of Physics (IOP), which publishes Physics World. It finds that venture capital investors often struggle to invest in high-innovation physics industries given the lack of the “one-size-fits-all” commercialisation pathway seen in other areas such as biotech.
According to the report, physics-based businesses add about £230bn to the UK economy each year and employ more than 2.7 million people in full-time roles. The UK also has one of the largest venture-capital markets in Europe and the highest rates of spin-out activity, especially in biotech.
Despite this, however, venture capital investment in “deep tech” physics – start-ups whose business model is based on high-tech innovation or significant scientific advances – remains low, attracting £7.4bn or 30% of UK science venture-capital investment.
To find out the reasons for this discrepancy, the IOP interviewed science-led businesses as well as 32 leading venture capital investors. Based on these discussions, it found that many investors are unsure about certain aspects of physics-based start-ups, which often do not follow the familiar lifecycle of development seen in other areas such as biotech.
Physics businesses are not, for example, always able to transition from being tech focussed to being product-led in the early stages of development, which prevents venture capitalists from committing large amounts of money. Another issue is that venture capitalists are less familiar with the technologies, timescales and “returns profile” of physics deep tech.
The IOP report estimates that if the full investment potential of physics deep tech is unlocked then it could result in an extra £4.5bn of additional funding over the next five years. In a foreword to the report, Hermann Hauser, the tech entrepreneur and founder of Acorn Computers, highlights “uncovered issues within the system that are holding back UK venture capital investment” into physics-based tech. “Physics deep-tech businesses generate huge value and have unique characteristics – so our national approach to finance for these businesses must be articulated in ways that recognise their needs,” writes Hauser.
At the same time, investors see a lot of opportunity in subjects such as quantum and semiconductor physics, as well as artificial intelligence and nuclear fusion. Jo Slota-Newson, a managing partner at Almanac Ventures who co-wrote the report, says there is “huge potential” for physics deep-tech businesses but “venture capital funds are being held back from raising and deploying capital to support this crucial sector”.
The IOP is now calling for a coordinated effort from government, investors as well as the business and science communities to develop “investment pathways” to address the issues raised in the report. For example, the UK government should ensure grant and debt-financing options are available to support physics tech at “all stages of development”.
Slota-Newson, who has a background in science including a PhD in chemistry from the University of Cambridge, says that such moves should be “at the heart” of the UK government’s plans for growth. “Investors, innovators and government need to work together to deliver an environment where at every stage in their development there are opportunities for our deep tech entrepreneurs to access funding and support,” adds Slota-Newson. “If we achieve that we can build the science-driven, innovative economy, which will provide a sustainable future of growth, security and prosperity.”
The report also says that the IOP should play a role by continuing to highlight successful physics deep-tech businesses and to help them attract investment from both the UK and international venture-capital firms. Indeed, Tom Grinyer, group chief executive officer of the IOP, says that getting the model right could “supercharge the UK economy as a global leader in the technologies that will define the next industrial revolution”.
“Physics deep tech is central to the UK’s future prosperity — the growth industries of the future lean very heavily on physics and will help both generate economic growth and help move us to a lower carbon, more sustainable economy,” says Grinyer. “By leveraging government support, sharing information better and designing our financial support of this key sector in a more intelligent way we can unlock billions in extra investment.”
That view is backed by Hauser. “Increased investment, economic growth, and solutions to some of our biggest societal challenges [will move] us towards a better world for future generations,” he writes. “The prize is too big to miss”.
Sound absorption Incoming acoustic waves induce relative movements among the fibres, initiating the triboelectric effect within overlapping regions. The generated charges are dissipated through conductive elements and eventually transformed into heat. (Courtesy: Nat. Commun. 10.1038/s41467-024-53847-5)
Noise pollution is an increasingly common problem, affecting both humans and wildlife. An occasional loud noise may be a mere inconvenience, but regular exposure can have adverse effects on human health that go well beyond mild irritation.
As noise pollution worsens, researchers are working to mitigate its impact through new sound-absorption materials. A team headed up by the Agency for Science, Technology and Research (A*STAR) in Singapore has now developed a new approach to tackling the problem: absorbing sound waves using the triboelectric effect.
The World Health Organization defines noise pollution as noise levels above 65 dB, with one in five Europeans regularly exposed to levels considered harmful to their health. “The adverse impacts of airborne noise on human health are a growing concern, including disturbing sleep, elevating stress hormone levels, inciting inflammation and even increasing the risk of cardiovascular diseases,” says Kui Yao, senior author on the study.
Passive provides the best route
Mitigating noise requires conversion of the mechanical energy in acoustic waves into another form. For this, passive sound absorbers are a better option than active versions because they require less maintenance and consume no power (so don’t require a lot of extra components to work).
Previous efforts from Yao’s research group have shown that the piezoelectric effect – the process of creating a current when a material undergoes mechanical stress – can convert mechanical energy into electricity and could be used for passive sound absorption. However, the researchers postulated that the triboelectric effect – the process of electrical charge transfer when two surfaces contact each other – could be more effective for absorbing low-frequency noise.
The triboelectric effect is more commonly applied for harvesting mechanical energy, including acoustic energy. But unlike when used for energy harvesting, the use of the triboelectric effect in noise mitigation applications is not limited by the electronics around the material, which can cause impedance mismatching and electrical leakage. For sound absorbers, therefore, there’s potential to create a device with close to 100% efficient triboelectric conversion of energy.
Exploiting the triboelectric effect
Yao and colleagues developed a fibrous polypropylene/polyethylene terephthalate (PP/PET) composite foam that uses the triboelectric effect and in situ electrical energy dissipation to absorb low-frequency sound waves. In this foam, sound is converted into electricity through embedded electrically conductive elements, and this electricity is then dissipated into heat and removed from the material.
The energy dissipation mechanism requires triboelectric pairing materials with a large difference in charge affinity (the tendency to gain or lose charge from/to the other material). The larger the difference between the two fibre materials in the foam, the better the acoustic absorption performance due to the larger triboelectric effect.
To understand the effectiveness of different foam compositions for absorbing and converting sound waves, the researchers designed an acoustic impedance model to analyse the underlying sound absorption mechanisms. “Our theoretical analysis and experimental results show superior sound absorption performance of triboelectric energy dissipator-enabled composite foams over common acoustic absorbing products,” explains Yao.
The researchers first tested the fibrous PP/PET composite foam theoretically and experimentally and found that it had a high noise reduction coefficient (NRC) of 0.66 (over a broad low-frequency range). This translates to a 24.5% improvement in sound absorption performance compared with sound absorption foams that don’t utilize the triboelectric effect.
On the back of this result, the researchers validated their process further by testing other material combinations. This included: a PP/polyvinylidene fluoride (PVDF) foam with an NRC of 0.67 and 22.6% improvement in sound absorption performance; a glass wool/PVDF foam with an NRC of 0.71 and 50.6% improvement in sound absorption performance; and a polyurethane/PVDF foam with an NRC of 0.79 and 43.6% improvement in sound absorption performance.
All the improvements are based on a comparison against their non-triboelectric counterparts – where the sound absorption performance varies from composition to composition, hence the non-linear relationship between percentage values and NRC values. The foams also showed a sound absorption performance of 0.8 NRC at 800 Hz and around 1.00 NRC with sound waves above 1.4 kHz, compared with commercially available counterpart absorber materials.
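For context, the noise reduction coefficient quoted above is conventionally defined as the average of a material’s sound absorption coefficients at 250, 500, 1000 and 2000 Hz, rounded to the nearest 0.05 (a perfect absorber scores 1, a perfect reflector 0). A minimal calculation, using made-up absorption values rather than the study’s data, looks like this:

```python
def noise_reduction_coefficient(absorption):
    """NRC: the mean of the absorption coefficients at 250, 500, 1000 and
    2000 Hz, rounded to the nearest 0.05."""
    bands = (250, 500, 1000, 2000)
    mean = sum(absorption[f] for f in bands) / len(bands)
    return round(mean / 0.05) * 0.05

# Illustrative (made-up) absorption spectrum for a fibrous foam sample
sample = {250: 0.35, 500: 0.60, 1000: 0.80, 2000: 0.90}
print(f"NRC = {noise_reduction_coefficient(sample):.2f}")   # -> NRC = 0.65
```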
When asked about the future of the sound absorbers, Yao tells Physics World: “We are continuing to improve the performance properties and seeking collaborations for adoption in practical applications”.
For all of us concerned about climate change, 2023 was a grim year. According to the World Meteorological Organisation (WMO), it was the warmest year documented so far, with records broken – and in some cases smashed – for ocean heat, sea-level rise, Antarctic sea-ice loss and glacier retreat.
Capping off the warmest 10-year period on record, global average near-surface temperature hit 1.45 °C above pre-industrial levels. “Never have we been so close – albeit on a temporary basis at the moment – to the 1.5 °C lower limit of the Paris Agreement on climate change,” said WMO secretary-general Celeste Saulo in a statement earlier this year.
The heatwaves, floods, droughts and wildfires of 2023 are clear signs of the increasing dangers of the climate crisis. As we look to the future and wonder how much the world will warm, accurate climate models are vital.
For the physicists who build and run these models, one major challenge is figuring out how clouds are changing as the world warms, and how those changes will impact the climate system. According to the Intergovernmental Panel on Climate Change (IPCC), these cloud feedbacks create the biggest uncertainties in predicting future climate change.
Cloud cover, high and low
Clouds play a key role in the climate system, as they have a profound impact on the Earth’s radiation budget. That is the balance between the amount of energy coming in from solar radiation, and the amount of energy going back out to space, which is both the reflected (shortwave) and thermal (longwave) energy radiated from the Earth.
How energy flows into and away from the Earth. Based on data from multiple sources including NASA’s CERES satellite instrument, which measures reflected solar and emitted infrared radiation fluxes. All values are fluxes in watts per square metre and are average values based on 10 years of data. First published in 2014.
“Even a subtle change in global cloud properties could be enough to have a noticeable effect on the global energy budget and therefore the amount of warming,” explains climate scientist Paulo Ceppi of Imperial College London, who is an expert on the impact of clouds on global climate.
A key factor in this dynamic is “cloud fraction” – a measurement that climate scientists use to determine the percentage of the Earth covered by clouds at a given time. More specifically, it’s the portion of the Earth’s surface covered by cloud, relative to the total surface area. Cloud fraction is determined from satellite imagery: it is the portion of each pixel in a 1-km-resolution cloud mask that is covered by clouds (figure 2).
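In code, that calculation is simply the fraction of cloudy pixels in a region of the cloud mask. A minimal sketch, using a synthetic mask rather than real MODIS data:

```python
import numpy as np

def cloud_fraction(cloud_mask):
    """Cloud fraction of a region: cloudy pixels divided by total pixels.
    `cloud_mask` is a 2D boolean array (True = cloudy), standing in for a
    1-km-resolution satellite cloud mask over a grid box."""
    return np.count_nonzero(cloud_mask) / cloud_mask.size

rng = np.random.default_rng(42)
mask = rng.random((100, 100)) < 0.67    # synthetic mask, roughly 67% cloudy
print(f"cloud fraction = {cloud_fraction(mask):.2f}")
```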
Apart from the amount of cover, what also matters is the altitude of clouds and their optical thickness. Higher, cooler clouds absorb more thermal energy originating from the Earth’s surface, and therefore have a greater greenhouse warming effect than low clouds. They also tend to be thinner, so they let more sunlight through and overall have a net warming effect. Low clouds, on the other hand, have a weak greenhouse effect, but tend to be thicker and reflect more solar radiation. They generally have a net cooling effect.
2 Cloud fraction
These maps show what fraction of an area was cloudy on average each month, according to measurements collected by the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra satellite. MODIS collects information in gridded boxes, or pixels. Cloud fraction is the portion of each pixel that is covered by clouds. Colours range from blue (no clouds) to white (totally cloudy).
The band of persistent clouds around the equator is the Intertropical Convergence Zone – where the easterly trade winds in the Northern and Southern Hemispheres meet, pushing warm, moist air high into the atmosphere. The air expands and cools, and the water vapour condenses into clouds and rain. The cloud band shifts slightly north and south of the equator with the seasons. In tropical countries, this shifting of the zone is what causes rainy and dry seasons.
Video and data courtesy: NASA Earth Observations
As the climate warms, cloud properties are changing, altering the radiation budget and influencing the amount of warming. Indeed, there are two key changes: rising cloud tops and a reduction in low cloud amount.
The best understood effect, Ceppi explains, is that as global temperatures increase, clouds rise higher into the troposphere, the lowermost atmospheric layer. This is because as the troposphere warms it expands, extending to greater altitudes. Over the last 40 years the top of the troposphere, known as the tropopause, has risen by about 50 metres per decade (Sci. Adv. 10.1126/sciadv.abi8065).
“You are left with clouds that rise higher up on average, so have a greater greenhouse warming effect,” Ceppi says. He adds that modelling data and satellite observations support the idea that cloud tops are rising.
Conversely, coverage of low clouds, which reflect sunlight and cool the Earth’s surface, is decreasing with warming. This reduction is mainly in marine low clouds over tropical and subtropical regions. “We are talking a few per cent, so not something that you would necessarily notice with your bare eyes, but it’s enough to have an effect of amplifying global warming,” he adds.
These changes in low clouds are partly responsible for some of the extreme ocean heatwaves seen in recent years (figure 3). While the mechanisms behind these events are complex, one known driver is this reduction in low cloud cover, which allows more solar radiation to hit the ocean (Science 325 460).
“It’s cloud feedback on a more local scale,” Ceppi says. “So, the ocean surface warms locally and that prompts low cloud dissipation, which leads to more solar radiation being absorbed at the surface, which prompts further warming and therefore amplifies and sustains those events.”
Sea surface temperature anomaly (°C) for the month of June 2023, relative to the 1991–2020 reference period. The global ocean experienced an average daily marine heatwave coverage of 32%, well above the previous record of 23% in 2016. At the end of 2023, most of the global ocean between 20° S and 20° N had been in heatwave conditions since early November.
Despite these insights, several questions remain unanswered. For example, Ceppi explains that while we know that low cloud changes will amplify warming, the strength of these effects needs further investigation, to reduce the uncertainty range.
Also, as high clouds move higher, there may be other important changes, such as shifts in optical thickness, which is a measure of how much light is scattered or absorbed by cloud droplets, instead of passing through the atmosphere. “We are a little less certain about what else happens to [high clouds],” says Ceppi.
Diurnal changes
It’s not just the spatial distribution of clouds that impacts climate. Recent research has found an increasing asymmetry in cloud-cover changes between day and night. Simply put, daytime clouds tend to cool Earth’s surface by reflecting solar radiation, while at night clouds trap thermal radiation and have a warming effect. This shift in diurnal distribution could create a feedback loop that amplifies global warming.
By analysing satellite observations and data from the sixth phase of the Coupled Model Intercomparison Project (CMIP6) – which incorporates historical data collected between 1970 and 2014 as well as projections up to the year 2100 – the researchers concluded that this diurnal asymmetry is largely due to rising concentrations of greenhouse gases that make the lower troposphere more stable, which in turn increases the overall heating.
Fewer clouds form during the day, thereby reducing the amount of shortwave radiation that is reflected away. Night-time clouds are more stable, which in turn increases the longwave greenhouse effect. “Our study shows that this asymmetry causes a positive feedback loop that amplifies global warming,” says Quaas. This growing asymmetry is mainly driven by a daytime increase in turbulence in the lower troposphere as the climate warms, meaning that clouds are less likely to form and remain stable during the day.
Mixed-phase clouds
Climate models are affected by more than just the distribution of clouds in space. What also matters is the distribution of liquid water and ice within clouds. In fact, researchers have found that the way in which models simulate this effect influences their predictions of warming in response to greenhouse gas emissions.
So-called “mixed-phase” clouds contain water vapour, ice particles and supercooled liquid droplets coexisting in a three-phase colloidal system. Ubiquitous in the troposphere and found at all latitudes from the polar regions to the tropics, they play an important role in the climate system.
As the atmosphere warms, mixed-phase clouds tend to shift from ice to liquid water. This transition makes these clouds more reflective, enhancing their cooling effect on the Earth’s surface – a negative feedback that dampens global warming.
In 2016 Trude Storelvmo, an atmospheric scientist at the University of Oslo in Norway, and her colleagues made an important discovery: many climate models overestimate this negative feedback (Geophys. Res. Lett. 10.1029/2023GL105053). Indeed, the models often simulate clouds with too much ice and not enough liquid water. This error exaggerates the cooling effect from the phase transition. Essentially, the clouds in these simulations have too much ice to lose, causing the models to overestimate the increase in their reflectiveness as they warm.
One problem is that these models oversimplify cloud structure, failing to capture the true heterogeneity of mixed-phase clouds. Satellite, balloon and aircraft observations reveal that these clouds are not uniformly mixed, either vertically or horizontally. Instead, they contain pockets of ice and liquid water, leading to complex interactions that are inadequately represented in the simulations. As a result, they overestimate ice formation and underestimate liquid cloud development.
Storelvmo’s work also found that initially, increased cloud reflectivity has a strong effect that helps mitigate global warming. But as the atmosphere continues to warm, the increase in reflectiveness slows. This shift is intuitive: as the clouds become more liquid, they have less ice to lose. At some point they become predominantly liquid, eliminating the phase transition. The clouds cannot become any more liquid – and thus reflective – and warming accelerates.
Liquid cloud tops
Earlier this year, Storelvmo and colleagues carried out a new study, using satellite data to study the vertical composition of mixed-phase clouds. The team discovered that globally, these clouds are more liquid at the top (Commun. Earth Environ. 5 390).
Storelvmo explains that this top cloud layer is important as “it is the first part of the cloud that radiation interacts with”. When the researchers adjusted climate models to correctly capture this vertical composition, it had a significant impact, triggering an additional degree of warming in a “high-carbon emissions” scenario by the end of this century, compared with current climate projections.
“It is not inconceivable that we will reach temperatures where most of [the negative feedback from clouds] is lost, with current CO2 emissions,” says Storelvmo. The point at which this happens is unclear, but is something that scientists are actively working on.
The study also revealed that while changes to mixed-phase clouds in the northern mid-to-high latitudes mainly influence the climate in the northern hemisphere, changes to clouds in the same southern latitudes have global implications.
“When we modify clouds in the southern extratropics, that’s communicated all the way to the Arctic – it’s actually influencing warming in the Arctic,” says Storelvmo. The reasons for this are not fully understood, but Storelvmo says other studies have seen this effect too.
“It’s an open and active area of research, but it seems that the atmospheric circulation helps pass on perturbations from the Southern Ocean much more efficiently than northern perturbations,” she explains.
The aerosol problem
As well as generating the greenhouse gases that drive the climate crisis, fossil fuel burning also produces aerosols. The resulting aerosol pollution is a huge public health issue. The recent “State of Global Air Report 2024” from the Health Effects Institute found that globally eight million people died because of air pollution in 2021. Dirty air is also now the second-leading cause of death in children under five, after malnutrition.
To tackle these health implications, many countries and organizations have introduced air-quality clean-up policies. But cleaning up air pollution has an unfortunate side-effect: it exacerbates the climate crisis. Indeed, a recent study has even warned that aggressive aerosol mitigation policies will hinder our chances of keeping global warming below 2 °C (Earth’s Future 10.1029/2023EF004233).
Deadly conundrum According to some measures, Lahore in Pakistan is the city with the worst air pollution in the world. Air pollution is responsible for tens of millions of deaths every year. But improving air quality can actually exacerbate the climate crisis, as it decreases the small particles in clouds, which are key to reflecting radiation. (Courtesy: Shutterstock/A M Syed)
When you add small pollution particles to clouds, explains Haywood, it creates “clouds that are made up of a larger number of small cloud droplets and those clouds are more reflective”. The shrinking of cloud droplet size can also suppress precipitation, keeping more liquid water in the clouds. The clouds therefore last longer, cover a greater area and become more reflective.
But if atmospheric aerosol concentrations are reduced, so too are these reflective, planet-cooling effects. “This masking effect by the aerosols is taken out and we unveil more and more of the full greenhouse warming,” says Quaas.
A good example of this is recent policy aimed at cleaning up shipping fuels by lowering sulphur concentrations. At the start of 2020 the International Maritime Organisation introduced regulations that slashed the limit on sulphur content in fuels from 3.5% to 0.5%.
Haywood explains that this has reduced the additional reflectivity that this pollution created in clouds and caused a sharp increase in global warming rates. “We’ve done some simulations with climate models, and they seem to be suggestive of at least three to four years acceleration of global warming,” he adds.
Overall, models suggest that if we remove all the world’s polluting aerosols, we can expect to see around 0.4 °C of additional warming, says Quaas. He acknowledges that we must improve air quality “because we cannot just accept people dying and ecosystems deteriorating”. By doing so, we must also be prepared for this additional warming. But more work is needed, “because the current uncertainty is too large”, he continues. Uncertainty in the figures is around 50%, according to Quaas, which means that slashing aerosol pollution could cause anywhere from 0.2 to 0.6 °C of additional warming.
Haywood says that while current models do a relatively good job of representing how aerosols reduce cloud droplet size and increase cloud brightness, they do a poor job of showing how aerosols affect cloud fraction.
Cloud manipulation
The fact that aerosols cool the planet by brightening clouds opens an obvious question: could we use aerosols to deliberately manipulate cloud properties to mitigate climate change?
“There are more recent proposals to combat the impacts, or the worst of the impacts of global warming, through either stratospheric aerosol injection or marine cloud brightening, but they are really in their infancy and need to be understood an awful lot better before any kind of deployment can even be considered,” says Haywood. “You need to know not just how the aerosols might interact with clouds, but also how the cloud then interacts with the climate system and the [atmospheric] teleconnections that changing cloud properties can induce.”
Haywood recently co-authored a position paper, together with a group of atmospheric scientists in the US and Europe, arguing that a programme of physical science research is needed to evaluate the viability and risks of marine cloud brightening (Sci. Adv. 10 eadi8594).
A proposed form of solar radiation management, known as marine cloud brightening, would involve injecting aerosol particles into low-level, liquid marine clouds – mainly those covering large areas of subtropical oceans – to increase their reflectiveness (figure 4).
Most marine cloud-brightening proposals suggest using saltwater spray as the aerosol. In theory, when sprayed into the air the saltwater would evaporate to produce fine haze particles, which would then be transported by air currents into cloud. Once in the clouds, these particles would increase the number of cloud droplets, and so increase cloud brightness.
In this proposal, ship-based generators would ingest seawater and produce fine aerosol haze droplets with an equivalent dry diameter of approximately 50 nm. In optimal conditions, many of these haze droplets would be lofted into the cloud by updrafts, where they would modify cloud microphysics processes, such as increasing droplet number concentrations, suppressing rain formation, and extending the coverage and lifetime of the clouds. At the cloud scale, the degree of cloud brightening and surface cooling would depend on how effectively the droplet number concentrations can be increased, droplet sizes reduced, and cloud amount and lifetime increased.
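The leverage of adding droplets can be estimated from the classic Twomey relation: for a fixed amount of liquid water, cloud optical depth scales as the cube root of the droplet number concentration N, so the albedo susceptibility is roughly dA/d lnN ≈ A(1 − A)/3. For a cloud with albedo 0.5, a 10% increase in droplet number would therefore brighten it by only about 0.008 – a small change per cloud, but potentially significant when sustained over large areas of subtropical ocean.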
Feingold, an author on the position paper, says that a key challenge lies in predicting how additional particles will affect cloud properties. For instance, while more haze droplets might theoretically brighten clouds, it could also lead to unintended effects like increased evaporation or rain, which could even reduce cloud coverage.
Another difficult challenge is the inconstancy of cloud response to aerosols. “Ship traffic is really regular,” explains Feingold, “but if you look at satellite imagery on a daily basis in a certain area, sometimes you see really clear, beautiful ship tracks and other times you don’t – and the ship traffic hasn’t changed but the meteorology has.” This variability depends on cloud susceptibility to aerosols, which is influenced by meteorological conditions.
And even if cloud systems that respond well to marine cloud brightening are identified, it would not be sensible to repeatedly target them. “Seeding the same area persistently could have some really serious knock-on effects on regional temperature and rainfall,” says Feingold.
Essentially, aerosol injections into the same area day after day would create localized radiative cooling, which would impact regional climate patterns. This highlights the ethical concerns with cloud brightening, as such effects could benefit some regions while negatively impacting others.
Addressing many of these questions requires significant advances in current climate models, so that the entire process – from the effects of aerosols on cloud microphysics through to the larger impact on clouds and then global climate circulations – can be accurately simulated. Bridging these knowledge gaps will require controlled field experiments, such as aerosol releases from point sources in areas of interest, while taking observational data using tools like drones, aeroplanes and satellites. Such experiments would help scientists get a “handle on this connection between emitted particles and brightening”, says Feingold.
But physicists can only do so much. “We are not trying to push marine cloud brightening, we are trying to understand it,” says Feingold. He argues that a parallel effort to discuss the governance of marine cloud brightening is also needed.
In recent years much progress has been made in determining the impact of clouds on our planet’s climate and their importance in climate modelling. “While major advances in the understanding of cloud processes have increased the level of confidence and decreased the uncertainty range for the cloud feedback by about 50% compared to AR5 [IPCC report], clouds remain the largest contribution to overall uncertainty in climate feedbacks (high confidence),” states the IPCC’s latest Assessment Report (AR6), published in 2021. Physicists and atmospheric scientists will continue to study how cloud systems will respond to our ever-changing climate and planet, but ultimately, it is wider society that needs to decide the way forward.
Working principle Illustration of the single-crystal (a) and cascade-connected two-crystal (b) devices under X-ray irradiation. (c) Time-resolved photocurrent responses of the two devices. (Courtesy: CC BY 4.0/ACS Cent. Sci. 10.1021/acscentsci.4c01296)
X-ray imaging plays an indispensable role in diagnosing and staging disease. Nevertheless, exposure to high doses of X-rays has potential for harm, and much effort is focused towards reducing radiation exposure while maintaining diagnostic function. With this aim, researchers at the King Abdullah University of Science and Technology (KAUST) have shown how interconnecting single-crystal devices can create an X-ray detector with an ultralow detection threshold.
The team created devices using lab-grown single crystals of methylammonium lead bromide (MAPbBr3), a perovskite material that exhibits considerable stability, minimal ion migration and a high X-ray absorption cross-section – making it ideal for X-ray detection. To improve performance further, they used cascade engineering to connect two or more crystals together in series, reporting their findings in ACS Central Science.
X-rays incident upon a semiconductor crystal detector generate a photocurrent via the creation of electron–hole pairs. When exposed to the same X-ray dose, cascade-connected crystals should exhibit the same photocurrent as a single-crystal device (as they generate equal net concentrations of electron–hole pairs). The cascade configuration, however, has a higher resistivity and should thus have a much lower dark current, improving the signal-to-noise ratio and enhancing the detection performance of the cascade device.
To test this premise, senior author Omar Mohammed and colleagues grew single crystals of MAPbBr3. They first selected four identical crystals to evaluate (SC1, SC2, SC3 and SC4), each 3 × 3 mm in area and approximately 2 mm thick. Measuring various optical and electrical properties revealed high consistency across the four samples.
“The synthesis process allows for reproducible production of MAPbBr3 single crystals, underscoring their strong potential for commercial applications,” says Mohammed.
Optimizing detector performance
Mohammed and colleagues fabricated X-ray detectors containing a single MAPbBr3 perovskite crystal (SC1) and detectors with two, three and four crystals connected in series (SC1−2, SC1−3 and SC1−4). To compare the dark currents of the devices they irradiated each one with X-rays under a constant 2 V bias voltage. The cascade-connected SC1–2 exhibited a dark current of 7.04 nA, roughly half that generated by SC1 (13.4 nA). SC1–3 and SC1–4 reduced the dark current further, to 4 and 3 nA, respectively.
The researchers also measured the dark current for the four devices as the bias voltage changed from 0 to -10 V. They found that SC1 reached the highest dark current of 547 nA, while SC1–2, SC1–3 and SC1–4 showed progressively decreasing dark currents of 134, 90 and 50 nA, respectively. “These findings highlight the effectiveness of cascade engineering in reducing dark current levels,” Mohammed notes.
Next, the team assessed the current stability of the devices under continuous X-ray irradiation for 450 s. SC1–2 exhibited a stable current response, with a skewness value of just 0.09, while SC1, SC1–3 and SC1–4 had larger skewness values of 0.75, 0.45 and 0.76, respectively.
The researchers point out that while connecting more single crystals in series reduced the dark current, increasing the number of connections also lowered the stability of the device. The two-crystal SC1–2 represents the optimal balance.
Low-dose imaging
One key component required for low-dose X-ray imaging is a low detection threshold. The conventional single-crystal SC1 showed a detection limit of 590 nGy/s under a 2 V bias. SC1–2 decreased this limit to 100 nGy/s – the lowest of all four devices and surpassing the existing record achieved by MAPbBr3 perovskite devices under near-identical conditions.
Spatial resolution is another important consideration. To assess this, the researchers estimated the modulation transfer function (the level of original contrast maintained by the detector) for each of the four devices. They found that SC1–2 exhibited the best spatial resolution of 8.5 line pairs/mm, compared with 5.6, 5.4 and 4 line pairs/mm for SC1, SC1–3 and SC1–4, respectively.
Optimal imaging Actual and X-ray images of a key and a raspberry with a needle obtained by the SC1 to SC1–4 devices. (Courtesy: CC BY 4.0/ACS Cent. Sci. 10.1021/acscentsci.4c01296)
Finally, the researchers performed low-dose X-ray imaging experiments using the four devices, first imaging a key at a dose rate of 3.1 μGy/s. SC1 exhibited an unclear image due to the unstable current affecting its resolution. Devices SC1–2 to SC1–4 produced clearer images of the key, with SC1–2 showing the best image contrast.
They also imaged a USB port at a dose rate of 2.3 μGy/s, a metal needle piercing a raspberry at 1.9 μGy/s and an earring at 750 nGy/s. In all cases, SC1–2 exhibited the highest quality image.
The researchers conclude that the cascade-engineered configuration represents a significant shift in low-dose X-ray detection, with potential to advance applications that require minimal radiation exposure combined with excellent image quality. They also note that the approach works with different materials, demonstrating X-ray detection using cascaded cadmium telluride (CdTe) single crystals.
Mohammed says that the team is now investigating the application of the cascade structure in other perovskite single crystals, such as FAPbI3 and MAPbI3, with the goal of reducing their detection limits. “Moreover, efforts are underway to enhance the packaging of MAPbBr3 cascade single crystals to facilitate their use in dosimeter detection for real-world applications,” he tells Physics World.