Soft robot dives 10 km under the ocean

A soft, self-powered robot capable of swimming in the deepest regions of Earth’s oceans has been created by researchers in China. Inspired by the hadal snailfish, the team led by Guorui Li at Zhejiang University designed its device to feature flapping fins and decentralized electronics encased in a deformable silicone body. Having successfully demonstrated the design in the Mariana Trench, the researchers believe their innovations could lead to new ways of exploring some of the most remote regions of the oceans.

The scope of human exploration has extended to even the most inhospitable environments on land, but the deepest regions of Earth’s oceans remain almost entirely unexplored. At depths below 3000 m, extreme pressures experienced by exploration vessels make it very difficult to design robust electronic components required for onboard power, control, and thrust. If these components are closely packed together on a rigid circuit board, pressure-induced shear stresses can cause them to fail at their interfaces.

To overcome these challenges, researchers seek inspiration from the many organisms that thrive at such depths. In their study, Li’s team considered the hadal snailfish, which was recently discovered at depths exceeding 8000 m in the Pacific Ocean. These strange creatures have several features that give them high adaptability and mobility even at extreme pressures, including a distributed, highly deformable skull and flapping pectoral fins.

Decentralized electronics

Imitating these features, the researchers designed a pressure-resilient electronics system, which could be fully encased in a soft silicone body. Like the skull of a snailfish, the team decentralized the components of this system – either by increasing the distances between components, or by separating them into several smaller circuit boards. This allowed them to reduce maximum shear stresses at component interfaces by 17%, making them far more resilient to extreme pressures.

To imitate the flapping pectoral fins of the snailfish, Li and colleagues designed artificial muscles using dielectric elastomers: rubber-like materials that convert electrical energy into mechanical work. By sandwiching a dielectric elastomer membrane between compliant electrodes, the researchers could generate flapping in two silicone films, which they supported using elastic frames.

Li’s team tested the performance of their robot at the bottom of the Mariana Trench, some 10,900 m beneath the ocean surface. Their device was powered by an onboard lithium-ion battery, and fitted with a high-voltage amplifier, video cameras, and LED lights. Even at pressures exceeding 1000 atmospheres, it maintained a flapping motion for 45 min. Further tests in the South China Sea (see video), alongside experiments in a pressure chamber, demonstrated that the device could swim freely and resiliently at speeds exceeding 5 cm/s.

Li and colleagues now hope that their design could be extended to enable more complex tasks, including sensing and communications. They will now focus on developing new materials and structures to enhance the intelligence, versatility, and efficiency of soft robots – further improving their ability to operate in extreme conditions.

The robot is described in Nature.

Mantis shrimp inspires hyperspectral and polarimetric light sensor

A novel hyperspectral and polarimetric optical sensor that’s small enough to fit on a smartphone has been developed by researchers in the US and Korea. The device consists of an alternating stack of polarization-sensitive organic photovoltaics (P-OPVs) and folded polymer retarders, and can detect four spectral and three polarization channels. However, the researchers claim that the design could ultimately sense 15 spectral channels over the visible spectrum.

Hyperspectral and polarization imaging has the potential to revolutionize many fields, from biomedicine to astronomy. Hyperspectral imaging senses visible light in many narrower bands than the human eye can resolve. This can be useful, for example, for determining the chemical composition of objects, identifying hazardous gases, or detecting subtle differences in tissue composition for medical diagnosis. Polarimetry, on the other hand, measures the polarization of light, providing useful information on the surface geometry and subsurface detail of objects.

Current devices for measuring spectral and polarimetric information simultaneously, known as spectral polarization imaging, are large and expensive, and have image quality issues. To build a smaller, more user-friendly sensor, Ali Altaqui of North Carolina State University and his colleagues turned to the mantis shrimp.

Mantis shrimp can detect 12 different spectral channels, or colours, a huge step up from the three – red, green and blue – that humans can see. They can also analyse the polarization of light. These marine crustaceans use this advanced vision as a tool for navigation, communication, object separation and predator evasion.

The compound eye of mantis shrimp contains 12 spectrally selective photoreceptors – with sensitivity ranging from ultraviolet to far-red – and four elements that are sensitive to circular polarization, vertically stacked along a single optical axis. As light propagates into the stack, the mantis shrimp extracts spectral and polarization information.

Taking inspiration from these crustaceans, Altaqui and his colleagues built a sensor comprising spectrally selective elements – the folded polymer retarders – and P-OPVs, vertically stacked along a single optical axis.

The device simultaneously detects spectral and polarization information in a similar way to the mantis shrimp’s eye. The first two P-OPVs detect the light’s polarization state; then an alternating arrangement of folded retarders and P-OPVs provides the spectral analysis.

Altaqui tells Physics World that when broadband light enters the device it is vertically polarized by a polarizer. The first folded retarder then rotates red light by 90°, polarizing it horizontally while leaving the other colours polarized vertically. Next, the light hits a P-OPV element, which absorbs the now horizontally polarized red light while transmitting all other wavelengths.

The next folded retarder rotates only yellow light by 90°, and the following P-OPV element absorbs this yellow light while transmitting the other colours. “This process is repeated where different folded retarders will rotate different colours by 90 degrees, allowing different P-OPVs to absorb the rotated colours,” Altaqui explains.
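The stage-by-stage filtering Altaqui describes can be sketched as a toy model. This is a minimal sketch of an idealized device in which each folded retarder rotates exactly one band to horizontal and the P-OPV behind it absorbs only that rotated band; the band names are ours, not the paper’s channel definitions:

```python
# Toy model of the retarder/P-OPV cascade: all bands enter vertically
# polarized; each retarder flips one named band to horizontal, and the
# next P-OPV absorbs the horizontal component while transmitting the rest.

def cascade(bands, retarder_order):
    """Return (absorption order, bands still propagating)."""
    propagating = set(bands)   # every band starts vertically polarized
    absorbed = []
    for target in retarder_order:
        if target in propagating:
            # Retarder rotates the target band by 90 degrees;
            # the following P-OPV absorbs it.
            absorbed.append(target)
            propagating.remove(target)
    return absorbed, sorted(propagating)

absorbed, remaining = cascade(
    ["red", "yellow", "green", "blue"], ["red", "yellow"])
print(absorbed)   # -> ['red', 'yellow']
print(remaining)  # -> ['blue', 'green']
```

Each retarder/P-OPV pair peels off one spectral channel, which is why stacking more pairs yields more channels.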

Pratik Sen, a co-author of the paper, published in Science Advances, says: “Organic semiconductors are interesting materials because they are semi-transparent and can be fabricated to induce intrinsic sensitivity to polarized light. This means we can integrate them into new and exciting device architectures that would not be possible with some of the more traditional semiconductor materials like silicon.”

Altaqui says that the sensor could have a wide range of applications, including in fields such as biomedical imaging, agriculture and food safety, defence, astronomy, atmospheric monitoring and machine vision. For example, it could be used for the early diagnosis of skin cancer, to assess the quality of crops or to characterize aerosols for climate modelling, he explains.

According to the researchers, modelling shows that their technique could be used to create detectors that measure more than 15 spectral channels with wavelengths from 400 to 750 nm. But Altaqui notes that they do not currently have the required retarder materials to produce 15 spectral bands. They are now working to further shrink the sensor and incorporate additional mantis shrimp eye features.

Has a new particle called a ‘leptoquark’ been spotted at CERN?

A hint of the possible existence of a hypothetical particle called a leptoquark has appeared as an unexpected difference in how beauty quarks decay to create electrons or muons. Measured by physicists working on the LHCb experiment on the Large Hadron Collider (LHC) at CERN, the difference appears to violate the principle of “lepton universality”, which is part of the Standard Model of particle physics. The measurement has been made at a statistical significance of 3.1σ, which is well below the 5σ level that is usually considered a discovery. If the violation is confirmed, it could provide physicists with important clues about physics beyond the Standard Model – such as the existence of leptoquarks.

When high-energy protons are smashed together at the LHC, large numbers of exotic particles are created, including some containing the beauty quark. These exotic particles quickly decay, and beauty quarks can follow decay paths that involve the production of either electrons or muons, which are both leptons. According to the Standard Model of particle physics, the interactions involved in producing leptons do not discriminate between lepton type, so the rates at which electrons and muons are created by beauty-quark decays are expected to be the same.
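A lepton-universality test boils down to comparing decay counts in the two lepton channels. Here is a minimal sketch using made-up counts and pure Poisson counting errors; LHCb’s real analysis additionally corrects for efficiencies, backgrounds and normalization channels:

```python
# Illustrative only: ratio of muon to electron decay counts and its
# deviation (in sigma) from the Standard Model expectation of 1.
import math

def ratio_significance(n_mu, n_e):
    """Ratio r = n_mu / n_e with sqrt(N) counting errors propagated,
    and the deviation of r from 1 in units of its uncertainty."""
    r = n_mu / n_e
    sigma_r = r * math.sqrt(1 / n_mu + 1 / n_e)
    return r, abs(r - 1) / sigma_r

# Hypothetical counts, chosen only to show the mechanics:
r, nsig = ratio_significance(n_mu=1140, n_e=1260)
print(f"R = {r:.3f}, deviation = {nsig:.1f} sigma")
```

With real data the uncertainty budget is dominated by systematics as well as statistics, which is why confirming a ~3σ hint needs much more data.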

Starting in 2014, physicists working on LHCb noticed hints of the violation of this lepton universality. Now, after analysing collision data collected between 2011 and 2018, the researchers have found that the beauty quark appears to favour the electron decay chain over the muon decay chain.

New particle

The decay process involves the conversion of a beauty quark into a strange quark with the production of an electron and antielectron or a muon and antimuon. The Standard Model predicts that this occurs via electroweak bosons – the W+ and Z0 particles. However, violation of lepton universality suggests that there may be other ways for this to happen. One tantalizing explanation is the existence of a hypothetical particle called a leptoquark: a massive boson that couples to both leptons and quarks. In principle, leptoquarks could have different coupling strengths to electrons and muons.

While a new particle is an exciting proposition, physicists will have to wait until LHCb gathers more data in upcoming runs of the LHC to confirm the violation of the Standard Model. Team member Nicola Serra of the University of Zurich says: “It is too early to draw a final conclusion. However, this deviation agrees with a pattern of anomalies that have manifested themselves over the last decade. Fortunately, the LHCb collaboration is well placed to clarify the potential existence of new physics effects in these decays. We just need many more related measurements in the future.”

The research is described in a preprint on arXiv.

End-to-End QA with the QUASAR™ Multi-Purpose Body Phantom: TPS and SBRT

Want to learn more on this subject?

The QUASAR™ Multi-Purpose Body Phantom is a flexible QA tool designed to perform comprehensive testing recommended by AAPM TG 53/66/76 and IAEA TECDOC-1583.

The phantom incorporates a wide variety of test objects in a solid body oval. Designed to perform end-to-end QA on Simulation, Treatment Planning and Treatment Delivery Systems, the Multi-Purpose Body Phantom is a comprehensive solution for today’s physicist. Users can also increase testing versatility by adding motion to the phantom with the addition of a QUASAR™ Respiratory Motion Assembly or Motion Platform.

This webinar, presented by Joanne Tang, will provide an overview of the advanced features and added motion capabilities of the QUASAR™ Multi-Purpose Body Phantom, highlighting both its versatile testing applications and its value to the SBRT workflow.

Joanne Tang is an application specialist at Modus QA. Having joined Modus QA after completing her BSc and MSc in medical biophysics at the University of Western Ontario, she is currently involved in customer application support of the QUASAR™ Multi-Purpose Body Phantom.

Proton radiography: one step closer to clinical use

Protons. Destroying cancer cells more precisely than X-rays. Depositing less dose in healthy tissues. Verifying treatment plans and improving patient alignment?

Christina Sarosiek, a graduate student at Northern Illinois University, is working on that.

“The goal of our project is to make proton therapy safer and more effective using an imaging modality called proton radiography,” she explains. “We’re able to take a picture of the tumour directly before treatment and therefore know that we’re irradiating the tumour and not healthy tissues.”

Creating radiographs with protons

Like all medical therapies, proton therapy has some uncertainties. Changes in patient anatomy between treatments, small misalignments of the patient, or errors in the calibration from a planning CT scan to a proton treatment plan, for example, can all lead to underdosing the tumour or delivering dose to healthy tissue, neither of which is desirable.

Christina Sarosiek

Sarosiek is part of an interdisciplinary team that is developing and characterizing a prototype proton radiography system that not only improves upon methods used today, but also may help scientists and clinicians tackle all of these other challenges.

For example, medical physicists can verify proton range in vivo using a range probe, which works by passing a low-dose, high-energy proton pencil beam (a very thin beam) through a patient and comparing the measured integral Bragg peak with that from a planning CT. But a range probe is limited because it doesn’t provide any spatial information and can’t improve patient alignment.

Proton radiography, on the other hand, works by sending very high-energy but low-intensity protons through a patient and then reconstructing an image based on the resulting data, which represents, pixel-by-pixel, the water-equivalent thickness – basically, how far a proton would have travelled if it were in water. The source of image contrast in a proton radiograph is the energy loss of the transmitted protons (the integrated stopping powers of protons in the patient).
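The water-equivalent thickness behind each radiograph pixel is essentially a line integral of relative stopping power (RSP) along the proton’s path. A minimal sketch with illustrative tissue RSP values (not the study’s data):

```python
# Sketch of the water-equivalent thickness (WET) idea: sum the physical
# thickness of each tissue segment weighted by its relative stopping
# power. RSP values below are illustrative, not from the paper.

def water_equivalent_thickness(segments):
    """segments: list of (physical_thickness_cm, rsp) pairs along one
    proton's path. Returns WET in cm of water."""
    return sum(thickness * rsp for thickness, rsp in segments)

# A toy path: 2 cm soft tissue, 1 cm bone, 2 cm soft tissue.
path = [(2.0, 1.04), (1.0, 1.6), (2.0, 1.04)]
print(water_equivalent_thickness(path))  # -> 5.76 cm of water
```

If the bone segment changed between planning and treatment, the summed WET would change, and that difference would appear in the radiograph pixel.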

“We’re getting an image of the integrated energy through an entire patient,” Sarosiek says. “If the anatomy changes or the [electron] density changes on the way to the tumour, we would see that appear as a difference in the full integration through the patient.”

To put this another way, a proton radiograph would tell a medical physicist or clinician if, but not precisely where, there was a difference from what was planned.

“Proton radiographs can alert us to range discrepancies in the plane perpendicular to the beam, but a single proton radiograph cannot inform us about exactly where along the beam path (that is, proximal or distal to the tumour) that discrepancy lies,” Sarosiek explains.

Image quality sufficient for pre-treatment range verification

Sarosiek and the team characterized their prototype proton radiography system using several different phantoms. They published the results of these studies in Medical Physics.

The proton radiography system and the team’s reconstruction algorithm produced images with high enough spatial resolution and image quality to help align a patient better right before their proton treatment starts.

Proton radiographs

Results also illustrated that the system could be used to help clinicians with quality assurance, by detecting errors in treatment plans resulting from changes in patient density between a treatment planning CT and the proton treatment plan. These applications might ultimately help clinicians and medical physicists reduce margins in treatment planning.

Sarosiek says that their results are comparable to those in other studies, which rely on custom proton radiograph systems. One advantage of the system studied by Sarosiek and the rest of the team is that it is currently being optimized and commercialized by an industry collaborator.

An integrated proton imaging and treatment delivery system

“In the short term, we know [our proton radiography system] works. The long-term impact, I think, is that we may be able to use the same modality for imaging and therapy,” Sarosiek says.

Currently, proton therapy treatments are planned using an X-ray-based CT scan. CT images are displayed in Hounsfield units, which represent a transformation of the X-rays’ attenuation coefficients. Protons interact with tissue and deposit dose differently, and this presents an additional source of error for proton therapy.

Researchers are also working on limiting these errors. Several proton radiographs may be used to create patient-specific calibration curves relating CT numbers to relative proton stopping power (RSP). The downside, though, is that thus far, studies looking into this are based on simulated data, Sarosiek says.

Another avenue being pursued is proton CT. Proton CT would allow medical physicists and clinicians to create a treatment plan directly using the proton CT and avoid a calibration curve like the CT-to-RSP curve examined by other groups. Sarosiek’s collaborators will be investigating this in the future.

For now, though, Sarosiek and the rest of the research team are focused on one of the immediate limitations of proton radiography before it can transition into clinical use.

“One of the major limitations [of this approach] is that for proton radiography, we require a high-energy, low-intensity proton beam. But clinical proton treatment beams have much higher intensity with lower energy,” Sarosiek says. That means that any integrated proton imaging and treatment delivery system would need onboard beam monitoring systems that analyse low-intensity imaging beams, ensuring that dose to the patient remains low.

Once this problem is solved, proton radiography can enter clinical practice, she says.

Porous carbon aerogels might power future Mars missions

Lightweight composite materials containing more than 99% air could prove key to powering future space missions. The materials, known as porous carbon aerogels, make up the electrodes of a supercapacitor developed by researchers at the NASA-sponsored Merced nAnomaterials Center for Energy and Sensing, the University of California, Santa Cruz (UCSC), the University of California, Merced, and the Lawrence Livermore National Laboratory. The device’s ability to operate at extremely cold temperatures could also make it a good power source for polar expeditions on Earth.

Many spacecraft require heating systems to operate in their inhospitable environment. NASA’s Perseverance Rover, for example, recently began a two-year mission to look for signs of ancient microbial life on Mars, where the average temperature is –62 °C, dropping below –125 °C in the winter. Onboard heaters keep the electrolytes in the rover’s batteries from freezing, but the heaters and the energy sources required to power them add weight to the spacecraft payload.

In-between capacitors and batteries

In the trade-off between charge/discharge speed and energy storage capacity, supercapacitors – or, more accurately, electric double-layer or electrochemical capacitors – fall somewhere between batteries and conventional (dielectric) capacitors. Though they store less charge than batteries, supercapacitors outperform conventional capacitors in this respect thanks to their porous electrodes, which have surface areas as large as several square kilometres. The double layer that forms at the electrolyte-electrode interface of such devices when a voltage is applied further increases the amount of charge they can store.

Supercapacitors also have some advantages over batteries. They can charge and discharge in minutes – unlike batteries, which take hours. They also have a much longer lifespan, lasting for millions of cycles rather than thousands. And unlike batteries, which work through chemical reactions, supercapacitors store energy in the form of electrically charged ions that assemble on the surfaces of their electrodes.

Hierarchical channels

Building on their previous work, the researchers, led by Jennifer Lu of UC Merced and Yat Li of UCSC, used a 3D printing technique called direct ink writing to make their supercapacitor electrodes. They made the ink by combining cellulose nanocrystals (which provide carbon) with a suspension of silica microspheres. The latter serve as a hard template for creating macropores in the lattice structure of the aerogel once it has been freeze-dried.

The pores in the aerogel lattice vary in size from 500 microns to just nanometres, creating a hierarchical structure of channels. These channels significantly increase the rate at which the ions in an electrolyte diffuse through the material, while also minimizing the distance they need to travel.

Advantages over other supercapacitors

The team’s 3D multiscale porous carbon aerogel has a surface area of around 1750 m2/g, and tests show that an electrode made from the material has a capacitance of 148.6 F/g at a scan rate of 5 mV/s. The researchers say that this is higher than most other low-temperature supercapacitors.
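For a rough sense of scale, a gravimetric capacitance can be converted into an energy density via E = ½CV². The 1 V cell voltage below is an assumed figure for illustration, not a value from the study:

```python
# Back-of-the-envelope energy density from capacitance.
# The operating voltage is an assumed illustrative value.

def energy_density_wh_per_kg(c_f_per_g, volts):
    e_j_per_g = 0.5 * c_f_per_g * volts ** 2  # E = 1/2 C V^2, in J/g
    return e_j_per_g * 1000 / 3600            # convert J/g to Wh/kg

print(round(energy_density_wh_per_kg(148.6, 1.0), 1))  # -> 20.6 Wh/kg
```

That is well below battery energy densities, which is consistent with supercapacitors trading storage capacity for speed and cycle life.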

The team also demonstrated that a device containing this electrode allows for ion diffusion and charge transfer at temperatures as low as –70 °C. For comparison, the lowest working temperatures of commercial lithium-ion batteries and supercapacitors are typically around –20 °C to –40 °C – values that are limited, as mentioned, by the freezing point of the electrolytes.

The team will now collaborate with scientists at NASA to further characterize the devices’ low-temperature performance. “We will do this by testing them in environments that mimic those of the Moon, Mars and international space stations,” Lu tells Physics World.

The present research is detailed in Nano Letters.

Was a passing satellite mistaken for a distant gamma-ray burst?

There is growing concern among some in the astronomy community that a mysterious flash of light that was seen coming from the direction of a distant galaxy may have simply been a glint from an artificial satellite orbiting Earth.

The target galaxy GN-z11 has a redshift of 11.09, which places it about 400 million years after the Big Bang. It is the earliest known galaxy and its place in cosmic history makes it an important object for astronomers to study because it can tell us about star-forming conditions in the early universe.
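The quoted ~400 million years follows from integrating the Friedmann equation back to z = 11.09. Here is a rough check assuming Planck-like flat-ΛCDM parameters (H0 = 67.7 km/s/Mpc, Ωm = 0.31) – our assumptions, not values from the paper:

```python
# Age of the universe at redshift z, by integrating
# dt = dz / ((1+z) H(z)) with the substitution u = ln(1+z).
import math

H0 = 67.7 / 977.8    # Hubble constant converted from km/s/Mpc to 1/Gyr
OM, OL = 0.31, 0.69  # assumed matter and dark-energy density parameters

def age_at_z(z, u_max=12.0, n=20000):
    """Age in Gyr at redshift z, by trapezoidal integration of
    dt = du / H(z(u)); the integrand decays fast, so truncating
    at u_max (z ~ 1.6e5) is harmless."""
    u0 = math.log(1.0 + z)
    du = (u_max - u0) / n
    total = 0.0
    for i in range(n + 1):
        u = u0 + i * du
        zz = math.exp(u) - 1.0
        f = 1.0 / (H0 * math.sqrt(OM * (1.0 + zz) ** 3 + OL))
        total += f * du * (0.5 if i in (0, n) else 1.0)
    return total

print(round(age_at_z(11.09), 2))  # ~0.41 Gyr, i.e. roughly 400 Myr
```

Radiation is neglected here, which changes the result by only about a per cent at this redshift.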

While observing the galaxy in December 2020 with the MOSFIRE spectrometer on the Keck I telescope in Hawaii, a team of astronomers led by Linhua Jiang of the Kavli Institute for Astronomy and Astrophysics at China’s Peking University got a surprise. MOSFIRE is a multi-slit infrared spectrometer and a flash of infrared light was seen in one of the slits targeting GN-z11. Describing the flash in Nature Astronomy, the astronomers conclude that it could have been redshifted light associated with a long-duration gamma-ray burst (GRB) from the ancient galaxy.

If their findings prove to be correct, it would be the first GRB to be seen in the early universe and could therefore harbour important information about the nature of early, massive stars and their environments.

Much closer to home

However, not everyone is convinced, with several researchers now claiming the flash was nothing more than light reflecting from a passing satellite. Although Jiang and team were careful to try and rule out satellites, some scientists believe they have underestimated the probability of detecting a satellite in the MOSFIRE observations.

“The satellite explanation is, at a minimum, thousands of times more likely than the GRB explanation,” says Charles Steinhardt of the Niels Bohr Institute at the University of Copenhagen.

Along with colleagues in Copenhagen and at the University of Geneva, Steinhardt set about checking whether previous MOSFIRE observations had also recorded unusual flashes. From a search of 12,300 exposures, they found at least 27 other flashes, which they attribute to high-orbit satellites passing through the field of view.

“There are about 6000 known satellites and there are 41,253 square degrees in the full sky,” says Steinhardt. That works out at about one satellite per seven square degrees of sky, and he calculates the probability of a satellite passing over any given observation as being of the order of 10⁻³. In comparison, he puts the probability of witnessing a GRB as between 10⁻⁸ and 10⁻¹⁰.
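Steinhardt’s arithmetic can be reproduced directly. The MOSFIRE-like field of view used below to recover an order-10⁻³ per-exposure probability (about 6.1 arcmin on a side) is our assumption:

```python
# Back-of-the-envelope satellite odds from the numbers quoted above.
n_satellites = 6000
sky_sq_deg = 41_253

per_sq_deg = n_satellites / sky_sq_deg  # satellite density on the sky
print(round(1 / per_sq_deg, 1))         # -> 6.9: about one per 7 sq deg

# Assumed field of view of ~6.1 arcmin on a side (our assumption):
fov_sq_deg = (6.1 / 60) ** 2
print(per_sq_deg * fov_sq_deg)          # of order 1e-3 per exposure
```

At roughly 10⁻³ per exposure, a search of 12,300 exposures would be expected to turn up on the order of ten satellite crossings, broadly consistent with the 27 flashes found.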

Around midnight

In response to these claims, Jiang’s team points out that low-Earth orbit (LEO) satellites are only visible around sunset and sunrise. The GN-z11 flash was seen around midnight, ruling out LEO satellites. A group led by Guy Nir of the Weizmann Institute of Science in Israel has suggested that the flash could instead have been a glint from a tumbling satellite in high orbit. Nir has seen such glints in his own observations, suggesting that in the past they may have been confused “with cosmic rays, fast-moving asteroids, or remain a mystery.”

Nir calculates that there could be 200 glints per day coinciding with the locations of galaxies on the sky. Jiang’s team counters this in two ways. First, it argues that high-altitude or geosynchronous satellites would either produce an extended spectrum in the MOSFIRE observations, as opposed to the compact spectrum of the GN-z11 flash, or not be visible in the direction of GN-z11 from Mauna Kea. Second, it argues that Nir’s group has overestimated the amount of sky taken up by galaxies, although Nir believes that even if his group has overestimated, it is still not enough to make the GRB explanation more likely.

A potential culprit

However, not everything in Earth orbit follows a predictable, deliberate path. A team led by Michał Michałowski at the Adam Mickiewicz University in Poland believes it has identified the object that could have caused the GN-z11 flash: an upper stage of an old Russian Proton rocket. Using Space-Track.org, the team showed that it passed as close as 18 arcseconds to GN-z11 on the sky at the same time that MOSFIRE observed the flash.

Nir believes that Michałowski and colleagues have the right explanation: “There is certainly no need for a GRB, or even a glint off a rotating satellite, when there is a known piece of space junk right where the flash occurred.”

Crucially, points out Steinhardt, the GN-z11 flash displayed characteristics consistent with the solar spectrum, as though sunlight were being reflected. This is the same spectrum displayed by the 27 other MOSFIRE glint events that his team found in the archive data.

However, Jiang disagrees, telling Physics World that “we ruled out this possibility in our original analysis, and the line width [the length of the spectrum] is not consistent with what we observed.”

Counting the odds

Ultimately, says Steinhardt, it comes down to probability. If the probability of seeing a satellite in any given area of sky is on the order of 10⁻³, then for every thousand MOSFIRE observations, on average one satellite should be seen.

Satellites are being launched in greater numbers and keeping track of them all is becoming increasingly problematic. According to Nir, code can be written to remove known satellites from observations, and the scheduling of telescope time can be designed to avoid satellites, but incomplete tracking of satellites and space debris could lead to further contentious detections in the future.

Space hurricane observed in the Earth’s upper atmosphere

A space hurricane – complete with electron “rain” – has been detected in the Earth’s upper atmosphere for the first time, an international team of researchers has reported. With the requisite plasma and magnetic fields needed for such storms present in the atmospheres of planets across the universe, the researchers suggest that such phenomena should be commonplace.

The hurricanes with which we are more familiar form in the Earth’s lower atmosphere over warm bodies of water. As warm, moist air rises, it creates a pocket of low pressure near the ocean’s surface, which in turn sucks in the surrounding air, generating strong winds and creating clouds that lead eventually to heavy rainfall. As a result of the Coriolis effect, the inward rushing air is deflected on a circular path – forming the characteristic spiral shape of a tropical storm.

Hurricanes have also been spotted in the lower atmospheres of our neighbouring planets of Mars, Jupiter and Saturn, while similar phenomena – so-called “solar tornados” – have even been spotted churning the surface of the Sun. However, such swirling masses had never before been detected in the upper atmosphere of a planet.

The space hurricane in question was recorded above the North Pole, several hundred kilometres up in the ionosphere, in August 2014 by four satellites in the US Defense Meteorological Satellite Program. However, it was only revealed in the data by recent retrospective analysis led by researchers from China’s Shandong University.

Using three-dimensional magnetospheric modelling, the team was able to create an image of the phenomenon: a swirling, 1000-km-wide funnel composed not of air but of plasma. It rotated anticlockwise, sported multiple spiral arms, had a calm “eye” at its centre and lasted around eight hours before gradually breaking down.

“Until now, it was uncertain that space plasma hurricanes even existed, so to prove this with such a striking observation is incredible,” says paper author and space scientist Mike Lockwood of the University of Reading. “Tropical storms are associated with huge amounts of energy, and these space hurricanes must be created by unusually large and rapid transfer of solar wind energy and charged particles into the Earth’s upper atmosphere.”

Based on their model, the team believes that the phenomenon formed as the result of interactions between the incoming solar wind and the Earth’s magnetic field. Notably, the hurricane appeared during a period of low solar and geomagnetic activity – with the interplanetary magnetic field pointing northward – suggesting that such hurricanes may be frequently occurring phenomena in the atmospheres of both Earth and other planets.

Schematic of the space hurricane

“Vorticity is well-known to be associated with field-aligned current flows, but it is intriguing to see such an intense current during northward interplanetary magnetic field, when one would generally expect the currents flowing to be smaller,” comments John Coxon, a space physics researcher from the University of Southampton who was not involved in the present study.

As radar observations can directly measure the plasma flow speed from the ground, “it will be interesting to see whether radars see the large-scale vorticity that the authors report, and if not, why that might be,” Coxon adds.

“We have known for a while that interesting energetic interactions such as the ones described in the paper also exist during northward interplanetary magnetic field, but these are often overlooked as unimportant,” says Maria-Theresia Walach, a solar terrestrial physicist from Lancaster University, who was also not involved in the study. But she questions the name chosen by the team. “The phenomena that has been observed here is not new, so from a scientific perspective I find renaming it a ‘space hurricane’ not useful, despite being catchier than ‘high-latitude dayside auroral [HiLDA] spots’.”

Nevertheless, she says, “this study shows a very nice case-study of some of the interactions between the solar wind, the magnetosphere and the ionosphere at Earth.”

Lockwood, however, disagrees with this interpretation. “I have no doubt that the auroral spot at the centre of the event in our paper is what has been called HiLDA event, but this paper is only marginally about the auroral spot,” he tells Physics World. “What marks this particular event out is its longevity, the spiral arm structure that forms in the field aligned currents and aurora, the extremely large energy deposition at a time of minimal geomagnetic activity, and the lobe reconnection extending unusually far onto the nightside because of the unusual combination of interplanetary conditions.”

While the space hurricane would have had little tangible impact down on the Earth’s surface, the electron precipitation from such storms in the ionosphere does have the potential to disrupt communications, GPS satellites and radar operation, as well as potentially altering the orbital patterns of space debris at low orbital altitudes. This, the researchers concluded, highlights the importance of continued and improved monitoring of space weather.

The study is described in Nature Communications.

Is the answer to climate change lying beneath your feet?

About 15 years ago I bought a house in Cornwall. It wasn’t any ordinary house but a “future-technology” building that had been fitted with a ground-source heat pump and underfloor heating. It was a fantastic place to live in – well insulated and always cosy. As soon as my family moved in and we had unpacked, I simply had to find out how it all worked.

Built in 2001, the house had been fitted with a “GeoKitten” ground-source heat pump made by a local Truro-based firm called Kensa, which pioneered the adoption of the technology in the UK. The pump is a bit like a fridge that works in reverse, pulling heat in from liquid circulating through coils buried underneath my front garden. Coupled to the underfloor heating circuit, the pump made the house a fabulous place to live in, especially when winter storms were raging outside in Falmouth Bay.
Lovely warmth was sent by the pump around the house, with every room having an adjustable flow rate and therefore precise temperature control. But as it was the only form of heating, I was expecting a massive electricity bill. It turns out that heat pumps are incredibly efficient, with the coefficient of performance (COP) – the ratio of heat output to electrical-energy input – being about 3–4.5. My fears were unfounded.
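To see what a COP of 3–4.5 means for the bill, here is a minimal sketch (the 12,000 kWh annual heat demand is an assumed, illustrative figure, not my own consumption):

```python
def electricity_needed(heat_output_kwh: float, cop: float) -> float:
    """Electrical input (kWh) needed to deliver a given heat output at a given COP."""
    return heat_output_kwh / cop

# An illustrative house needing 12,000 kWh of heat per year:
annual_heat = 12_000.0
print(electricity_needed(annual_heat, 1.0))  # direct electric heating: 12000.0 kWh
print(electricity_needed(annual_heat, 3.0))  # heat pump at COP 3:       4000.0 kWh
print(electricity_needed(annual_heat, 4.5))  # heat pump at COP 4.5:  about 2667 kWh
```

In other words, at a COP of 3–4.5 the pump delivers three to four-and-a-half units of heat for every unit of electricity bought – which is why the feared bill never materialized.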

In fact, Kensa has been a great UK success story. Founded in 1999 by two former marine engineers who’d spent years installing heat pumps on luxury yachts in the Mediterranean, the company’s products are now used in many domestic and commercial settings. Realizing they were local, I once called the owners, who kindly showed me round their factory. It was small back then, but the firm has grown and expanded. David Cameron even visited in 2009 before becoming prime minister.

Too hot to handle

Ground-source heat pumps are a great way of heating. Apart from being quiet and cheap to run, they need little maintenance and last for ages. Heat pumps will help the shift away from traditional gas- and oil-fired boilers – in fact, the UK’s recent 10-point plan for a “green industrial revolution” envisages 600,000 heat pumps being installed in Britain every year by 2028. That’ll save the equivalent of 71 million tonnes of carbon-dioxide emissions (16% of the country’s total).


The ground-source heat pump was first developed in the 1940s by an American inventor called Robert C Webber, after a bizarre incident in which he burned his hand on the hot outlet pipe from his domestic deep freezer. There are now countless such systems around the world, and study after study has shown that they are both effective and efficient. The reason why they’re not the number-one choice for heating buildings is simple: the up-front costs are high.

A conventional domestic gas boiler uses about 20 kW of power and costs £50 per kilowatt, whereas a comparable 17 kW ground-source system costs more like £500/kW. Of course, if your house is well insulated you need less capacity. Costs will also fall over time as demand for such heating systems increases (regulation will be vital for that to happen). I suspect that the eventual mature market price of ground-source heat pumps will be around the £100–£200/kW mark.
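The up-front comparison above can be sketched in a few lines; the £/kW rates and capacities come from the figures just quoted, while the £150/kW “mature market” rate is my own assumption within the £100–£200/kW range:

```python
# Up-front cost comparison using the quoted capacity and price-per-kW figures.
gas_boiler = 20 * 50        # 20 kW gas boiler at £50/kW
ground_source = 17 * 500    # 17 kW ground-source system at £500/kW
mature_market = 17 * 150    # same system at an assumed mature-market £150/kW

print(f"Gas boiler:    £{gas_boiler}")       # £1000
print(f"Ground-source: £{ground_source}")    # £8500
print(f"Mature market: £{mature_market}")    # £2550
```

At today’s prices the ground-source system costs over eight times as much up front; at a mature-market rate the gap shrinks to roughly a factor of 2.5.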

That’s roughly similar to the cost of air-source heat pumps, which transfer heat from the outside air to the inside of a building and are likely to be the main competitor to ground-source pumps if gas boilers are banned. Ground-source heat pumps, however, are harder to install: the coils (or “slinkies”) have to be buried carefully, and you need to dig quite a few large holes, which is tricky for anything other than a new or refurbished property.

But if your house is well insulated, as it should be, then you can expect to save money compared to an LPG boiler within seven years. Unfortunately, compared to a mains-gas system, there’s no payback at all over that time with a ground-source heat pump – gas, despite being a carbon-dioxide-producing fossil fuel, is currently that cheap.

And there’s the rub. People are generally poor at making long-term decisions based on a potential future payback. If it’s a choice between saving money now or in the future, most of us can only think of the here-and-now. And unless customers want to pay extra for environmentally friendly systems, house builders have no reason to invest in them. That’s why new regulations, such as those outlined in the UK’s 10-point plan, are so vital. They will force the construction industry towards this green technology.

Zeroing in 

The beauty of ground-source heat pumps is that they can be fitted even in built-up areas where space is at a premium. A great example is the new headquarters of the Institute of Physics (IOP) in King’s Cross, London. It won a CIBSE building-performance award in 2020 for its green technology, which includes photovoltaics, LED lights and an advanced ground-source heat pump consisting of 10 vertical boreholes extending 60 m below the building – the first time such a system has been used anywhere in the UK.

After its first full year of operation, the system had a COP rating of 3.4 – a figure that’s likely to improve still further as the system gets tweaked and optimized. It’s a success story for the IOP, proving that the physics community is leading by example. I believe green technology and growth go hand in hand, and that, with sufficient focus, the UK can meet its net-zero-carbon goal by 2050. Thanks in part to heat pumps, we have the means to tackle the threat of climate change, which is one of the most enduring dangers that we face.

Quantum mechanics gives new insights into the Gibbs paradox

Entropy has been a subject of debate among physicists ever since it was formulated in classical thermodynamics some 150 years ago. One such debate centres on the so-called Gibbs paradox, in which the entropy of a system seems to depend on how much an observer knows about it. Astounding and confounding the physics community when it was first put forward by the American physicist Josiah Willard Gibbs in 1875, the paradox has since found numerous resolutions, albeit mainly in the classical setting with ideal gases.

Researchers at the University of Oxford and the University of Nottingham, UK, have now shed light on what the Gibbs paradox may look like in the quantum realm. By leveraging quantum effects, they show that more work can be extracted from a system than would be possible classically. Their result lays the theoretical groundwork for an experimental demonstration in the future, and could have applications in the burgeoning effort to manipulate large quantum systems.

The classical Gibbs paradox

The classical Gibbs paradox takes the form of a thought experiment involving a box with a partition that separates two bodies of gas. When the partition is removed, the two gas bodies mix spontaneously. To an informed observer who can distinguish the two gas bodies, the system’s entropy increases. On the other hand, for an ignorant observer who cannot discern any differences between the two gas bodies, there is no visible mixing and the entropy remains unchanged.

This difference of opinion has physical significance, since work can be extracted through the mixing process when the entropy increases. That suggests the system’s entropy should be an objective quantity – something that is difficult to reconcile with the existence of different outcomes for the two observers. Gibbs, however, noted that the extraction of work depends on the experimental apparatus of the observer. Hence the informed observer can extract work, whereas the ignorant observer cannot – making each observer’s reality consistent with the entropy change they witness.
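For concreteness, the standard ideal-gas numbers behind this thought experiment (textbook results, not spelled out in the article) for two gases of $N$ particles each mixing at temperature $T$ are:

```latex
% Entropy of mixing seen by the informed observer (distinguishable gases):
\Delta S_{\text{informed}} = 2 N k_B \ln 2
% Entropy change seen by the ignorant observer (apparently identical gases), classically:
\Delta S_{\text{ignorant}} = 0
% Maximum work extractable isothermally from the mixing process:
W = T \, \Delta S = 2 N k_B T \ln 2
```

The informed observer can in principle extract the work $W$ using semi-permeable pistons; classically, the ignorant observer’s zero entropy change means zero extractable work.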

Quantum effects

In the new work, the Oxford–Nottingham team considered how quantum effects such as superposition would affect the thought experiment. As in the classical case, the informed observer witnesses an entropy increase. For the ignorant observer, however, there is a marked difference after transitioning to the quantum realm. Although they are still unable to distinguish the two gases, they, too, can now witness an entropy increase. In the macroscopic limit, this entropy increase can even become as large as the one the informed observer perceives – the maximum possible discrepancy with the classical case.

Though the result might seem surprising at first, the researchers behind it say that it is a stark reminder that the classical limit is not always the same as the macroscopic one. “The classical limit is not just about large particle numbers, but also about limited degrees of control,” Benjamin Yadin, Benjamin Morris and Gerardo Adesso explain in an e-mail to Physics World. By giving the ignorant observer complex control over microscopic quantum degrees of freedom, they add, that observer becomes able to observe quantum effects at the macroscopic scale.

While some consider quantum mechanics to be a resolution to the classical Gibbs paradox, Yadin, Morris and Adesso note that their result indicates otherwise. “Our work shows that quantum effects can add an additional layer of seemingly paradoxical behaviour,” they say. They emphasise that their result is impossible in classical physics, as it relies on the symmetry requirements of bosons and fermions – a property not found in classical mechanics.

The researchers are now working on a proposal for demonstrating this effect experimentally. They explain that doing so requires a degree of quantum control, which may be possible in optical lattices and Bose-Einstein condensates. In the long term, they believe it could be possible to use this theory to build an effective quantum heat engine, one that could operate in regimes where a classical heat engine would fail.

“The question of how quantum features of identical particles may be harnessed for thermodynamical advantages is currently gaining a lot of interest – and we would like to see our work inspire other novel ideas in this area,” Yadin, Morris and Adesso conclude.

The research is reported in Nature Communications.

Copyright © 2025 by IOP Publishing Ltd and individual contributors