Irregularly shaped Moon dust creates complex scattering effects

The Moon’s surface is covered with tiny rock grains that formed during eons of high-velocity meteorite impacts. The shape of these grains affects how the lunar surface scatters light, and researchers in the US have now analysed these shapes in unprecedented detail. The results of their study – including the first computations of the optical scattering properties of nanosized Moon dust – should make it possible to create better models of the colour, brightness and polarization of particles on the Moon’s surface, and to understand how these quantities change as the Moon goes through its phases.

Researchers have been studying Moon dust ever since the first samples were brought back to Earth during the Apollo 11 mission in 1969. Initial reports found that the average size of particles in the lunar soil, or regolith, is around 50 μm, with only 14% of particles measuring less than 10 μm across. Newer measurements, however, indicate that submicron particles are also present, including many particles in the 100 nm to 1 μm range.

In either case, the dusty regolith is fundamentally different to soils found on Earth, says Jay Goguen, a senior researcher at the Space Science Institute in Boulder, Colorado and a co-author of the study. Lunar dust particles are also responsible for some unusual visual phenomena. When these particles become electrostatically levitated into the tenuous lunar exosphere, sunlight scatters off them, producing effects that the Apollo astronauts experienced as streamers, horizon glow, zodiacal light and crepuscular rays.

Measuring particle shape

In the new experiments, Goguen and colleagues at the US National Institute of Standards and Technology (NIST), the University of Missouri-Kansas City and the Air Force Research Laboratory measured the shape of particles between 400 nm and 1 μm in size using a method called X-ray nano computed tomography (XCT).

Like other forms of CT scanning, this non-destructive imaging technique uses X-rays to generate images of cross-sections (2D slices) of a 3D object. In-house software allowed the team first to build up a 3D image from such slices, and then to convert this data into a format where units of volume (or voxels) are classified as being either inside or outside the particles. From these segmented images, the researchers identified the 3D particle shapes and fed the voxels making up each particle into an open-source electromagnetic solver called Discrete Dipole Scattering (DDSCAT), which they used to compute the light scattered from each particle in the visible to infrared frequency range.
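
To make the pipeline concrete, here is a minimal Python sketch of the segmentation step (not the team’s in-house software): it thresholds a reconstructed XCT volume into inside/outside voxels, labels connected particles, and exports one particle’s voxels as a dipole list in the spirit of a DDSCAT shape file. The random data, the threshold value and the simplified file layout are all illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

# Stand-in for a reconstructed XCT volume; real data would be a stack of
# 2D slices from the instrument.
volume = np.random.rand(64, 64, 64)

# Segment: classify each voxel as inside (True) or outside (False) a
# particle. The 0.95 threshold is purely illustrative.
inside = volume > 0.95

# Group touching "inside" voxels into individually labelled particles.
labels, n_particles = ndimage.label(inside)

# Export one particle's voxels as a dipole list, in the spirit of a
# DDSCAT shape file (the real format carries a longer header).
particle = np.argwhere(labels == 1)
with open("shape.dat", "w") as f:
    f.write(f"{len(particle)} dipoles\n")
    for i, (x, y, z) in enumerate(particle, start=1):
        f.write(f"{i} {x} {y} {z} 1 1 1\n")  # index, position, composition
```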

Infinite irregularities

The team focused on these small, wavelength-scale particles because of the crucial role they play in determining the intensity and polarization state of light scattered from the lunar surface (and indeed from the surface of other planetary bodies). Previous studies have shown that the way lunar dust particles scatter light depends on their size, shape, composition, surface roughness and how densely they are packed together. While these earlier studies tried to account for irregular particle shapes (using, for example, Gaussian random shape generators for computational simulations), Goguen says the actual 3D morphologies of these particles were often overlooked.

“There are an infinite number of ways that a particle shape can be ‘irregular’”, he explains. “The goal of our new study was to use experimentally measured 3D shapes of Moon dust particles collected by Apollo 11 and computationally analyse how these specific measured particle shapes scatter light.”

Highly sensitive to shape

Thanks to this approach, the researchers were able to link the shape of the particles to their optical scattering characteristics with greater accuracy. Their results show that the wavelength of light most efficiently scattered by a dust particle – the resonance wavelength – is highly sensitive to the particle’s shape, Goguen says. “For the lunar dust grain shapes, the resonance wavelength is 20% smaller than that for equivalent sized spheres,” he tells Physics World. “The lunar dust shapes also scatter light slightly more forward (towards the direction of propagation) than the spherical grains.”

The researchers, who detail their work in IEEE Geoscience and Remote Sensing Letters, now plan to study a wider range of shapes and sizes of lunar particles, including some that are more representative of the lunar highlands visited during the Apollo 14 mission.

Arm Holdings faces uncertain future – and why it matters to you

Did you know that your smartphone has anything between 20 and 30 processors inside it designed by a British company? Arm Holdings doesn’t build the chips, but licenses its technology to virtually every semiconductor company on the planet. By the end of 2020, an estimated 160 billion processors had been built with its intellectual property (IP). They’re used in everything from your phone’s main processor to the WiFi chip, Bluetooth and even the battery charger.

To find out how this came to be and why it matters, let’s wind back to the late 1970s when computers were still big, expensive, remote devices. I was at school at the time and remember the faff of posting off punch cards to a distant computer for processing. They’d come back in the mail two weeks later – invariably with a syntax error on line one so you’d have to start again from scratch. You can imagine the magical feeling when my school bought its first computer – a Commodore PET – in 1979.

So when a device called the ZX80 came on the market in 1980 for just £99 – an order of magnitude less than the PET – I simply had to get my hands on one. Launched by the entrepreneur Clive Sinclair, it was the first of many home computers sold in Britain and was followed in 1981 by the BBC Micro. That machine was bought in huge numbers and even had an accompanying BBC TV show, which explained to an eager public how computers would change the way we live and work.

The tender to build the BBC Micro had been won by Acorn Computers, a company co-founded in Cambridge in 1978 by the inventor Chris Curry and the physicist Hermann Hauser, who had just completed his PhD at the Cavendish Laboratory. Acorn gave the BBC exactly what it wanted for the TV series, prototyping a machine in just five days. More than 1.5 million units were eventually sold, helping Acorn to reach a turnover of £100m by 1983. The company was floated on the UK stock exchange that year with a market capitalization of £135m, with Hauser’s and Curry’s stakes in it worth £64m and £51m respectively.

Going for growth

Acorn was, however, keen to build serious business machines that could take on the Apple II and the ubiquitous IBM PC, which had dominated the business market since its launch in 1981. Looking for a suitable microprocessor, Acorn approached Intel to use its 286 processor but didn’t like the package pin-out, so asked to buy the chip only. When Intel refused, Acorn decided to build its own chip.

Two Acorn engineers – Steve Furber and Sophie Wilson – decided to focus on “reduced instruction set computer” (RISC) processors, the ideas for which had been developed by John Hennessy at Stanford University and David Patterson at the University of California, Berkeley. Smaller and faster than conventional “complex instruction set computer” (CISC) processors developed by the likes of Intel, RISC processors needed more memory and a more complex compiler. But with memory costs falling quickly, Acorn decided to adopt the new approach and the Acorn RISC Machine (ARM) project began in October 1983.

The first ARM microprocessor chip was tested in April 1985, requiring less than 5% of the power of a comparable CISC processor, which meant it could be put in a cheap plastic package without overheating. Later, with Apple wanting to use ARM in its forthcoming Newton mobile device – but not wanting the chip produced by a competitor – Acorn decided to spin out ARM as a joint venture with Apple and another US firm (VLSI Technology) in 1990.

Renamed Advanced RISC Machines, it developed a business model in which other companies were allowed to build ARM chips by paying an upfront cost and a modest royalty fee per chip (usually a few per cent). Rivals could therefore manage their own supply chains, knowing they had access to advanced and widely used processor technology under reasonable terms. Attracted by the low power consumption, Nokia (then the market leader in mobile phones) used the ARM chip, as does Apple in its iPods, iPhones, iPads and, more recently, its computers too.

Where next?

In 1998 ARM floated on the London stock exchange and became the world’s most successful licensing firm, dealing even-handedly with every semiconductor manufacturer – “the Switzerland of semiconductors”, as Hauser puts it. Eventually, in 2016 Japanese telecoms firm SoftBank Group bought ARM for £23.4bn, seeking to become the leader in the “internet of things” and machine learning. There was some resistance to the sale from regulators, but ARM’s neutral “Switzerland” status was retained, so the deal went ahead (with ARM rebranded Arm in 2017).

All seemed to be going well until the US chip maker Nvidia – one of Arm’s licensees – announced plans last year to buy Arm from SoftBank for $40bn. Most other licensees oppose the deal, not least because Nvidia is American, which means that Arm could be barred from licensing its technology to, say, Chinese companies if the US government so decreed. Hauser has already written an open letter to the British prime minister, strongly objecting to the acquisition.

The UK’s Competition and Markets Authority (CMA) is investigating the proposed takeover, focusing on the potential impact on competition in the UK and whether Arm, if sold, would have an incentive to “withdraw, raise prices or reduce the quality of its IP licensing services to Nvidia’s rivals”. I expect the CMA to block the deal, given the huge opposition from almost all licensees and sovereign nations. But if the acquisition does go ahead, Arm’s business model may be damaged and its technology, I fear, may be turned into a political weapon. I hope the CMA makes the right decision.

Artificial intelligence technologies can reinforce inequalities

Computer technologies are often viewed as inanimate tools for improving our lives. Yes, there have always been issues around access, and there have always been some people who have used computers for harmful purposes. But the technology itself has long been considered free from human biases. This video explains why the idea of computer neutrality is no longer tenable.

We’re living in an age of digital information, where algorithms underpin many aspects of our lives. Increasingly, private companies and public authorities are using machine learning and artificial intelligence (AI) to make processes more effective. But in practice, the technologies are built by humans so their designs and functioning can reflect existing inequalities in society.

Physicists also use AI and machine learning in academic research, and many tech companies hire physicists. For these reasons, physicists can play a key role in better understanding the issues and regulating these emerging technologies. To find out more, take a look at the article “Fighting algorithmic bias in artificial intelligence”, originally published in the May 2021 issue of Physics World.

Graphene oxide fibres fuse and fissure

Researchers in China have succeeded in assembling graphene oxide fibres using a process normally only seen in biological systems. The new process, which mimics cellular fusion and fission, could find use in applications such as the actuators or “artificial muscles” used in miniaturized medical devices, robotics and smart textiles.

Materials that respond to environmental changes in the same way that natural materials do are widely seen as ideal building blocks for emerging electronic devices. One natural mechanism that researchers are especially keen to mimic is biological self-assembly and, in particular, cellular fusion and fission. In fusion, two or more cells merge into one, while in fission they separate into two or more parts. Both processes are triggered by stimuli such as light, temperature or humidity.

Reversible solvent-triggered process

In the new work, Chao Gao of Zhejiang University and colleagues from Xi’an Jiaotong University began by assembling microfibres of graphene oxide (an oxidized version of carbon’s one-atom-thick form) using a technique called wet-spinning. The team chose this material because its super-flexible nature makes it relatively easy to wet-spin into fibres, while its oxygen functional groups make it chemically reactive. The resulting fibres have an outer “shell” that restricts the movement of the graphene oxide sheets.

When the researchers immersed the fibres in a suitable solvent, they found that the fibres self-assembled into a “hierarchical” yarn – that is, a yarn in which the same base structure repeats at different length scales – containing thousands of individual fibres. The team could also reverse this process by immersing the fibre assembly in water or polar organic solvents, thereby mimicking both parts of the biological fusion-fission cycle.

To understand what was happening at the scale of individual fibres, the researchers used optical and scanning electron microscopes to observe the fusion-fission processes. They found that when the fibres are placed in water or polar organic solvents, they swell up, dramatically increasing in volume. The fibres’ elasticity is enhanced too, and the shape of the fibre shells reversibly switches between a wrinkled tube-like state and a flatter cylindrical state through swelling and deswelling. Gao and colleagues explain that this switching creates a transient fibre interface, leading to cyclic self-fusion and self-fission of an arbitrary number of graphene oxide fibres.

“A versatile strategy”

The researchers, who describe their work in Science, say that the fusion-fission behaviour they observed is a “versatile strategy” for designing functional responsive materials. Since graphene oxide fibres can easily be made to conduct electricity (via chemical reduction), the team argue that these fibres show promise for applications such as sensors, electronic components, smart textiles and actuators.

Rodolfo Cruz-Silva of Shinshu University in Japan and Laura Elias of Binghamton University in the US, who were not involved in the study, note that the new method is much less complex and involves fewer components than natural fusion-fission processes. Nevertheless, they argue in a related Perspectives article that the reversible assembly of graphene oxide fibres does indeed mimic nature, and thus “holds the refreshing potential to move the field forward”.

Gao and colleagues now plan to investigate the fusion-fission mechanism more carefully. “We also hope to explore applications in different areas,” Gao tells Physics World. The reversible fusion and fission property they discovered, he adds, “may help push forward the versatility of fibres like the ones we have studied”.

Lasers peer into a mysterious region of supercooled water

In an experimental first, scientists in the US have studied the dynamics of liquid water at temperatures below 230 K. Greg Kimmel and Loni Kringle of the Pacific Northwest National Laboratory in Richland, Washington used ultrafast laser pulses to “stop and start” the evolution of supercooled water in the nanoseconds before it froze, performing measurements in a temperature region that has been inaccessible to previous experiments. A paper describing their results is published in Proceedings of the National Academy of Sciences and suggests that the unusual properties of water might be attributed to the exchange of molecules between two coexisting liquid phases.

Water has more than 60 unusual properties that differentiate it from other liquids, including high heat capacity and a density that decreases upon freezing. There is evidence that these anomalies originate in the supercooled region, but despite decades of research, this remains unproven. Experiments on supercooled water are made almost impossible by a region between 160 and 230 K, which water researchers call “no man’s land”, where water crystallizes almost instantly.

Bridging the gap between theory and experiment

Kimmel has been studying supercooled water for more than two decades. Last year, Kimmel’s group showed, using an ultrafast laser heating technique, that water in the 160–230 K region always forms an equilibrium liquid before it crystallizes and that this liquid is a mixture of two structures, one high density and one low density.

Theorists have long predicted these two phases, but, as Kringle describes: “There is growing consensus that the anomalies of water, that are observable above 0 °C but become more pronounced upon supercooling, are related to the presence of these two structures but so little is known experimentally.”

The laser heating technique is fast enough that dynamics can be resolved as well as structure. As a supercooled water molecule moves through the liquid, it switches between high- and low-density motifs and the researchers wanted to know how this would affect the relaxation of the fluid.

Kringle and Kimmel entered this mysterious temperature region from below, heating amorphous (non-crystalline) ice at 70 K with laser pulses at rates of billions of degrees per second. The laser melted the water, but the heat dissipated after a few nanoseconds, cooling it so rapidly that the liquid structure was “locked in”. The fraction of low- and high-density water present after each laser pulse was measured with infrared spectroscopy, building up a series of “snapshots” of the liquid as it reached equilibrium.

Different densities, different dynamics

Like its liquid counterpart, amorphous ice can be high or low density. The researchers found that supercooled water relaxed to equilibrium more slowly if it started as low-density ice, though the final structures were similar. For both high- and low-density ice, they also found regions where the relaxation profile was a stretched exponential, which indicates molecules moving with lots of different speeds.

Whether a water molecule can switch between structures on its way to equilibrium depends on how easily it can navigate the potential energy landscape. The researchers developed a model of exchange between potential wells where they took the fractions of high- and low-density water at each stage of the experiments and calculated the switching rates that would keep the system at chemical equilibrium. They found that this model fitted the experimental data surprisingly well, reproducing the stretched exponential and predicting that a smaller number of deeper minima makes it more difficult for low-density water to structurally evolve.
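
The logic can be illustrated with a short Python sketch (illustrative rates, not the authors’ model). A single pair of exchanging states that obeys detailed balance relaxes as a pure exponential, so fitting a stretched exponential exp[−(t/τ)^β] returns β ≈ 1; the β < 1 values seen experimentally therefore signal many wells with a spread of switching rates.

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-state exchange between high-density (H) and low-density (L) water.
# The rates are assumed values, chosen to satisfy detailed balance at the
# assumed equilibrium fraction h_eq.
h_eq = 0.3                            # equilibrium high-density fraction
k_hl = 1.0e8                          # H -> L rate (1/s)
k_lh = k_hl * h_eq / (1.0 - h_eq)     # L -> H rate from detailed balance

t = np.linspace(1e-12, 50e-9, 500)    # nanosecond timescale
h0 = 0.9                              # start far from equilibrium

# A single well pair relaxes exponentially with rate k_hl + k_lh.
h = h_eq + (h0 - h_eq) * np.exp(-(k_hl + k_lh) * t)
decay = (h - h_eq) / (h0 - h_eq)      # normalized relaxation function

def stretched(t, tau, beta):
    """Stretched exponential: beta < 1 implies a spread of rates."""
    return np.exp(-(t / tau) ** beta)

(tau, beta), _ = curve_fit(stretched, t, decay, p0=(5e-9, 1.0))
print(f"tau = {tau:.2e} s, beta = {beta:.2f}")  # beta ~ 1 here
```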

A new perspective on supercooled water

For their conclusions to be valid, the researchers need to show that the potential energy landscape of amorphous ice is equivalent to its liquid counterpart. New research on supercooled water is always contentious, but Kringle described her surprise at how well the potential energy landscape model fitted their data, saying “While it’s not perfect, it does provide a starting point for understanding how the transition between two structural motifs results in stretched exponential relaxation.” Certainty is hard to come by below 0 °C, but this research suggests a link between water’s dynamics and its unusual structure.

‘Keyhole surgery’ could reduce environmental burden of metal extraction

A new “keyhole surgery”-style mining technique could allow metals to be extracted from underground ore bodies without the need for vast physical excavations. The approach, which is based on electrokinetics and was developed by an international team of researchers, could reduce the environmental impact of mining while making deep ore deposits more accessible.

Industrial-scale mining is deeply damaging to the environment. Not only is the physical excavation of ore-bearing minerals highly energy-intensive, generating around 10% of all energy-related greenhouse gas emissions in 2018, but it also produces vast quantities of waste. Globally, mining waste amounts to an estimated 100 gigatonnes per year, in the form of both overburden and the commercially useless “gangue” that surrounds valuable ore. This gangue is often highly toxic as well, meaning that disposing of it risks further environmental contamination.

“The current mining paradigm can be considered inherently unsustainable,” summarizes Rich Crane, a geochemist in the Camborne School of Mines at the University of Exeter, UK, and an author of the new study. Demand for copper, for example, is expected to increase by 275–350% by the year 2050, yet freshly discovered deposits are increasingly of lower ore grade and found at greater depths. Mining these deposits in the traditional way would thus entail removing hundreds of metres of overburden.

Electrokinetic extraction

In their study, Crane and colleagues demonstrated an alternative approach based on electrokinetics (EK). This method, which is already used to extract metals from fly ash, soils and wastewater sludge, involves applying a direct current between two electrodes to drive the movement of dissolved charged species through the material between them, with metal ions flowing towards the cathode.
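
The drift at the heart of the method is easy to estimate. In the sketch below (all numbers are illustrative, not taken from the study), a dissolved ion in an electric field E migrates at v = uE, where u is its ionic mobility; in intact rock the effective mobility is far smaller, which is why recovery timescales stretch from days to months.

```python
# Back-of-the-envelope electromigration estimate (illustrative numbers).
CU_MOBILITY = 5.6e-8    # m^2 V^-1 s^-1, Cu2+ in free aqueous solution
E_FIELD = 100.0         # V/m, assumed field between borehole electrodes
ROCK_FACTOR = 0.01      # assumed reduction of effective mobility in rock

v_free = CU_MOBILITY * E_FIELD   # drift speed in open solution (m/s)
v_rock = v_free * ROCK_FACTOR    # crude effective speed in intact ore

SPACING = 1.0                    # m, assumed electrode spacing

def transit_days(v):
    """Days for an ion to drift across the electrode spacing."""
    return SPACING / v / 86400

print(f"~{transit_days(v_free):.0f} days in solution, "
      f"~{transit_days(v_rock):.0f} days in rock")
```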

To adapt the method to work with intact, hard rock bodies, the researchers added an element of another technique, known as in situ leaching, which uses an acid to selectively dissolve the target metal from an ore deposit. In this way, metal might be recovered while bypassing the overburden and leaving most of the gangue in the ground. Then, once extraction is complete and the electric field is switched off, the acid is effectively “sealed” inside the rock – which remains essentially unchanged from a geotechnical perspective, minimizing the risk of subsidence.

“This new approach, analogous to ‘keyhole surgery’, has the potential to provide a more sustainable future for the mining industry,” Crane says. He adds that it could allow metal deposits to be recovered “while avoiding unwanted environmental disturbance and energy consumption”.

Proof-of-concept demonstration

In a laboratory-scale test of their approach, Crane and colleagues extracted 57% by weight of the copper from a 4 cm-wide sample of low-permeability sulfidic porphyry ore. Though the full experiment took 94 days, 80% of the material recovery occurred in the first 50 days, at a relatively constant rate. The team’s numerical modelling suggests that in the field, metal could be recovered at a rate comparable to that of traditional mining once the electrokinetic system was set up. The lead-up time would also be significantly reduced, as the need to remove substantial overburden would be replaced with simply drilling a grid of boreholes into which electrodes could be inserted. According to Crane, the new process could be particularly cost-effective for ore deposits lying deep within the Earth’s crust or in areas where the storage of solid mine waste is, as he puts it, “problematic”.

“Application of EK to solid rock, rather than particulate soils or wastes, is certainly a novel approach,” says Mike Harbottle, a geoenvironmental engineer at Cardiff University, UK, who was not involved in the present study. However, Harbottle adds: “From experience in other applications there are plenty of challenges to come, not least the challenge of applying this sort of voltage gradient in the field and the resulting economic impact.”

Rodrigo Ortiz Soto, a chemical engineer from the Pontifical Catholic University of Valparaíso in Chile who was also not involved in the study, is more optimistic. “If this process is proven in field-scale experiments, it can shift the entire industry, and also can have applications in copper recovery from mine tailings and considerably extend copper availability,” he says.

The study is described in Science Advances.

Exploding stars alone cannot account for rapid heavy-element production, study reveals

Exploding stars alone cannot account for the abundance of heavy elements produced by the rapid neutron capture process, a new study has revealed. An international team of researchers, led by Anton Wallner at the Australian National University, came to this conclusion after analysing the abundances of plutonium and iron isotopes in a deep-sea crust sample. Their research suggests that other cataclysmic events, such as neutron star mergers, could be responsible for creating some heavy elements.

Elements heavier than iron form in astrophysical objects where nuclei are able to capture neutrons in succession. For about half of the heavy nuclides, this neutron capture occurs slowly in stellar cores in what is called the “s-process” of nucleosynthesis. The other heavy nuclides – including actinides such as plutonium – are created rapidly in much more violent environments via the “r-process”.

Exactly where the r-process can occur is a subject of some debate. Some astronomers argue that it can only occur within certain types of supernovae (exploding stars), while others suggest that violent events such as merging neutron stars must be at least partially responsible for the heavy elements around us.

Wallner’s team has shed new light on this debate by analysing a core sample of Earth’s crust, taken from 1500 m below the surface of the Pacific Ocean. It contains a geological record spanning the past 10 million years, and the researchers measured the abundance of two specific nuclides in the rock.

Iron and plutonium

One was iron-60, which is produced within the cores of massive stars, but is only ejected into space when the stars explode as supernovae. It has a half-life of 2.6 million years, so any iron-60 found in Earth’s crust must have been thrown out from supernovae relatively local to the solar system. The second nuclide the team looked for was plutonium-244, which can only be produced through the r-process. It has a half-life of 80.6 million years, so plutonium-244 in Earth’s crust can originate from far older, more distant events.
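
A quick decay calculation, using the half-lives quoted above, shows why the two nuclides probe such different histories: the surviving fraction after a time t is 2^(−t/t½).

```python
# Surviving fraction after time t: N/N0 = 2**(-t / t_half)
# (half-lives as quoted in the text; times in millions of years)
t = 10.0            # Myr, the span recorded in the crust sample
fe60_half = 2.6     # Myr
pu244_half = 80.6   # Myr

print(f"iron-60 remaining: {2**(-t / fe60_half):.1%}")         # ~7%
print(f"plutonium-244 remaining: {2**(-t / pu244_half):.1%}")  # ~92%
```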

Within their sample, Wallner and colleagues detected two distinct influxes of iron-60, suggesting that two local supernovae occurred in the past 10 million years. Each of these events also deposited smaller amounts of plutonium-244, with similar ratios between nuclides for each event. Although the data show that both nuclides are associated with exploding stars, the ratios of plutonium-244 to iron-60 measured for both events are lower than would be expected if the nuclide were produced in supernovae alone.

This suggests that plutonium-244 and other r-process nuclides are also made in astrophysical events other than supernovae. Among the most popular ideas is that r-process nuclides are produced during neutron star mergers – such as the event detected in 2017 by gravitational-wave and conventional telescopes. Future multimessenger observations of such mergers could therefore provide crucial information about the origins of heavy elements.

The research is described in Science.

Scientific-journal publishers announce trans-inclusive name-change policies

Several major scientific-journal publishers have launched policies that allow scientists to easily change their name on previous publications – a move that transgender researchers have been campaigning to introduce for years. In March Elsevier introduced a name-change policy that covers its more than 2500 journals, while similar initiatives were recently announced by IOP Publishing, which publishes Physics World, as well as the American Association for the Advancement of Science, the American Chemical Society, the Royal Society of Chemistry, PLOS and Wiley. The American Physical Society (APS), meanwhile, is expected to release its own policy soon.

For transgender scientists who change their name, their old name can be personally painful and can also reveal them as being transgender. Previously, scientists had therefore faced a decision between two bad choices – leaving older research off their publication record or risking discrimination. According to the APS LGBT+ Climate in Physics survey, published in 2016, transgender and non-binary physicists reported the highest levels of exclusionary behaviour, adverse climate and unsupportive policies. In the UK, meanwhile, almost a third of LGBT+ physical scientists have considered leaving their jobs because of discrimination and toxic workplace climates.

The recent journal policy changes have been driven by a group of transgender scholars belonging to the Name Change Policy Working Group, who have worked with publishers and individual journals to develop the new policies. Irving Rettig, an inorganic chemist at Portland State University, began campaigning for the change after he transitioned and found himself caught within the bureaucracy of the university, scientific societies and publishers. “Your name is really important in science because it is the identifier that is linked to your merit as a scholar,” notes Rettig. “If your academic work is fractured across multiple identities…that would be incredibly detrimental, not just to your reputation but to your access to grants, your access to collaborations, your recognition and visibility.” Rettig adds that, for other transgender scientists, the issue could be compounded later in their careers, when they could have a larger publishing record to correct.

The next step

With IOP Publishing’s new name-change policy, authors can update their name and other identifying information such as pronouns, headshots and e-mail addresses in all previous journal articles. The policy – which was specifically designed to address the issues transgender researchers face – can be used by authors who change their name for any reason, such as gender identity, marriage or religion. IOP Publishing’s policy is fully confidential and offers the option to change a name with or without a public notice. It also does not require proof of a name change, which can be a daunting and costly hurdle for scientists undergoing a legal name change.

“We wanted to ensure that authors could change their name on already published research without a cumbersome process,” notes Kim Eggleton, integrity and inclusion manager at IOP Publishing. “A more inclusive and equitable publishing environment is important to us, so we’re pleased to have made another step in the right direction.”

The working group hopes to now introduce a set of guidelines through the Committee on Publication Ethics – a non-profit organization that aims to define standards in the ethics of publishing – to set best practice and support smaller journals that may not have the resources to develop their own policies. Publishers and researchers are also considering changes to the wider publishing infrastructure. The next step could be to allow name changes on citation lists – a tricky feat given that articles can have hundreds of citations in other works.

More to do

Elena Long, a nuclear physicist from the University of New Hampshire who co-led the APS’s LGBT+ Climate in Physics survey, says the journal policy shift is important and much needed. “These policies coming into existence are finally going to start allowing trans scientists to bring more of their full selves and their full academic history to every place as they advance in their career, without having to out themselves and risking that extra level of discrimination,” says Long.

Both Rettig and Long now hope that with the shifts in digital publishing, systems can be redesigned in more fluid ways. For example, researchers have recommended tying publications to an author’s ORCID ID, which is a unique digital identifier, rather than a name. Most importantly, Long notes, there needs to be a move to recognize that diversity and inclusion is a fundamental part of science. “Physics has made a lot of progress, but a lot of that progress continues to be made by the people who are having to struggle through it,” says Long. “There’s still a long way to go.”

Wireless device eases blood-pressure monitoring for children in intensive care

Imagine a paediatric intensive care unit (PICU) with no beeping monitors and without tubes, wires and probes lining every inch of each patient’s body – it perhaps seems implausible. While the reality of ICU care today involves wired life-support equipment, a research collaboration centred at Northwestern University envisions a future free of such a daunting environment.

The research team has developed a wireless, skin-interfaced device for non-invasive blood-pressure monitoring, which it describes in Advanced Healthcare Materials.

Blood pressure monitoring

Tracking the blood pressure of children under intensive care is critical for monitoring their physiological well-being. Episodes of extremely low or high blood pressure can indicate life-threatening physiological changes such as limited cerebral blood flow.

The “gold standard” method for continuous blood-pressure monitoring of patients in critical care utilizes arterial lines (a-lines). Unfortunately, these catheters are invasive, painful to insert, and associated with risks of infection and restricted blood flow. A-lines are particularly difficult to administer to PICU patients: they are disproportionately large compared with children’s small arteries and highly restrictive in nature, often requiring the use of immobilizing accessories such as splints or braces.

The research team’s new device offers a non-invasive alternative to a-lines: a simple skin-interfaced tool for wireless monitoring of blood pressure. To achieve this, the device measures the patient’s heart rate and pulse arrival time (the time for a blood pulse to travel from the heart to the hand or foot), which are calibrated and converted into measurements of systolic and diastolic blood pressure. Doctors require both systolic and diastolic measurements to monitor cardiovascular function and the risk of blood-supply loss in the coronary arteries, a possibly life-threatening condition. The device can be interfaced with smart tablet applications for continuous monitoring and health alerts.

To analyse the proposed model for blood-pressure calibration, clinicians involved with the study collected data from 23 PICU patients. They determined that using a model that combines pulse arrival time and heart rate offered the best means to replicate a-line measurements.
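
As a rough picture of what such a calibration involves, the Python sketch below regresses a-line systolic readings on pulse arrival time (PAT) and heart rate (HR). The linear form, the inverse-PAT term and the data are hypothetical illustrations, not the model reported in the paper.

```python
import numpy as np

# Hypothetical per-patient calibration data:
# pulse arrival time (s), heart rate (bpm), a-line systolic BP (mmHg)
pat = np.array([0.18, 0.17, 0.20, 0.16, 0.19])
hr = np.array([110.0, 118.0, 102.0, 125.0, 108.0])
sbp = np.array([95.0, 101.0, 88.0, 107.0, 92.0])

# Least-squares fit of SBP ~ a*(1/PAT) + b*HR + c
X = np.column_stack([1.0 / pat, hr, np.ones_like(pat)])
a, b, c = np.linalg.lstsq(X, sbp, rcond=None)[0]

def estimate_sbp(pat_s: float, hr_bpm: float) -> float:
    """Convert the device's wireless PAT/HR stream to systolic BP."""
    return a / pat_s + b * hr_bpm + c

print(f"estimated SBP: {estimate_sbp(0.185, 112):.0f} mmHg")
```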

The results indicated that the devices and analysis could meet US Food and Drug Administration specifications for measurements of diastolic blood pressure, while measurements of systolic blood pressure fell just short of the specifications. The researchers note, however, that indwelling a-lines themselves can be subject to over- and underestimation of blood pressure, and suggest that a larger trial would elucidate the validity of their results.

Specialized materials

Over the past decades, medical-device development has moved in the direction of soft devices that replicate the environment in which they are used. For their novel skin-interfaced monitor, the researchers selected materials that are compatible with the sensitive and fragile skin of PICU patients. They used a soft hydrogel to interface the chest device’s electrodes with the skin surface.

The team chose a robust but soft polymeric material to encapsulate the device, and demonstrated the mechanical stability of this elastomer over 70 days of shelf storage. Importantly, the researchers also autoclaved the encapsulant to establish its compatibility with the steam sterilization technique used ubiquitously in clinics. They were able to improve adhesion between the hydrogel and encapsulant materials by adding a surfactant called Silwet L-77 to the latter. The addition of just 0.2 wt% Silwet increased the peel force required to separate the two materials by 52%.

The researchers were able to use their device to study each of the 23 patients involved in the study, many of whom had respiratory failure, liver failure or airway abnormalities. Furthermore, the device was able to measure haemodynamic changes in response to administration of lorazepam, methadone, hydromorphone and dexamethasone, four common drugs given to intensive care patients. This points to the utility of such a monitor in guiding clinical drug management.

Looking ahead, the team hopes that these wireless smart devices could be employed outside the ICU, especially in outpatient ambulatory and in-home settings. Interfacing the device with a tablet could provide clinicians with continuous critical blood-pressure data remotely. Finally, the researchers underscore the need to expand clinical investigations and examine in detail the causes of interpatient variability in measurements.

Commissioning and independent validation of Ethos™/Halcyon™ machines: Best practice

This presentation has been submitted for approval by CAMPEP for 1 MPCEC hour. The course has also been accredited by EBAMP as a CPD event for Medical Physicists at EQF Level 7 and awarded 9 CPD credit points.

In this webinar, experience and best-practice guidance for the commissioning, independent validation and annual QA of Ethos™ and Halcyon™ machines are presented, with a focus on efficiency and accuracy throughout the process.

During this presentation you will learn how to independently commission and validate the Varian® Ethos™ and Halcyon™ radiation therapy treatment machines, based on real hands-on experience from the field.

The presenter, Mark DeWeese, is an experienced medical physicist who has run a medical physics service company for more than 25 years. He has commissioned several of these machines and gained valuable insights that enable a precise, error-free and efficient workflow.

Mark DeWeese is a Medical Physicist, and President and Founder of Alyzen Medical Physics. As well as presenting for IBA Dosimetry, Mark works as a service provider for both Varian and Elekta. He has extensive experience with data collection and medical accelerator commissioning, and has commissioned five Halcyon units to date.
