
3D nano-vortices come into view

Vortices, domain walls and other magnetic phenomena behave in complex and dynamic ways, but limitations in imaging technology have so far kept researchers from observing them in more than two dimensions. Scientists in the UK and Switzerland have now found a way around this obstacle. According to physicist Claire Donnelly, who led the effort together with colleagues at the University of Cambridge, ETH Zurich and the Swiss Light Source, their new technique will give physicists a deeper understanding of how 3D magnetic materials work and how to harness them for future applications.

Changes in a material’s magnetization take place on the nanoscale in both time and space. Measuring these tiny, rapidly changing details is challenging, so scientists generally limit their studies to flat samples. While three-dimensional imaging has recently been demonstrated using techniques such as X-ray, neutron and electron tomography, the images obtained are static and do not show how the magnetic structures evolve over time.

One reason it is hard to capture magnetization changes in three dimensions is that the magnetization can point in any direction. This means that, when studying a 3D sample of magnetic material, the sample’s orientation with respect to its rotation axis needs to be changed partway through the measurement in order to measure all three spatial components (x,y,z) of the magnetization.

High-energy X-ray probe

That’s not easy to do, so Donnelly and colleagues instead used short pulses of high-energy synchrotron X-rays to probe the magnetic state of a 3D magnetic structure (a microdisc of gadolinium cobalt, in this case) in different directions. This technique, which the team have dubbed time-resolved magnetic laminography, enabled them to measure how the magnetic state evolves in response to an applied oscillating magnetic field, which the researchers synchronized to the frequency of the X-ray pulses.

Thanks to a specially developed reconstruction algorithm, the team obtained a seven-dimensional dataset: three dimensions for the position of the magnetization, three for its direction and one for time. The result is, in effect, a map of the magnetization dynamics at seven time steps evenly spaced over 2 ns, with a temporal resolution of 70 picoseconds and a spatial resolution of 50 nanometres.
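To picture how such a dataset might be organized – an illustrative sketch only, with hypothetical grid sizes rather than the team’s actual reconstruction format – think of a three-component vector field sampled on a 3D voxel grid at each time step:

```python
import numpy as np

# Illustrative layout of a time-resolved vector-field dataset
# (hypothetical grid sizes; not the team's actual file format).
n_t = 7                        # time steps, evenly spaced over the 2 ns field cycle
n_z, n_y, n_x = 20, 100, 100   # assumed voxel grid; 100 voxels x 50 nm = 5 um

# m[t, c, z, y, x]: component c in (mx, my, mz) of the magnetization
# at voxel (z, y, x) and time step t. "Seven-dimensional" counts the
# three position axes, the three vector components and time.
m = np.zeros((n_t, 3, n_z, n_y, n_x))
print(m.shape)      # (7, 3, 20, 100, 100)

dt = 2.0e-9 / n_t   # ~0.29 ns between frames (assumed spacing);
                    # 70 ps is the quoted temporal resolution
```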

In some ways, Donnelly says, the team’s approach is similar to computed tomography (CT), which is widely employed for 3D imaging in many areas, including medical scanning. Laminography involves measuring 2D projections of the magnetic structure for a number of different orientations of the sample. Importantly, the axis of rotation of the microdisc being measured is not perpendicular to the X-ray beam, so the team was able to access all three spatial components of the magnetization direction.
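One way to see why the tilted geometry matters – an illustrative numerical check, not the team’s reconstruction code – is that the magnetic contrast in each projection is sensitive only to the magnetization component along the beam, so the beam directions sampled in the sample’s own frame must span all three dimensions:

```python
import numpy as np

def rotation_about_axis(axis, angle):
    """Rodrigues rotation matrix for a rotation by `angle` about `axis`."""
    a = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

beam = np.array([0.0, 0.0, 1.0])   # X-ray propagation direction (lab frame)
tilt = np.deg2rad(45)              # axis tilted away from the beam (illustrative value)
axis = np.array([np.sin(tilt), 0.0, np.cos(tilt)])

# Beam direction expressed in the sample frame for a full set of rotation angles
dirs = np.array([rotation_about_axis(axis, a).T @ beam
                 for a in np.linspace(0, 2 * np.pi, 36, endpoint=False)])

# Rank 3: the projections probe all three magnetization components.
# With the axis perpendicular to the beam (tilt = 90 deg) the rank
# drops to 2, and one component is never measured.
print(np.linalg.matrix_rank(dirs))   # 3
```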

Towards a new generation of technological devices

The researchers visualized two main types of magnetization dynamics in their experiments: the 3D motion of magnetic vortices moving back and forth; and the precession of the magnetization vector. The movement of such structures had previously only been observed in two dimensions.

The new technique could help researchers better understand magnetic materials, Donnelly says. Well-known structures such as permanent magnets and inductive materials – both widely employed in sensing and energy production – could be studied with a view to improving their performance. Donnelly adds that there is also a growing interest in 3D magnetic nanostructures, which are predicted to have completely new properties and functionalities hard to achieve in their 2D counterparts. As well as providing insights into the physics of phenomena such as ultra-high domain wall velocities and magneto-chiral effects, these nanostructures might form the basis of a new generation of technological devices, boosting information transfer rates and data storage densities.

“This is a very exciting area of research,” she tells Physics World. “With our new technique, the magnetic materials community will now be able to measure and understand these systems – and hopefully exploit their properties.”

The research is described in Nature Nanotechnology.

‘Super-puff’ exoplanets put a ring on it

The apparent “puffiness” of some exoplanets could be due to Saturn-like rings, rather than envelopes of gas as was previously thought. That’s the view of astronomers Anthony Piro at the Carnegie Institution for Science and Shreyas Vissapragada at the California Institute of Technology, US, who came to this conclusion after simulating the transits of several “super-puff” exoplanets. Their analysis exposes two such exoplanets as likely candidates for having rings – a finding that could be confirmed after the upcoming launch of the James Webb Space Telescope (JWST).

As the list of known exoplanets expands, astronomers are identifying a growing number of bodies that appear to have remarkably large radii, given their relatively low masses. Nicknamed “super-puffs”, these seemingly ultra-low-density planets are typically anomalously cool, and are found in star systems with widely varying ages – meaning that most of them probably aren’t just young planets that haven’t yet fully formed.

To explain these enigmatic objects, some astronomers have proposed that they are surrounded by thick envelopes of gas. If this were the case, these envelopes could be expected to leave diverse absorption dips in the spectra of starlight passing through them. However, the super-puff spectra observed so far have been frustratingly featureless.

Not so puffy

Piro and Vissapragada propose a different explanation. In their view, super-puffs aren’t actually puffy, but are instead surrounded by rings. These rings dim the light of the planets’ host stars as they pass between the star and observers on Earth, creating the illusion of exoplanets with far larger radii. The pair tested this theory by simulating observations of Saturn transiting the Sun from the perspective of a distant star system. This revealed that Saturn would appear to be half as dense as it actually is if its rings weren’t accounted for.
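The size of the effect follows from simple scaling – a back-of-the-envelope check, not the authors’ full transit simulation. A planet’s mass is measured independently of the transit, while the transit fixes its apparent radius, so the inferred density falls as the cube of any radius inflation:

```python
# Back-of-the-envelope check (not the authors' full simulation):
# density ~ M / R^3, so appearing half as dense implies an apparent
# radius about 26% larger than the true one.
apparent_over_true_density = 0.5
radius_inflation = apparent_over_true_density ** (-1 / 3)
print(f"apparent radius = {radius_inflation:.2f} x true radius")   # ~1.26
```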

The duo also simulated the transits of a variety of known super-puffs, aiming to determine whether the transit observations could have been distorted by rings. They found that their hypothesis was consistent with the transits of some exoplanets, but not all of them: given their proximity to their host stars, many of the bodies would need denser, rocky rings rather than icy ones, which would limit the rings’ radii. In addition, the planets would need to spin fast enough to prevent their rings from warping – and tidal locking with the host star often rules out such rapid rotation.

These effects didn’t rule out every planet the duo considered. Of the exoplanets they analysed, Piro and Vissapragada concluded that Kepler-87c and Kepler-177c have the best chance of appearing puffy due to rings. Confirming this will require more precise photometric measurements than are currently possible, but such measurements should be within reach of the long-awaited JWST, which is now scheduled for launch in March 2021. If the predictions are confirmed, they could greatly improve astronomers’ understanding of how planetary systems form and evolve.

Low-dose chest CT doesn’t appear to damage DNA


Early detection and treatment of lung cancer are essential to reduce the mortality rate. According to the National Lung Screening Trial, screening high-risk patients with low-dose chest CT can reduce deaths from lung cancer compared with screening with chest X-rays. However, although low-dose CT delivers about one quarter the radiation dose of standard CT, its biologic effects remain unclear.

To investigate whether exposure to low-dose CT could increase the risk of radiation-induced cancers, a Japanese research team compared the number of DNA double-strand breaks and chromosome aberrations in peripheral blood lymphocytes following low-dose and standard-dose chest CT. They found that the low-dose CT scans used in lung cancer screening did not appear to damage human DNA (Radiology 10.1148/radiol.2020190389).

The study included 209 participants referred for chest CT, of whom 107 underwent low-dose CT and 102 standard-dose CT. The median effective dose was 1.5 mSv for low-dose CT and 5.0 mSv for standard-dose CT, with blood doses approximately 30% lower after low-dose CT than after standard-dose CT. The researchers – from Hiroshima University and Fukushima Medical University – took peripheral blood samples from all participants immediately before and 15 minutes after the CT exam, and analysed the blood lymphocytes in these samples.

To count the number of DNA double-strand breaks after CT, the researchers used immunofluorescent staining to visualize γ-H2AX, a marker of DNA double-strand breaks. They counted γ-H2AX foci in blood samples from 101 patients who underwent low-dose CT and 101 who had standard-dose CT. Before scanning, the median number of foci was similar in both groups. After CT, the median number of γ-H2AX foci showed a significant increase in the standard-dose group (from 0.11 to 0.16 foci per cell), while no significant increase was observed in the low-dose group (from 0.15 to 0.17 foci per cell).

The team also quantified the number of chromosome aberrations, which reflect both the radiation damage and the accuracy of DNA repair, in 95 low-dose CT and 92 standard-dose CT patients. The median numbers of chromosome aberrations before low-dose and standard-dose CT were 6.7 and 7.6 per 1000 metaphases, respectively. After CT, the number of chromosome aberrations per 1000 metaphases was 7.2 in the low-dose group and 9.7 for the standard-dose group.

To confirm the finding that standard-dose CT resulted in greater DNA damage than low-dose CT, the researchers studied 63 individuals who underwent both low- and standard-dose CT. The number of cells collected enabled analysis of γ-H2AX in 57 of these participants and of chromosome aberrations in 54.

After low-dose CT, the number of γ-H2AX foci per cell increased from 0.11 to 0.16, while the number of chromosome aberrations per 1000 metaphases grew from 7.1 to 8.0. After standard-dose CT, the number of foci increased from 0.11 to 0.20, and the number of chromosome aberrations increased from 7.8 to 9.2.

“We could clearly detect the increase of DNA damage and chromosome aberrations after standard chest CT,” says senior author Satoshi Tashiro from Hiroshima University. “In contrast, even using these sensitive analyses, we could not detect the biological effects of low-dose CT scans. This suggests that application of low-dose CT for lung cancer screening is justified from a biological point of view.”

Harnessing the power of the oceans, careers tips, and a close look at terahertz technologies

In this episode of the Physics World Weekly podcast, we delve into the March 2020 edition of Physics World magazine, taking a look at a feature examining how scientists hope to harness power from the motion of ocean waves, plus the magazine’s new interview-based graduate careers advice section.

We also talk about some of the many applications of terahertz waves, including ghost imaging, speeding up communications networks and searching for the origins of life on Earth.

Finally, we discuss how – in the light of increasing travel bans and restrictions on large gatherings – some conference organizers are looking to transition their scientific meetings from physical to virtual events.


The vagabond physicist


Not many people may know that the key experimental test of general relativity – the observation of the deflection of starlight by the Sun’s gravity during a solar eclipse – first occurred to Albert Einstein not in Zurich or Berlin, but in Prague. The city was then the capital of Bohemia, a region of the Austro-Hungarian Empire, prior to the post-war creation of Czechoslovakia in 1918. Einstein lived in Prague as a university professor of physics for a mere 15 or so months in 1911–1912, before moving back to Switzerland and then returning to Germany in 1914, where he published his general theory of relativity in 1915–1916.

In 1923 Einstein himself described the significance of his time in Bohemia, in a revealing foreword specially written for a Czech-language edition of Relativity: the Special and General Theory – his well-known booklet for the general reader, first published in German in 1916. Little-known even to the majority of Einstein scholars, the foreword (printed in Einstein’s German original followed by a Czech rendition by the publisher) has now been translated into English in Einstein in Bohemia – the deeply researched, wide-ranging and original book by historian Michael Gordin, which delves into Einstein’s relationship with Prague.

To quote Einstein: “I am happy that this small booklet…now appears in the national language of that country in which I found the necessary composure to gradually give a more definite form to the fundamental thoughts of the general relativity theory, which had been gathering already since 1908. In the quiet rooms of the Institute of Theoretical Physics of the German University in Prague on Viničná ulice, I came in 1911 to the discovery that the equivalence principle required an observable degree of bending of light beams by the Sun, without knowing that more than a hundred years earlier a similar consequence had been drawn from Newtonian mechanics in connection with Newton’s emission theory of light. In Prague I also discovered the result, still not definitely established, of the redshift of spectral lines.”

Einstein’s tribute to the German University, though factually accurate, is an indication of his complex attitude towards Prague – both when he lived there with his first wife and children, and during the rest of his life until his death in the US in 1955. Gordin calls it “odd – one might almost say tone-deaf”, because it emphasizes the minority German-speaking community in Bohemia rather than the majority Czech-speaking community, which had long existed together in a state of tension. As a German-speaker appointed in Prague for his achievements in German physics, Einstein cultivated few contacts with the Czech community, and treated Prague with a degree of disdain. Indeed, Gordin’s chapter focusing on Einstein’s stay is titled “Anti-Prague”. Only later, after the rise of Adolf Hitler, the Nazi oppression of Czech Jews and the German occupation of Czechoslovakia in 1938–1939, did Einstein become more culturally sensitive to the Czech community.

Physics, and the history of science, appear throughout Gordin’s book, as do the many physicists who influenced and interacted with Einstein, including Ernst Mach, Max Abraham and, crucially, Philipp Frank. The latter, on Einstein’s recommendation, took over Einstein’s Prague university position in 1912. Frank later escaped from the Nazis in 1938, and went on to Harvard University in the US. In 1948 he published an influential English-language Einstein biography with a section on “Einstein at Prague”.

But the dominant theme of Einstein in Bohemia is unquestionably biographical, set against a cultural and political background – recalling Gordin’s excellent earlier study Scientific Babel: the Language of Science from the Fall of Latin to the Rise of English. In this latest volume, Gordin’s declared intention is to fill a significant gap in existing biographies of Einstein, rather than to dwell on the history of relativity. The book’s penultimate chapter deals entirely with Czech reactions to Einstein over the past century, including of course the politically contentious Soviet-dominated period from 1948 to 1989. For instance, Gordin discusses at length Frank’s much-quoted assertion that the Prague-born, German-speaking, Jewish writer Max Brod (best known for his friendship with Franz Kafka) based his portrayal of Johannes Kepler on his personal observations of Einstein in Prague, in his acclaimed historical novel, Tycho Brahe’s Path to God, published in 1915.

According to Frank, “Whether Brod did this consciously or unconsciously, it is certain that the figure of Kepler is so vividly portrayed that readers of the book who knew Einstein well recognized him as Kepler. When the famous German chemist W Nernst read this novel, he said to Einstein: ‘You are this man Kepler.’” Yet as Gordin observes, Brod was “horrified” by Frank’s claim – with its unsourced anecdote about Nernst and Einstein – and he worked to dispel this supposed link, both before and after Einstein’s death.

As for the idea that Einstein’s unconventional personal behaviour, symbolized throughout today’s world by his violin playing, wild hair, rumpled sweaters and lack of socks, was at heart “bohemian” – Gordin has little truck with it. As he reasonably points out, this iconic image belongs to the Einstein of later years, not to the younger physicist in Bohemia, who was generally groomed in the way conventionally expected of a German professor in 1911. But here, perhaps, Gordin misses a trick. He does not mention that the English word “bohemian” is derived from the French bohémien, meaning “gypsy” – a word that Einstein often used to describe himself.

  • 2020 Princeton University Press 384pp £24.00

Iron rain on exoplanet is driven by huge extremes in temperature

It could be raining molten iron on some exoplanets, according to David Ehrenreich at the University of Geneva and an international team of astronomers. The team discovered evidence for the metallic precipitation in atmospheric spectra of the giant, ultra-hot planet WASP-76b that is about 390 light-years from Earth. Their findings could provide new insights into the exotic chemistry that plays out in the atmospheres of such hot gas giants.

“Hot Jupiters” such as WASP-76b comprise a widely studied group of giant exoplanets that orbit close to their host stars, where they can experience daytime temperatures exceeding 2000 K. Such conditions are extreme enough to break down molecules in the planets’ atmospheres into individual atoms, which recombine into molecules during the cooler nights. This effect should produce extreme differences in atmospheric chemistry between the day and night sides of these planets, but until now, astronomers had not confirmed the asymmetry through direct observations.

In the new study, Ehrenreich’s team has measured such a chemical gradient for the first time, using the ESPRESSO spectrograph at the Very Large Telescope (VLT) in Chile. The instrument can collect light from any combination of the VLT’s four 8-metre telescopes, giving the researchers sufficiently high spectral resolution to check for differences between the absorption spectra produced by the atmosphere on the day and night sides of WASP-76b as the planet passed in front of its host star. In particular, they looked for a dip in the star’s spectrum at the characteristic wavelengths absorbed by atomized, or “neutral”, iron.

Dusk and dawn

Because of the orientation of a transiting exoplanet, it is difficult to obtain its day- and night-time spectra. This is because (from our perspective) the day side is on the far side of the planet and the night side is always dark. However, Ehrenreich and colleagues could observe the atmosphere during dusk and dawn by focusing on the ring of starlight that passes straight through WASP-76b’s atmosphere. As Ehrenreich’s team had predicted, a neutral iron absorption line is visible on WASP-76b’s evening side, but not on its morning side.

The observation confirmed that neutral iron is indeed produced on WASP-76b during the hottest part of the day – but becomes far less abundant over the course of the night. Overall, these observations suggest that neutral iron in the atmosphere of WASP-76b condenses during the night to form clouds of liquid droplets – the most stable form of the recombined iron atoms. This could then cause molten iron to rain down on the exoplanet during the night, before being atomized again during the day. In the future, Ehrenreich’s team hope that the process could be understood in more detail through 3D global climate models.

The observation is described in Nature.

Dark-field microscopes made easy

Researchers at the Massachusetts Institute of Technology (MIT) have developed a new dark-field imaging technique that does not require specialized microscope components. The technique, dubbed “substrate luminescence-enabled dark-field imaging” (SLED), involves adding a mirrored substrate to the sample stage of a standard optical microscope, and its simplicity could make dark-field imaging more widely accessible.

Dark-field microscopy produces high-contrast images of samples such as blood cells, bacteria, algae and marine organisms that are often transparent and provide little to no absorption contrast. In a typical set-up, light is shone onto the microscope’s sample stage at a steep angle with respect to the sample surface normal. Because these highly oblique angles lie outside the maximum light-collection angle of the microscope’s objective lens, the instrument only collects light that the sample scatters into a cone centred on its optical axis. This scattered light creates an image in which the sample’s features appear bright against a dark background.
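The geometry can be made concrete using the objective’s numerical aperture – a worked example with assumed values, not figures from the MIT paper:

```python
import math

# Worked example with assumed values (not figures from the MIT paper).
# An objective only collects light within a cone set by its numerical
# aperture: NA = n * sin(theta_max).
NA = 0.45        # a typical dry objective (assumption)
n = 1.0          # imaging in air
theta_max = math.degrees(math.asin(NA / n))
print(f"collection half-angle ~ {theta_max:.0f} degrees")   # ~27 deg

# Dark-field illumination must arrive outside this cone - say at
# 60-80 degrees from the surface normal - so that only light scattered
# by the sample can enter the objective.
```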

To create such images, however, researchers need to fit standard optical microscopes with specialized filter cubes. They must also use dedicated objectives or condensers to shape the incident light cone, adding to the expense and bulk of the instrument.

A luminescent photonic surface

A team led by Matthias Kolle and Cecile Chazot of the Mechanical Engineering Department at MIT has now succeeded in integrating the components needed to create the cone of light for dark-field illumination into the surface on which the sample is placed. In the MIT instrument, this surface – the sample substrate – is made from a luminescent photonic material that emits light only at high angles as measured from the substrate surface’s normal. This light, Kolle explains, can only enter the microscope objective if it is scattered by the sample. Regions where there is nothing to scatter light (for instance, just water) will appear dark, while – as in conventional dark-field imaging – the features of small aquatic organisms, bacteria and other hard-to-image micron-sized objects appear bright. The substrate is thus “self-contained” and allows for dark-field contrast without the need for additional components.

How it works

The light emitted by the substrate is confined to high polar angle ranges thanks to the interplay between three different modules, Kolle says. The first module is a light source with a narrow spectral range (in this case, red). In the MIT experiments, this source consisted of core-shell semiconducting quantum dots (made from cadmium selenide/cadmium sulphide dispersed in a polymer matrix), but the researchers say that simple light-emitting diodes (LEDs) could work equally well.

This light source is positioned beneath a second module: a spectrally selective mirror that allows only light of specific wavelengths to pass through, and only in specific directions. The mirror is made from alternating nanoscale layers of transparent materials with different refractive indices; interference between reflections from the layer boundaries makes its transmission depend on both the wavelength and the angle of the incoming light.

“We tune this mirror so that it allows the red light generated by the quantum dots to pass through only at high angles (with respect to the surface normal),” explains Kolle. “If the light hits the mirror at the wrong angles, it bounces back and doesn’t escape from the substrate.”

Bragg reflector “gatekeeper”

In effect, this mirror – known as a Bragg reflector after the father-and-son team who established the underlying theory of how it functions – acts like a “gatekeeper”, Kolle says. It only permits light of a given colour to escape from the substrate at specific angles.
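A rough way to see how such a multilayer selects angles is the textbook thin-film relation for a Bragg stack, whose reflection band blue-shifts with the angle of incidence – the parameters below are assumptions for illustration, not the device’s actual design values:

```python
import math

# lambda(theta) = lambda_0 * sqrt(1 - (sin(theta) / n_eff)**2)
# Assumed parameters for illustration, not the actual device design.
lambda_0 = 700.0    # normal-incidence Bragg wavelength, nm
n_eff = 1.7         # effective refractive index of the stack

for theta in (0, 30, 60, 80):                              # angle of incidence, deg
    s = math.sin(math.radians(theta)) / n_eff
    print(theta, round(lambda_0 * math.sqrt(1 - s ** 2)))  # 700, 669, 602, 571

# If the stopband sits over the quantum dots' red emission line at
# near-normal incidence, it shifts off that line at large angles - so
# the red light can only escape the substrate at high obliquity.
```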

But what happens to the light that isn’t allowed to escape or light that is emitted by the quantum dots far from the Bragg reflector? For that, the researchers added a third module below the light source – a micropatterned mirror containing small wells around 4 microns across. This mirror bounces light back towards the top Bragg reflector and redirects it so that it has the chance to hit the reflector at an angle at which it can escape. The mirror is moulded from solid transparent epoxy coated with a reflective gold film. According to Kolle, its design was inspired by the wings of the Papilio butterfly, which get their iridescent colour from their micron-scale structure.

Sending light onto the sample at the right angles

The researchers, who report their work in Nature Photonics, say they have already used their technique to image individual bacterial cells and microorganisms in seawater. They add that the crucial photonic substrate could be mass-produced with existing techniques, and that it could be integrated into even the most simple and compact optical microscopes. It might also be incorporated into miniature dark-field imaging devices for applications in point-of-care medical diagnostics and bio-analytics. Kolle says his team is working on prototypes for such applications.

Computational model determines dose to blood during radiotherapy


Radiation is known to be damaging to the immune system, with recent studies linking radiation-induced lymphopenia (loss of lymphocytes, the white blood cells associated with immune response) with poor survival after radiotherapy. To better understand how radiation treatments affect circulating lymphocytes, researchers at MGH/Harvard Medical School developed a 4D computational blood flow model to calculate the blood dose during radiotherapy (Phys. Med. Biol. 10.1088/1361-6560/ab6c41).

“The blood flow model aims to estimate the ionizing radiation dose to the circulating blood during a course of fractionated radiotherapy,” says first author Abdelkhalek Hammi. “This will improve our understanding of the suppressive effects of radiation on the patient’s immune system and the emergence of radiation-induced lymphopenia.”

Hammi and colleagues developed an intracranial blood flow model based on major cerebral vasculature extracted from patient MRI data, and extended with a network of generic brain vessels. For blood distribution outside the brain, they developed the model according to the reference human body model. The cerebral model includes 1050 vascular path lines and simulates more than 266,000 blood particles; the model for the entire body contains 22,178,000 particles.

Intracranial blood flow model

To determine the dose to the circulating blood, the team used Monte Carlo simulations to track the propagation of each individual blood particle through the brain and the time-dependent radiation fields. In a single treatment fraction, the blood dose approximates the dose to the circulating lymphocytes. For treatments spanning several weeks, however, Hammi notes that the effects of lymphocyte repopulation and exchange cannot be ignored; these will be investigated in future studies.
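In schematic terms the idea can be sketched as follows – a deliberately crude toy model for illustration, not the group’s actual code: blood “particles” circulate around a closed loop and accrue dose only while they sit inside the treatment field with the beam on.

```python
import random

# Toy sketch of the particle-tracking idea (not the MGH group's model).
CIRCULATION_TIME = 60.0    # seconds per loop of the circulation (assumed)
FIELD = (0.20, 0.25)       # fraction of the loop inside the beam (assumed)
TARGET_DOSE = 2.0          # Gy per fraction, as in the study
DT = 0.05                  # time step, s

def fraction_doses(dose_rate_gy_per_min, n_particles=2000):
    dose_rate = dose_rate_gy_per_min / 60.0
    beam_on = TARGET_DOSE / dose_rate        # higher dose rate, shorter beam-on
    doses = []
    for _ in range(n_particles):
        pos = random.random()                # random start point on the loop
        dose, t = 0.0, 0.0
        while t < beam_on:
            if FIELD[0] <= pos < FIELD[1]:   # particle is inside the field
                dose += dose_rate * DT
            pos = (pos + DT / CIRCULATION_TIME) % 1.0
            t += DT
        doses.append(dose)
    return doses

for rate in (2, 12):                         # Gy/min, as in the study
    d = fraction_doses(rate)
    hit = sum(x > 0 for x in d) / len(d)
    print(f"{rate} Gy/min: {hit:.0%} irradiated, mean {sum(d)/len(d):.3f} Gy")
```

Even this toy version shows the qualitative behaviour reported below: the mean blood dose is set by the target dose, while a higher dose rate (and thus a shorter beam-on time) shrinks the fraction of blood that is irradiated at all.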

For clinical use, the model would use patient-specific input data. “The model itself will not change, but parameters such as treatment fields and the exact time structure of delivery will change,” Hammi explains. “In addition, the model parameters can be adjusted to account for the patient’s age, gender, blood pressure and other haemodynamic parameters that affect how the blood particles move between compartments.”

Comparing modalities

The researchers used their model to generate blood dose–volume histograms (DVHs) after intracranial intensity-modulated radiotherapy (IMRT) and passive scattering proton therapy. They created IMRT and proton plans for the same patient and target volume, simulated 30 treatment fractions (2 Gy per fraction, at 2 Gy/min) and calculated blood DVHs for each modality.

After the first fraction, the calculated mean dose to the blood pool was 0.002 Gy for proton therapy and 0.004 Gy for IMRT. After 30 fractions, the mean doses were 0.061 Gy for protons and 0.133 Gy for IMRT – an integral dose difference of more than 118%. The highest doses to 1% of blood after 30 fractions were 0.196 and 0.343 Gy, for protons and IMRT, respectively.

The volumes of blood receiving any dose after one fraction were 10.1% and 18.4%, for proton and photon treatments, respectively. Approximately 90% of the blood pool will have been irradiated after the 11th fraction of IMRT, but only after the 21st fraction of proton therapy.

With a view to optimizing treatment parameters, the team investigated the impact of dose rate on the dose to circulating blood. While dose rate did not affect the mean dose to the blood pool, an increased dose rate (and thus a shorter beam-on time) reduced the fraction of blood receiving a low dose but increased the volume receiving high doses.

For IMRT, increased dose rates did not significantly decrease the fraction of irradiated blood after 30 fractions: 97.7% and 94.2% at 5 and 12 Gy/min, respectively. For proton therapy, on the other hand, higher dose rates reduced the irradiated blood at the end of treatment: increasing the dose rate (from 2 Gy/min) to 5 and 12 Gy/min reduced the fraction of blood receiving any dose from above 95% to 78.8% and 60%, respectively.

Threshold dose

The team also quantified the fraction of blood receiving more than a threshold of 0.43 Gy (the dose thought to lower the CD8 lymphocyte count by 10%), and saw that higher dose rates increased the volume of blood receiving this dose. Proton therapy, however, exposed only one fifth as much blood to this threshold dose as IMRT.

Finally, the researchers examined whether patient-specific variables such as cardiac output, gender and age affected the blood DVH. Higher cardiac output increased the blood volume receiving low doses and decreased the volume receiving high doses, mainly due to higher flow rate. They noted small differences between male and female patients (up to 4.3%), mainly due to differences in total blood volume and cardiac output. In younger patients, up to 10% more of the circulating blood received a low dose, independent of gender and treatment modality.

The researchers conclude that their blood flow model effectively estimates the dose to circulating blood during cranial radiation therapy. They showed that using proton therapy and increasing the dose rate can reduce the volume of irradiated blood.

“We are now working on end-to-end validation of the model based on measured lymphocyte depletion in patients treated with radiotherapy,” says Hammi. “We also want to extend the model to other organs and treatment sites.”

Metallic glasses bear up better under strain

Compressing metallic glasses could make them less prone to fracture, greatly increasing their potential for structural applications. So say researchers from the University of Cambridge in the UK and the Institute of Metal Research in Shenyang, China, who have succeeded in strain-hardening these metastable materials to a degree hitherto thought impossible.

Metallic glasses are materials with the properties of both metals and glasses. They contain metallic bonds and are thus conducting, but their atoms are disordered, as in a glass, rather than ordered as in a crystal. They are produced by heating certain substances to above their melting points and then quenching them in a way that prevents them from crystallizing.

While the exceptional strength of metallic glasses makes them promising materials for structural engineering applications, they have one major drawback: they can soften when deformed, which makes them brittle. This contrasts with normal polycrystalline metals and alloys, in which stress produces strain-hardening: under increased loading, plastic deformation in normal metals starts locally, but then spreads uniformly, allowing the material to “stretch”.

Strain softening and shear bands

This ductility, as it is known in metallurgy, is crucial for preventing catastrophic mechanical failure in structures such as steel beams for buildings. Its absence in metallic glasses is thus correspondingly disastrous for structural applications, says project team leader Lindsay Greer of Cambridge. “This is all rather frustrating,” he observes. “Many of the other properties of metallic glasses are highly attractive. For example, they can have a high toughness and show a ‘damage tolerance’ – the product of yield stress and toughness – that is higher than any other known material.”

Greer explains that unlike normal metals, metallic glasses experience non-uniform plastic deformation. At room temperature, deformation occurs via ultra-localized flow in so-called shear bands, leading to instant failure – zero ductility – when the glass is placed under tension. While no similar catastrophe occurs when the material is bent, the shear bands do give rise to unsightly surface markings, which are planes of weakness that evolve into cracks.

Strain-softening and shear bands are the Achilles’ heel of metallic glasses, Greer tells Physics World. Finding a way to strain-harden these materials has thus been a sort of “holy grail” since the earliest days of research on their mechanical properties, he adds.

A rejuvenated metallic glass

Greer and colleagues say they have now found a means of preventing shear banding in metallic glasses during plastic deformation. Their technique consists of compressing cylindrical samples of the glasses in such a way that the stresses along one axis differ from the stresses in perpendicular directions. This type of compression (known as a triaxial test) causes little plastic deformation, but it does make the central region of the cylinder softer. When the researchers cut out this central region and subjected it to stress and strain, they found that the pre-conditioned, or “rejuvenated”, specimen was much less brittle than the untreated material.

Greer explains that this apparently counter-intuitive result comes about because, in the rejuvenated metallic glass, plastic deformation causes the atoms to settle into a denser – that is, atomically better packed – configuration. “This more relaxed state is naturally harder,” he says. “The secret is to simply start with a sufficiently unrelaxed glass that wants to settle/relax in this way.”

Similar function to annealing

The team confirmed this explanation by using calorimetric tests to measure the energy content of the rejuvenated metallic glass before and after plastic deformation. These tests showed that the energy in the glass does indeed fall during plastic deformation. Thanks to electron diffraction observations, the researchers also calculated that the average interatomic spacing in the material decreases during plastic flow, proving that the atomic packing is denser.

It may seem strange to soften a metallic glass by compression and then study how it hardens upon subsequent plastic deformation, but Greer points out that a similar thing happens when a conventional polycrystalline metallic alloy is annealed, or heat-treated. In both cases, the resulting material is softer, but also tougher.

The major difference is that in a polycrystalline alloy, the annealed state has few defects, making this the low-energy, relaxed state of the system. In this case, plastic deformation generates defects, and the system’s energy increases. In contrast, the rejuvenated metallic glass starts out in a more high-energy state. Its energy then decreases during plastic deformation, producing a glass that exists in a more ordered, relaxed state and has, in effect, a lower defect density.

Wider applications

The new work, which is detailed in Nature, overthrows the idea that metallic glasses can only strain-soften upon plastic deformation, Greer says. The team’s strain-hardening technique is simple and has several practical advantages. For one, it works on metallic glasses of different compositions. It also works for samples that have previously been annealed and made brittle, effectively reversing earlier adverse treatments. Finally, it can be used to create samples of significant size, simply by starting with longer or larger-diameter cylinders.

“We hope that metallic glasses with improved properties could now be made relatively easily and thereby find much wider applications than is currently the case,” Greer says.

Baoan Sun of the Institute of Physics at the Chinese Academy of Sciences in Beijing, who was not involved in this work, agrees. “Finding strain hardening in these metallic glasses will certainly promote their real applications as structural materials,” he says.

Climate change is a ‘Pascal’s wager’: so how will you act?

Do you believe in climate change? I certainly do, though not everyone does – and some people don’t even care. But as businesses are slowly realizing, climate change is an existential threat. Whether it’s the pressure to cut carbon emissions or respond to damage to the local environment, climate change could put a company’s entire livelihood at risk. Climate change is, in other words, an example of a “Pascal’s wager”.

Pascal’s wager is a philosophical argument first presented by the French philosopher, mathematician and (let’s not forget) physicist Blaise Pascal (1623–1662). His logic was simple: God is, or God is not. Reason cannot decide between the two alternatives. The game is being played. You must wager (it’s not optional). Any rational person, Pascal argued in his posthumously published book Pensées (“Thoughts”), should live and act as though God exists regardless of belief.

If God doesn’t exist, you won’t have much to lose by believing in God or merely acting as if you do – you’ll only have forfeited a few pleasures. But if God does exist, then you stand to receive infinite gains (eternity in Heaven) and avoid infinite losses (eternity in Hell). As Pascal pointed out, our actions can have massive consequences, but our understanding of them is flawed.
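The logic can be laid out as a simple expected-value table – a schematic sketch with symbolic payoffs, which applies equally to any threat whose downside is unbounded:

```python
import math

# Schematic payoff table for the wager (symbolic values, illustrative only).
payoffs = {
    ("act", "real"): math.inf,        # infinite gain
    ("act", "not_real"): -1.0,        # a few pleasures forfeited
    ("ignore", "real"): -math.inf,    # infinite loss
    ("ignore", "not_real"): 0.0,
}

def expected_value(action, p_real):
    return (p_real * payoffs[(action, "real")]
            + (1 - p_real) * payoffs[(action, "not_real")])

# For any nonzero probability that the threat is real, acting dominates:
for p in (0.5, 0.01):
    print(p, expected_value("act", p), expected_value("ignore", p))
```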


Climate change is a modern-day Pascal’s wager, because so much is at stake. Doing nothing to tackle climate change could mean hell – rising sea levels, mass extinctions, droughts, ecosystem collapse, food shortages, famine, conflicts, war and possibly an uninhabitable planet for the next generation. But taking action could lead us to heaven – thriving as a species on a habitable planet. So even if you don’t believe in climate change, it makes sense to act rationally as though you do. The only way out is innovation and positive action.

Company action

Basically, we all need to start acting rationally, as the tech giant Microsoft is doing. In a truly inspirational new-year message released on 16 January, the company’s president Brad Smith promised that Microsoft would become carbon negative by 2030 and that, by 2050, it would have removed from the environment “all the carbon the company has emitted either directly or by electrical consumption since it was founded in 1975”.

The company also unveiled a new initiative to use Microsoft technology to help its suppliers and customers to reduce their own carbon footprints. In addition, it announced a $1bn “climate innovation fund” to accelerate the global development of technologies to reduce, capture and remove carbon. And the company promised, from the start of 2021, to make carbon reduction an “explicit aspect of our procurement processes for our supply chain”.

Now when the president of one of the very few companies with a trillion-dollar market capitalization makes such statements, the business world should sit up and take note. This is not some fluffy, feel-good PR stunt. Microsoft has drawn up a detailed plan, with milestones and a commitment to measure progress in its annual reports.

The other significant announcement in the business world came on 15 January from BlackRock Inc – the world’s largest investment manager – which declared that it would no longer invest in thermal coal. The company, which manages around $7 trillion of funds, also said it will “drop” any company directors (sell stock, vote against them and so on) who fail to act on financial risks from climate change.

More striking still, BlackRock’s chief executive Larry Fink wrote two letters: one to the heads of all companies it holds stock in and another to all of its clients. Published online, the letters reveal the firm’s environmental, social and governance priorities for the new decade and beyond. To say they’re significant doesn’t do them justice.

In his letter to chief executives, Fink said that climate change has become “a defining factor” in companies’ long-term prospects, pointing to events last September when millions of people took to the streets demanding action on climate change. “Many of them”, he wrote, “emphasized the significant and lasting impact that it will have on economic growth and prosperity – a risk that markets to date have been slower to reflect.” But, Fink continued, “awareness is rapidly changing, and we are on the edge of a fundamental reshaping of finance. The evidence on climate risk is compelling investors to reassess core assumptions about modern finance.”

Of course, businesses always face crises and challenges. But many that Fink has lived through over his 40-year career in finance – inflation spikes in the 1970s and early 1980s, the 1997 Asian currency crisis, the 2000 dot-com bubble, and the 2008 global financial crunch – were all essentially short-term. “Climate change is different,” he added. “Even if only a fraction of the projected impacts are realized, this is a much more structural, long-term crisis.”

Fink reckons that companies, investors and governments must now prepare for a “significant reallocation of capital”. Indeed, he claims that more and more of BlackRock’s clients around the world are looking to reallocate their capital into sustainable strategies. “If 10% of global investors do so – or even 5% – we will witness massive capital shifts. And this dynamic will accelerate as the next generation takes the helm of government and business.”

Young people, Fink pointed out, have been at the forefront of calling on institutions like BlackRock to address climate change, demanding more of companies and of governments, in both transparency and in action. “As trillions of dollars shift to millennials over the next few decades, as they become chief executives and chief information officers, as they become the policymakers and heads of state, they will further reshape the world’s approach to sustainability.”


My take-away message from these statements is that businesses, investors and fund managers are finally waking up and taking responsibility for their actions. So how will you wager on climate change in your business and personal lives? Will you act rationally? Or will you stick your head in the sand?
