1 December 2020 was a dark day for Puerto Rico and the global astronomy community. The iconic Arecibo Observatory collapsed, with the radio telescope’s 900-tonne suspended platform crashing into the 305 m dish below. Warning signs had been there in the preceding months, but that did little to soften the shock felt by the astronomy community.
In this episode of the Physics World Stories podcast, Andrew Glester speaks with astronomers about the impact of this dramatic event. Abel Méndez, a planetary astrobiologist at the University of Puerto Rico, explains why the observatory was a beacon for Puerto Rican scientists and engineers. Mourning continues but Méndez and colleagues have already submitted a white paper to the National Science Foundation with plans for a new telescope array on the same site.
Constructed in the 1960s with US funding, Arecibo was originally used for military purposes. Its powerful radar was bounced off the ionosphere to better understand the nature of the Earth’s upper atmosphere and to look for signs of incoming Soviet missiles. Seth Shostak, senior astronomer at the SETI Institute, talks to Glester about Arecibo’s origins and how scientists soon saw the potential for bouncing Arecibo’s radar off astronomical objects such as asteroids.
Arecibo was the world’s largest radio dish until it was surpassed in 2016 by China’s FAST telescope. Arecibo’s size and tropical setting captured the public imagination and the observatory appeared in the films GoldenEye and Contact – the adaptation of the Carl Sagan novel. Contact’s lead protagonist, Ellie Arroway (played by Jodie Foster), is partly based on SETI scientist Jill Tarter. Tarter joins the podcast to recount her experiences of advising Foster on the character and role.
Scientists at the Delft University of Technology in the Netherlands have taken an important step towards a quantum Internet by connecting three qubits (nodes) in two different labs into a quantum network. Such quantum networks could be used for secure communication, for safer means of identification or even distributed quantum computing.
The group, led by Ronald Hanson, is no stranger to setting up quantum links. In 2015 members of the group performed the first loophole-free Bell inequality violation, successfully entangling two electron spin states over 1.3 kilometres in an experiment that finally put the lid on the 80-year-old Einstein–Podolsky–Rosen dispute about the nature of entanglement. Although this two-node experiment could hardly be called a network, it laid the basis for the present work, which is described in a preprint on the arXiv repository.
From Alice to Charlie
The new network contains three communication quantum bits (qubits) in the form of nitrogen vacancy (NV) centres in diamond. These qubits are known, in traditional fashion, as Alice, Bob and Charlie, and the network that connects them has several unique aspects. Bob, for example, is associated with a separate memory qubit consisting of a nuclear spin in a carbon-13 atom. Another key aspect is that the system can signal when entanglement is established by detecting the photon that was used to create the entanglement.
The quantum network operates by first entangling the communication qubits of Alice and Bob. At Bob’s node, the entangled state is then transferred (“swapped”) onto Bob’s memory qubit, leaving Bob’s communication qubit ready for further action. The next step is to set up a remote entanglement between the communication qubits of Bob and Charlie. This creates two links, each with a shared entangled state: one shared between Alice and Bob’s memory qubit, and the other shared between the communication qubits of Bob and Charlie. Bob, the central node, then performs an operation known as a Bell-state measurement on its two qubits. This measurement teleports the state stored on Bob’s memory qubit to Charlie, which in turn establishes direct entanglement between Alice and Charlie.
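To make the sequence above concrete, here is a minimal, idealized numerical sketch of the entanglement-swapping step (written for this article, not taken from the Delft team’s code): two Bell pairs are shared between Alice–Bob and Bob–Charlie, Bob performs a Bell-state measurement on his two qubits, and Alice and Charlie are left sharing a Bell state. The sketch ignores the memory qubit, the photonic links and all sources of noise.

```python
import numpy as np

# Idealized entanglement swapping on four qubits ordered A, B1, B2, C:
# Alice-Bob share a Bell pair (A, B1) and Bob-Charlie share one (B2, C).
# Bob measures B1 and B2 in the Bell basis, leaving A and C entangled.

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# The four Bell states of two qubits
phi_plus  = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
phi_minus = (np.kron(ket0, ket0) - np.kron(ket1, ket1)) / np.sqrt(2)
psi_plus  = (np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2)
psi_minus = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)
bell_states = [phi_plus, phi_minus, psi_plus, psi_minus]

# Initial state: |Phi+>_{A,B1} x |Phi+>_{B2,C}
state = np.kron(phi_plus, phi_plus)

I2 = np.eye(2)
rng = np.random.default_rng(1)

# Bob's Bell-state measurement on the middle two qubits (B1, B2)
probs, branches = [], []
for bell in bell_states:
    projector = np.kron(np.kron(I2, np.outer(bell, bell)), I2)
    branch = projector @ state
    probs.append(float(np.vdot(branch, branch).real))
    branches.append(branch)

outcome = rng.choice(4, p=np.array(probs) / sum(probs))
post = branches[outcome] / np.sqrt(probs[outcome])

# Reduced density matrix of Alice and Charlie after the measurement
t = post.reshape(2, 2, 2, 2)                         # indices A, B1, B2, C
rho_ac = np.einsum('abcd,ebcf->adef', t, t.conj()).reshape(4, 4)

# Each measurement outcome leaves A-C in a definite Bell state (up to a
# local Pauli correction), so one of these fidelities should equal 1.
fidelities = [float(np.real(b @ rho_ac @ b)) for b in bell_states]
print(f"Bob's outcome: {outcome}, best Alice-Charlie Bell fidelity: {max(fidelities):.3f}")
```

On each run Bob obtains one of the four outcomes at random, and the Alice–Charlie state can then be rotated into a standard Bell state with a local Pauli correction – the classical message that accompanies any teleportation-based protocol.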
Stabilizing the network
Expanding a quantum network from two nodes to three – and, in time, from three to many – is not as simple as just adding more links. The task is complicated by the fact that noise (which can destroy quantum information) and optical power levels vary greatly across the network.
The Delft team addresses this problem using a twofold stabilization scheme. The first, local element of the scheme focuses on stabilizing the interferometers used to generate entanglement between the communication qubit and the “flying” qubit at each node. The team does this by measuring the phase of the light that reflects off the diamond’s surface during the entanglement process and boosting this faint phase signal with a stronger laser beam. Polarization selection ensures that the reflected light does not reach the detectors that register entanglement, where it would otherwise create false entanglement signals. The phase of the boosted light is measured with additional detectors, and the interferometers are stabilized by feeding the measured phase signal back to the piezo-controllers that position their mirrors.
The second, global part of the stabilization involves directing a portion of the laser light towards a separate interferometer used to generate entanglement between nodes. The interference is measured and the signal coupled to a fibre stretcher in one arm of the interferometer. By stretching the fibre, the phase of light in that arm can be controlled and the interferometer can be stabilized. This local and global stabilization scheme can be scaled to an arbitrary number of nodes, making it possible to expand the network.
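The snippet below is a toy illustration of the feedback principle just described: a slowly drifting interferometer phase is read out and a proportional–integral correction is fed back to an actuator such as a piezo-mounted mirror or a fibre stretcher. The drift rate, noise level and gains are invented for illustration and are not the Delft team’s values.

```python
import numpy as np

# Toy phase-stabilization loop (hypothetical parameters, for illustration only):
# an interferometer phase drifts and jitters; the residual phase error is read
# out from the interference signal and a proportional-integral (PI) correction
# is applied to an actuator (piezo mirror or fibre stretcher).

rng = np.random.default_rng(0)
n_steps = 5000
dt = 1e-4                 # control-loop time step in seconds (assumed)
drift_rate = 2.0          # slow environmental phase drift in rad/s (assumed)
noise_std = 0.01          # phase jitter added per step in rad (assumed)
kp, ki = 0.3, 5.0         # PI gains (assumed)

phase = 0.0               # true optical phase (rad)
correction = 0.0          # phase applied by the actuator (rad)
integral = 0.0
residuals = []

for _ in range(n_steps):
    # Environment: slow drift plus random jitter
    phase += drift_rate * dt + rng.normal(0.0, noise_std)
    # Readout: small-signal interference measurement of the residual phase
    error = np.sin(phase - correction)
    # Feedback: PI update of the actuator
    integral += error * dt
    correction += kp * error + ki * integral * dt
    residuals.append(phase - correction)

locked = residuals[n_steps // 2:]         # ignore the initial settling period
print(f"residual phase jitter after locking: {np.std(locked):.3f} rad")
```

In the real experiment the same idea is applied at two levels – locally at each node and globally across the fibre link between nodes – which is what makes the scheme scalable to more nodes.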
Expansion strategies
The researchers suggest that their network could be expanded by increasing the number of qubits at a single network node, similar to the ten-qubit register based on a diamond NV centre that a different Delft group created in 2019. They also say that the new network provides a platform for developing higher-level quantum network control layers that would make it possible to automate the network.
Jian-Wei Pan, a quantum communications expert at the University of Science and Technology of China who was not involved in the work, thinks its key achievement lies in realizing entanglement swapping between two elementary links of remotely entangled matter. “Such a process is essential in extending entanglement distances via quantum repeaters,” Pan says. However, he adds that the fidelities of the elementary entanglement, gate operations and storage – which together determine the scalability of the group’s approach – will need to be improved before a larger-scale network can be constructed.
Anders Sørensen from the Niels Bohr Institute in Denmark thinks that this is an important milestone in the quest for the quantum Internet. “This is the first time that anybody has managed to connect more than two processing nodes,” says Sørensen, who was also not part of the Delft team. “At the same time, they demonstrate some of the protocols, for example entanglement swapping, which we believe will play an important role in a full-scale quantum network.” While “formidable challenges” remain, especially when it comes to stretching the network over longer distances, Sørensen concludes that “this is a challenge that they are in a good position to tackle”.
Researchers at the quantum computing firm D-Wave Systems have shown that their quantum processor can simulate the behaviour of an “untwisting” quantum magnet much faster than a classical machine. Led by D-Wave’s director of performance research Andrew King, the team used the new low-noise quantum processor to show that the quantum speed-up increases for harder simulations. The result shows that even near-term quantum simulators could have a significant advantage over classical methods for practical problems such as designing new materials.
The D-Wave simulators are specialized quantum computers known as quantum annealers. To perform a simulation, the quantum bits, or qubits, in the annealer are initialized in a classical ground state and allowed to interact and evolve under conditions programmed to mimic a particular system. The final state of the qubits is then measured to reveal the desired information.
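As a rough illustration of this anneal-and-measure cycle, the sketch below evolves a tiny transverse-field Ising model from the ground state of a driver Hamiltonian towards a simple problem Hamiltonian and then “measures” the outcome. The couplings, schedule and anneal time are arbitrary choices made for illustration and bear no relation to D-Wave’s hardware parameters.

```python
import numpy as np
from scipy.linalg import expm

# Minimal quantum-annealing sketch: a 4-spin ferromagnetic Ising chain in a
# transverse field, solved by brute-force state-vector evolution.

n = 4
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def embed(single, site):
    """Place a single-qubit operator at the given site of the n-qubit register."""
    out = np.array([[1.0 + 0j]])
    for i in range(n):
        out = np.kron(out, single if i == site else I2)
    return out

# Driver Hamiltonian (transverse field) and problem Hamiltonian (Ising chain)
H_driver = -sum(embed(sx, i) for i in range(n))
J = -1.0                                  # ferromagnetic nearest-neighbour coupling
H_problem = sum(J * embed(sz, i) @ embed(sz, i + 1) for i in range(n - 1))

# Initial state: uniform superposition, the ground state of the driver
psi = np.ones(2 ** n, dtype=complex) / np.sqrt(2 ** n)

T, dt = 20.0, 0.02                        # total anneal time and step (arbitrary units)
for step in range(int(T / dt)):
    s = (step + 0.5) * dt / T             # linear schedule: s runs from 0 to 1
    H = (1 - s) * H_driver + s * H_problem
    psi = expm(-1j * H * dt) @ psi

# "Measurement": probability of ending in a classical ground state
# (all spins aligned: basis states 000...0 and 111...1)
probs = np.abs(psi) ** 2
print(f"P(classical ground states) = {probs[0] + probs[-1]:.3f}")
```

If the anneal is slow enough, the adiabatic theorem keeps the system close to the instantaneous ground state, so the final measurement returns the problem Hamiltonian’s ground state with high probability.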
King explains that the quantum magnet they simulated experiences both quantum fluctuations (which lead to entanglement and tunnelling) and thermal fluctuations. These competing effects create exotic topological phase transitions in materials, which were the subject of the 2016 Nobel Prize in Physics.
Quantum annealers: Part of the D-Wave team in 2018. (Courtesy: D-Wave Systems)
The researchers used up to 1440 qubits to simulate their quantum magnet. In a study published in Nature Communications, they report that the quantum simulations were over three million times faster than the corresponding classical simulations based on quantum Monte Carlo algorithms.
Importantly, the experiment also showed that the speed of the quantum simulations scaled better with the difficulty of the problem than that of the classical ones. The quantum speed-up over classical methods was greater when the researchers simulated colder systems with larger quantum effects, and it also increased when they simulated larger systems. Hence the quantum speed-ups are greatest for the hardest simulations, which can take classical algorithms an extremely long time.
Advantage with a twist
The D-Wave team performed a similar quantum magnet simulation in 2018, but it was too fast to take accurate measurements of the system’s dynamics. To slow down the simulation, the researchers added a so-called topological obstruction to the quantum magnet – essentially, a “twist” in the magnet that takes time to unravel. Together with a new low-noise quantum processor, this addition enabled them to accurately measure the system’s dynamics.
“Topological obstructions can trap classical simulations that use quantum Monte Carlo algorithms, while a quantum annealer can circumvent the obstructions via tunnelling,” explains Daniel Lidar, who directs the Center for Quantum Information Science and Technology at the University of Southern California, US, and was not involved with the research. “This work has demonstrated a speed-up arising from this phenomenon, which is the first such demonstration of its kind. The result is very interesting and shows that quantum annealing is promising as a quantum simulation tool.”
In contrast with previous simulations comparing quantum and classical algorithms, King’s experiment directly relates to a useful problem. Quantum magnets are already being investigated for their potential applications in creating new materials. Quantum speed-ups could rapidly accelerate this research; however, the D-Wave team does not rule out the possibility of developing faster classical algorithms than those currently used. The team ultimately sees the most promising upcoming applications of quantum simulations to be a hybrid of quantum and classical methods. “This is where we expect to find near-term value for customers,” King says.
What prize do you get for not moving a single muscle all week? A trophy!
Alternatively, patients who have suffered volumetric muscle loss injuries may be interested in a novel technology recently reported in Advanced Healthcare Materials. The authors of the article – from the University of Nebraska, the University of Connecticut and Brigham and Women’s Hospital – developed a handheld printer to deliver hydrogel-based bioinks for treatment of such injuries.
When a large proportion of a muscle’s mass is lost – due to trauma, disease or surgery – the muscle repairs itself by laying down disabling scar tissue, leaving behind some degree of lost muscle function. Numerous approaches to assist the body’s natural regenerative processes have been developed, but all have limited efficacy and utility. For example, treatments such as direct cell delivery are constrained by the complex process of differentiating and harvesting myogenic cells, the cells that develop into skeletal muscle. In settings where a direct and rapid response to volumetric muscle injuries is required – specifically in military trauma care – such approaches are impractical.
This latest study demonstrates a feasible approach for treating such injuries. The researchers developed a novel “Muscle Ink” for printing directly into large muscle-loss wounds. Within this specialized ink, vascular endothelial growth factor – key to inducing the angiogenesis and vascularization needed for muscular regeneration – is attached to 2D nanoclay discs, allowing for its continuous release over multiple weeks. The team incorporated these growth factor-bound discs into a biocompatible hydrogel that can adhere to wet tissue, such as the remaining musculature. The resulting scaffold has mechanical properties conducive to cellular regeneration and natural deformation during movement.
The gel is applied by loading it into a novel handheld printer, composed of a loadable syringe, a motor-controlled syringe pump and an ultraviolet light-emitting diode for in vivo hydrogel crosslinking. Clinicians can use the device, much like a hot glue gun, to print directly into wounds, creating a scaffold for cell regeneration and an environment that encourages it.
Left: the handheld printer. Right: murine model bioprinting. (Courtesy: Adv. Healthcare Mater. 10.1002/adhm.202002152)
The researchers evaluated the efficacy of the printing process in murine models. Animals underwent muscle mass injuries to the quadriceps. Some were left untreated, while others were treated with the Muscle Ink, with or without growth factor. After an eight-week recovery, the animals were tested for running function.
The mice treated with growth factor-loaded Muscle Ink had a maximum running speed that was not significantly different from that of uninjured mice. This group was also able to run approximately twice as far as the untreated group or those treated with Muscle Ink but without growth factor. These tests support the premise that slow release of growth factor from the ink was responsible for the improvement in functional performance after volumetric muscle loss injury.
The research team envisions that, in addition to the tested skeletal muscle, this printing technique could be applied for treatment of other soft-tissue wounds. The handheld printer helps broaden the potential of in vivo delivery of hydrogel scaffolds for tissue therapy.
It’s questionable whether the romanticized archetype of the lone scientist has ever been the reality of science – even Isaac Newton didn’t work entirely in a vacuum, but corresponded with contemporaries such as Gottfried Leibniz. And certainly in the 21st century, working together is the name of the game.
I was reminded of this principle while attending the fourth Women in Quantum Summit, which took place on 9–11 March. It highlighted for me the importance not only of gender and background diversity in science, but also of collaboration in general.
Hosted by OneQuantum – an organization that brings together quantum-technology leaders from around the world – the summit included career-focused talks from quantum companies about their work as well as sessions on various aspects of the technology and where it is today.
Together, these events showcased the wide variety of opportunities in this exciting new field and allowed attendees to meet potential employers. While these summits do not exclude men, they differ from many such conferences in that most speakers are women, fostering an inclusive and inspiring environment for many who might often feel like outsiders in this field.
The event programme included “employers pitch” sessions, in which several quantum-technology companies, from start-ups like Pasqal to household names like Google, working on both hardware and software, introduced their work and the kinds of roles they have available. Most representatives were women working at the respective companies, and they were keen to emphasize their employers’ desire to hire more – not as some kind of box-ticking exercise, but because more diverse teams tend to perform better.
Unique to this most recent summit was a career fair following the employers pitch, where attendees could talk to representatives of companies in online booths, designed to recreate the networking opportunities presented by traditional in-person conferences. Though these online alternatives can seem a little strange at first, I think we’re all starting to get the hang of it now, and at a conference focused on quantum, of all fields, it felt aptly modern.
The second day of the summit was dedicated to the theme of quantum machine learning (QML) – an emerging field that aims to use quantum technology to advance machine learning. Three women scientists who work at Zapata Computing – a start-up founded in 2017 to develop quantum software for business – gave a fascinating talk about their work on QML. A poll at the beginning of this talk found that 83% of the audience, myself included, had no background in QML, so there was a lot to learn.
The first speaker, Hannah Sim, a quantum scientist, described her career path to date and how she has used her background in quantum physics to improve machine-learning algorithms.
The second Zapata speaker, Kaitlin Gili, a quantum applications intern at Zapata, explained how quantum noise, typically thought to be a limitation in QML, can actually be harnessed as a useful feature when addressing certain tasks. For example, noisy quantum hardware can be applied to “generative models”, which are a promising candidate for AI. These models aim to find an unknown probability distribution, given many samples to train on, and produce an output of original data that is similar to the training set. The random noise that is input together with the training data turns out to be a useful part of the system.
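As a purely classical analogy of that idea (not quantum, and not Zapata’s software), the sketch below fits a very simple generative model to a set of training samples and then turns fresh random noise into new samples that resemble the training set – the noise is an essential ingredient of the generation step, not a nuisance.

```python
import numpy as np

# A deliberately simple classical generative model: estimate the mean and
# covariance of the training data, then map random noise z through
# x = mu + L z (with L L^T = covariance) to generate new, similar samples.

rng = np.random.default_rng(42)

# "Unknown" target distribution, available only through training samples
training_data = rng.multivariate_normal(mean=[2.0, -1.0],
                                        cov=[[1.0, 0.6], [0.6, 0.5]],
                                        size=2000)

# Training: fit the model parameters to the samples
mu = training_data.mean(axis=0)
sigma = np.cov(training_data, rowvar=False)
L = np.linalg.cholesky(sigma)

# Generation: random noise in, plausible new data out
noise = rng.standard_normal(size=(1000, 2))
generated = mu + noise @ L.T

print("training mean:  ", np.round(mu, 2))
print("generated mean: ", np.round(generated.mean(axis=0), 2))
print("generated covariance:\n", np.round(np.cov(generated, rowvar=False), 2))
```

In the quantum version discussed in the talk, the randomness supplied by noisy quantum hardware plays a role loosely analogous to the noise vector z here.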
The third speaker, Marta Mauri, a quantum software engineer at Zapata, touched on the importance of curiosity-driven research, arguing in favour of exploratory studies and against the idea that every attempt to enhance machine learning with quantum technology must be justified in advance.
I was left with the feeling that there are many questions yet to be answered in this area, and even more yet to be asked, with room for people from diverse backgrounds and different perspectives to contribute. Like many emerging scientific fields, QML is now so broad and deep that a single scientist – or even a handful of scientists – cannot hope to make progress in isolation.
The sea of short-lived particles in the proton has a far higher abundance of anti-down quarks than anti-up quarks, new research has shown. An international team including Paul Reimer of Argonne National Laboratory in the US discovered the asymmetry by firing a beam of protons at a hydrogen (proton) target at Fermilab. The results shed new light on highly complex interactions within the proton and could lead to a better understanding of the dynamics that unfold following high-energy proton collisions.
The original quark model of the proton is pleasingly simple: two up quarks and one down quark interact with each other by exchanging gluons, which bind the particles together through the strong nuclear force. However, physicists have known for some time that the full picture is far more complex, with these three “valence” quarks alone accounting for only a fraction of the proton’s mass. Instead, the proton is now modelled as a roiling “sea” of many quarks, antiquarks and gluons that pop in and out of existence over very short time scales. This makes it very difficult to both calculate the internal properties of the proton and study them experimentally.
In the past, physicists have glimpsed these virtual particles in scattering experiments that measure their probability distributions in relation to the fraction of the total proton momentum they carry. Several theories had suggested that the sea should contain near-identical probability distributions of anti-up and anti-down quarks, the reasoning being that the masses of the two are roughly the same when compared with the total mass of the proton.
Muon and anti-muon pair
Reimer and colleagues tested this idea using the SeaQuest spectrometer at Fermilab, in which protons are fired at a proton target. In some collisions, a valence quark in one proton will interact with a virtual antiquark of the same flavour in the other proton. The pair annihilates into a virtual photon that can transform into a muon–antimuon pair, which can then be detected.
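In the textbook leading-order picture of this “Drell–Yan” process (a general description, not the specific SeaQuest analysis), the annihilation and the resulting cross section read

$$ q + \bar{q} \;\rightarrow\; \gamma^{*} \;\rightarrow\; \mu^{+} + \mu^{-}, \qquad \frac{\mathrm{d}\sigma}{\mathrm{d}x_{1}\,\mathrm{d}x_{2}} \;=\; \frac{4\pi\alpha^{2}}{9\,x_{1}x_{2}\,s}\,\sum_{q} e_{q}^{2}\,\big[\,q(x_{1})\,\bar{q}(x_{2}) + \bar{q}(x_{1})\,q(x_{2})\,\big], $$

where $x_1$ and $x_2$ are the momentum fractions carried by the quark and antiquark, $e_q$ is the quark charge and $s$ is the collision energy squared. Because the rate depends directly on the antiquark distributions $\bar{u}(x)$ and $\bar{d}(x)$ in the target, counting muon pairs probes the antimatter content of the proton sea.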
Contrary to previous predictions, the researchers found that over a wide range of collision momenta, protons contain a far higher abundance of anti-down quarks than anti-up. The team hopes that these results will generate new interest in several proposed mechanisms for antimatter asymmetry within protons – which had fallen out of favour following previous experimental results.
The result could enable physicists to better interpret the results of high-energy proton collisions at CERN’s Large Hadron Collider. Accounting for antimatter asymmetry could lead to a better understanding of the particles produced by these collisions – including the high-mass W and Z bosons, which mediate the weak nuclear force.
Researchers have made ferroelectric domain wall diodes from structures etched on the surface of an insulating single crystal. The new devices, which are made from a material that is already widely employed in optoelectronics, can be erased, positioned and shaped using electric fields and might become fundamental elements in large-scale integrated circuits.
Domain walls are narrow (roughly 10-100 nm) boundaries between regions of a material where the dipole moments point “up” and neighbouring regions where they point “down”. At these boundaries, the dipole moments undergo a gradual transition to the opposite orientation rather than flipping abruptly.
Technologies that exploit these structures in ferromagnets have advanced considerably over the last 15 years, making it possible to construct devices such as racetrack memories and circuits that operate using domain-wall logic.
Spurred on by these advances, some researchers have turned their attention to analogous domain walls in ferroelectrics – that is, materials that have permanent electric dipole moments in the same way as their ferromagnetic counterparts have permanent magnetic dipole moments. Ferroelectric materials hold particular promise for applications because their dipole moments can be oriented using electric fields, which are much easier to create than the magnetic fields used to manipulate ferromagnets.
New group of two-dimensional conductors
Ferroelectric domain walls have several useful properties. In ferroelectric devices such as diodes, the domain walls, regardless of their polarity, all align in the same direction when an electric field is applied. The domain walls can therefore be reversibly created, erased, positioned and shaped using positive or negative voltages.
Researchers in the School of Microelectronics at Fudan University, China, have now developed a new way of constructing such diodes by using ferroelectric mesa-like cells that form at the surface of an insulating crystal of lithium niobate (LiNbO3). This material is already commonly used in many optical and optoelectronic devices, including optical waveguides and piezoelectric sensors.
Led by Jun Jiang and An-Quan Jiang, the researchers used electron-beam lithography and dry etching to fabricate cells 60 nm high, 300 nm wide and 200 nm long on the surface of the LiNbO3. They then connected two platinum (Pt) electrodes to the left and right sides of a cell for subsequent measurements.
When they applied an electric field across the material via the electrodes, they observed that the domain within part of a cell reversed so that it pointed antiparallel to a domain at the bottom of the cell (which remained unswitched). This led to the formation of a conducting domain wall.
Interfacial “imprint field”
The team controlled the conducting domain wall’s current (which can be as high as 6 μA under an applied voltage of 4 V) using two unstable (volatile) interfacial domain walls connected to the two side Pt electrodes. The researchers explain that these interfacial domain walls disappear – turning off the wall’s current path – once the applied electric field is removed or a negative voltage is applied.
“We ascribe the rectifying behaviour to the volatile domains within the interfacial layers,” team member Chao Wang tells Physics World. “As we remove the applied voltage or reduce it to below the device’s onset voltage (Von), the interfacial domains switch back into their previous orientations thanks to the existence of an interfacial ‘imprint field’. This field does not exist in the bottom domain, which, as we remember, is non-volatile.”
Reporting its work in Chinese Physics Letters, the Fudan University team says it will now be focusing on optimizing the properties of its devices – namely their Von, their on/off current ratio and their stability.
The idea that a record of the Earth’s magnetic past might be stored in objects made from fired clay dates back to the 16th century. William Gilbert, physician to Queen Elizabeth I, hypothesized in his work De Magnete that the Earth is a giant bar magnet and that clay bricks possess a magnetic memory. This phenomenon – known as “thermoremanent magnetization” – now forms the basis of a well-established method for dating archaeological sites that contain kilns, hearths, ovens or furnaces.
Indeed, the study of these burnt materials containing magnetic minerals, found at archaeological sites, is known as “archaeomagnetism”. One of the aims of this field is to help geophysicists gain a better understanding of local changes that have occurred in the Earth’s magnetic field over the past 3000 years. And if we know how the field changed in the past, we could also get insights into our magnetic future.
We already know that the Earth’s magnetic field has lost around 10% of its intensity over the last 150 years. “The dipole strength has been steadily decreasing at a rate such that, should it continue, in 2000 years the magnetic field strength would be zero,” says geologist Rory Cottrell of the University of Rochester in the US. “The thought is that the planet is headed for a magnetic field reversal.”
Data collected from magnetic records in rocks indicate that over the last 76 million years there have been 170 reversals in which the north–south polarity of the field has completely switched. And it seems that another event is overdue: a full reversal has happened, on average, every 200,000 years over the last 10 million years, and the last one was 780,000 years ago.
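Taking these figures at face value gives a sense of the timescales involved:

$$ \frac{76\ \text{million years}}{170\ \text{reversals}} \;\approx\; 450\,000\ \text{years between reversals on average}, $$

compared with roughly one every 200,000 years over the past 10 million years – so the 780,000 years that have elapsed since the most recent reversal is several times the recent average interval.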
Even small changes in the Earth’s magnetic field can have far-reaching repercussions for the planet’s surface. That’s because the magnetic field acts as a shield, repelling and trapping charged particles from the Sun that would otherwise cause electrical grid failures, navigational system malfunctions, and satellite breakdowns. Strong solar winds already cause problems from time to time, notably in 1989 when a billion-tonne cloud of solar plasma breached the Earth’s magnetic field. This created electrical currents in the ground that caused an electrical power blackout across the entire province of Quebec, Canada. If the field weakens further, we can expect more such events, triggering major disruptions.
1 Strange anomaly The South Atlantic Anomaly is an area of weak magnetic field stretching from South America to southern Africa, captured here by the European Space Agency’s Swarm satellite constellation. The anomaly has been growing and intensifying over the last 200 years, which could mean that a reversal of the Earth’s magnetic field is on its way. To see if that might happen, geophysicists have been examining archaeological artefacts to try to decipher how our magnetic history has changed over the previous 3000 years. (Courtesy: Division of Geomagnetism, DTU Space)
A particular cause for concern is the South Atlantic Anomaly, an area stretching from Chile in South America to Zimbabwe in Africa, where the magnetic field intensity is much lower than the global average (figure 1). The magnetic field strength across this region can reach as low as 25 μT, compared with up to 67 μT for other parts of the Earth’s surface. “It’s low enough that incoming radiation is no longer deflected and it interferes with satellite transmissions,” says Vincent Hare, a geophysicist and archaeological scientist at the University of Cape Town, South Africa. What’s more, the anomaly has been growing and intensifying over the last 200 years or so, which could be yet another signal that a field reversal is on its way.
Measuring archaeomagnetism
To understand whether this localized anomaly might be a sign of more significant changes, geophysicists have been examining our planet’s relatively recent magnetic history. Unfortunately, direct observations of the Earth’s magnetic field have only been collected since the 1850s, and even then only in some locations. While magnetic information contained in rocks goes back millions of years, researchers have focused their attention on archaeological artefacts to reconstruct our magnetic history over the last 3000 years or so.
Clays or other materials containing magnetic minerals lose any magnetic ordering when they are heated above 570 °C, but then become imprinted with the Earth’s ambient magnetic field when they are cooled
The field of archaeomagnetism relies on the fact that clays or other materials containing magnetic minerals – usually magnetite – lose any magnetic ordering when they are heated above 570 °C (the Curie temperature). When such a sample cools back down below the Curie temperature, its magnetic particles remagnetize in the direction of the local magnetic field at that time. In this way, archaeological samples provide a snapshot of the Earth’s ambient magnetic field at different times and places in history. They can reveal both the intensity of the magnetic field and its direction, which is measured at any point on the Earth’s surface by the declination – the angle on the horizontal plane between magnetic north and true north (figure 2).
The first attempt to extract magnetic information from fired clay was made in the late 19th century, when the Italian scientist Giuseppe Folgheraiter calibrated a “geomagnetic secular variation curve” – a record of changes in both the declination and inclination of the Earth’s magnetic field, in a given location – for dating ancient pottery. The technique became more established in the 1970s, and can now deliver the same sort of precision as radiocarbon dating. “The job of the archaeomagnetist is to take samples and measure them to death,” says Hare.
But obtaining accurate measurements is far from simple. For a start, the residual magnetism in archaeological samples is tiny, with magnetic moments in the order of 10⁻³–10⁻⁵ Am²/kg, which is an order of magnitude lower than would be required to move a compass needle. Such small magnetic signals can only be detected with cryogenic magnetometers made from superconducting quantum interference devices (SQUIDs). Experiments must also be carried out in a “magnetic vacuum”, often using a Helmholtz cage that creates a uniform magnetic field to cancel out the Earth’s magnetic field.
Another complication is the compound nature of the raw magnetic signal. “The measurements are often a vector sum of the ancient magnetization you’re interested in, and also more recent overprints,” says Andy Biggin, a palaeomagnetist at the University of Liverpool in the UK. Those more recent or “secondary” magnetizations, he says, can often be successfully removed by incrementally heating the samples to temperatures approaching the Curie point, and then cooling them down after each heating step. “That gradually strips away the less stable magnetizations,” Biggin adds.
The accuracy of the measurements also depends on the magnetic structure of the sample, with smaller magnetic grains retaining their magnetization for longer. In magnetite, for example, magnetic grains measuring 50–80 nm can store their magnetic information for billions of years. Obviously, accurate readings of the field direction can only be taken if the object has not been disturbed since the field was imprinted. In other words, the materials should not, for example, have been significantly dried or water-logged since the sample was last heated to the Curie temperature. This requirement rules out a number of types of samples including those that may have been disturbed while being found. “Pottery kilns, or even [the remains of] cities that burned down, make an excellent archaeomagnetic record,” says Biggin.
It is even harder to determine the magnetic intensity from archaeological artefacts, since the present-day measurement also depends on the intrinsic ability of the sample to acquire thermoremanent magnetization. The easiest way to determine this intrinsic property, says Biggin, is to expose the sample to a known magnetic field and then measure the resulting magnetization. If, for example, the new magnetization is twice as strong as the ancient magnetization, the ancient magnetic field must have been half as strong as the controlled field used in the lab.
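Written schematically (this is the principle behind Thellier-type palaeointensity experiments, rather than any one laboratory’s exact protocol), the ratio argument is

$$ B_{\text{ancient}} \;=\; B_{\text{lab}} \times \frac{M_{\text{ancient}}}{M_{\text{lab}}}, $$

where $M_{\text{ancient}}$ is the thermoremanent magnetization the sample originally carried and $M_{\text{lab}}$ is the magnetization it acquires when cooled in the known laboratory field $B_{\text{lab}}$.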
But working with ancient pieces of clay inevitably introduces problems, partly because the heating process often causes chemical changes or physical deterioration in the samples. “I have never encountered an analytical technique that is so difficult and takes so long,” says Hare. “You can get a month into your measurements, and then have them fail.” Despite these difficulties, geophysicists have already pieced together an accurate magnetic record for western Europe and large parts of the Middle East. In the UK, for example, archaeomagnetic dating now extends back to 1000 BCE, in some cases with accuracies within tens of years.
2 The Earth’s magnetic field (a) The Earth’s magnetic field acts like a giant bar magnet, with the field lines encircling the planet and protecting it from the solar wind. (Courtesy: Shutterstock/Milagli) (b) The direction of the magnetic field at any given point on the Earth’s surface is defined by the declination – the angle on a horizontal plane between magnetic north and the North Pole. This angle varies across the Earth’s surface and changes over time, making it of interest to those involved in “archaeomagnetism”.
Searching for anomalies
This increasing amount of archaeomagnetic data suggests that the current South Atlantic Anomaly is not the only example of extreme local variations in the recent history of the Earth’s magnetic field. One area of focus is the Middle East, where a team of geologists and archaeologists has been studying the magnetization of ancient artefacts found in Israel. “We found very mysterious magnetic fields and surprisingly different than expected – super super strong,” says geophysicist Ron Shaar from the Hebrew University of Jerusalem. Strange behaviour was seen in the field direction as well as the intensity, with anomalies of more than 10° from the prevailing field direction of the time.
In 2016 the team published results obtained from pottery shards and cooking ovens found at Tel Megiddo and Tel Hazor, two sites in Israel that were occupied during the Iron Age more than 3000 years ago (figure 3). The data reveal the evolution of an extreme geomagnetic high between the 11th and 8th centuries BCE, culminating in two “archaeomagnetic jerks” or “geomagnetic spikes” where the field intensity shoots up and down again in less than a century (Earth and Planetary Science Letters 442 173). The two spikes are centred at 732 BCE and 980 BCE, and each one has a field strength more than twice that of the current dipole field. This period has since become known as the Levantine Iron Age Anomaly.
One intriguing possibility is that the effects of these high fields might have been seen during biblical times. Passages from the book of Ezekiel, written 2600 years ago and chronicling a journey through Turkey, describe an immense cloud with flashing lightning surrounded by brilliant light. This depiction is thought to refer to the Aurora Borealis, normally only observed in the far north, where the stronger magnetic field funnels fast-moving charged particles from the Sun into the atmosphere. But a stronger magnetic field over the Middle East at that time could explain the lights seen by the prophet. Although the time period doesn’t match exactly with the spikes detected by the team, geophysicist Amotz Agnon – who founded the paleomagnetism lab in Jerusalem and initiated the Tel Megiddo project – points out that “with prophecies, you never know, maybe [this was] just a rumour from some oral tradition”.
In 2020 Shaar and his team published new data from an Iron Age excavation site in Jerusalem. They analysed 397 samples of burnt material from the floor of a building that they deduced was destroyed during the Babylonian conquest of the city, dated to August 586 BCE. Their results reveal similar high-field values, and also provide an exact anchor date for their measurements (PLOS ONE 15 e0237029).
Data from further afield show just how far the anomaly stretched at that time. “We can trace its evolution, how it starts in the Middle East and migrates westward toward western Europe over a period of a few hundred years or so,” says Shaar. He and others have studied material from Turkey and Cyprus that shows large swings in magnetic field direction between 1910 and 1850 BCE, with exceptionally high intensities around 700 BCE. Other data from Georgia show high-field values in periods stretching from the 10th to 9th centuries BCE, as well as fast field variations about 500 years later.
But this unusual magnetic behaviour is quite different from the anomaly now seen in the southern Atlantic, which is a localized region of weak magnetic field. To find examples of similar low-intensity anomalies from the past, Cottrell decided to search for clues in southern Africa. Working in collaboration with South African archaeologists, Cottrell identified suitable samples from sites near the Shashe and Limpopo rivers in northern South Africa, Mozambique, Botswana and Zimbabwe, dated from 425 to 1550 CE.
“The Iron Age of southern Africa is a good place to go,” says Hare, who was part of the study team. He explains that the local people would have built huts with clay floors, and regularly performed certain rituals to cleanse the community if there was a drought or a similar event. “One of those would be the burning of a hut floor, and that’s perfect for archaeomagnetism.”
So far only field-direction measurements have been published, but these early results already show interesting anomalous behaviour (Geophysical Research Letters 45 1361). “If we look at the magnetic field between today and 1500 CE, the rate of change was on the order of 0.06° per year, but between 1500 CE and 1350 CE, it was almost double that,” explains Cottrell. The team also identified an earlier period of relatively rapid change between the 6th and 7th centuries CE.
Cottrell believes that this variability is the most recent historic display of whatever phenomenon is causing the current South Atlantic Anomaly. This had previously been thought to be only a very recent event, but these new findings suggest that some parts of the world might be prone to repeated changes in the magnetic field. To test this idea, Biggin looked further back in the geological record. He studied volcanic glasses formed 8–11.5 million years ago on the island of Saint Helena, right in the middle of the South Atlantic Ocean, and also found large variations in the direction of the magnetic field. This finding therefore supported the view that the Earth’s magnetic field has been unstable in this region for millions of years.
3 Hidden meaning The hill-top site of Tel Megiddo in Israel contains the remains of at least 20 cities, dating from about 5000 BCE to the 4th century BCE. Artefacts recovered from the site have revealed significant anomalies in the Earth’s magnetic field going back to the Iron Age. Among those to have worked on the project have been geophysicist Ron Shaar from the Hebrew University of Jerusalem and archaeologist Israel Finkelstein. (Courtesy: Megiddo Expedition)
Under the mantle
It is still unclear why certain regions experience these continuing anomalies, but geophysicists believe that the answer may lie in the interactions between the Earth’s mantle and its outer core – the layer of molten, iron-rich material, beginning some 2889 km below the surface, that is responsible for the Earth’s magnetic field. The field is generated by a dynamo process in which the Earth’s rotation, combined with convection currents in the molten core, creates rotating columns of liquid that sustain the magnetic field. “When you move a conductor through a magnetic field, you induce electric currents and that makes more magnetic field – so it’s self-sustaining,” explains Biggin.
Anomalies in the magnetic field are thought to be associated with patches of magnetic field in the outer core that are stronger or weaker relative to the overall magnetic dipole, or that even point in the opposite direction. “As these flux patches move, they intensify and diminish, and cause very fast local changes,” says Biggin. Indeed, the current South Atlantic Anomaly seems to sit on top of one or more patches of opposing flux.
The Rochester team has proposed that these flux patches are associated with temperature or density changes deep in the Earth’s mantle. “Africa sits on top of a very special seismological feature in the interior of the Earth, called a large low-shear-velocity province,” says Hare. “It’s essentially a slightly heavier portion of the lowermost mantle of the Earth that sits on top of the outer core, and protrudes slightly into it.” This protrusion then perturbs the flow of the liquid outer core, causing flux patches that alter the magnetic field on the Earth’s surface.
Future clues
Overall, the data from archaeomagnetic studies have been reassuring for the future of the Earth’s magnetic field, since the anomalies we see today are clearly in line with past behaviour. “What we have observed over the past several hundred years is a very normal behaviour of the geomagnetic field,” says Shaar. “There is nothing to worry about based on comparison of today’s field with what we know about the ancient field.” That view is backed up by magnetic records obtained from rocks that were formed much further back in the Earth’s history, which show that the Earth’s magnetic field is now globally much stronger than in the 50 000 years leading up to the past five reversals.
Despite this general reassurance, there is still a lot to explore and understand about the anomalies in the Earth’s magnetic field. That means collecting more data on intensity variations over the last three millennia, but that’s a daunting task when there is still such a high failure rate in sample analysis. “Very few [geophysicists] focus on intensity, because the experiments drive them mad,” says Hare. “But it’s the key to this whole question.”
A new measurement technique being developed combines computed X-ray tomography with scanning magnetometry
One solution could be a new measurement technique being developed by Lennart de Groot, a geophysicist from Utrecht University in the Netherlands. Rather than simultaneously measuring the magnetism of millions of grains in one sample, de Groot combines computed X-ray tomography with scanning magnetometry to calculate the unique contribution of each grain (Geophysical Research Letters 45 2995). More accurate results can be achieved, he says, because the technique requires only a small subset of the magnetic grains contained within each sample.
It will also be important to source samples from a wider variety of locations. Detailed magnetic profiles now exist for Europe and much of the Middle East, while data coverage in China is also improving. It remains difficult to source magnetic data from the southern hemisphere, but Cottrell is continuing her work in Africa, adding that “there has been a concerted effort by many researchers, particularly from South America, to collect this data”.
As more data become available, geophysicists are convinced that more anomalous behaviour will emerge in the Earth’s magnetic history. There is already some evidence of strong flux patches under Siberia, for example, and in the Southern Ocean near Australia. Shaar agrees that other anomalies will be found, and predicts that any new discoveries will be just as puzzling as those reported so far. “The world is huge and I suspect that we will find in the future that the geomagnetic field is nothing like we have measured in the past few hundreds of years,” he says. “It is an evolving thing, constantly changing, and there will be many surprises.”
New, large and highly radioactive particles have been identified from among the fallout of the 2011 Fukushima Daiichi nuclear disaster in Japan. An international team of researchers has characterized the particles using nuclear forensic techniques and their results shine further light on the nature of the accident while helping to inform clean-up and decommissioning efforts.
This year marks the tenth anniversary of the Fukushima Daiichi disaster, which occurred as a result of a powerful earthquake that struck off Japan’s east coast, generating a tsunami that was some 14 m high when it hit the nearby shoreline. Breaching sea defences, the water from the wave shut down the emergency generators that were cooling the reactor cores. The result was a series of nuclear meltdowns and hydrogen explosions that released a large amount of radioactive material into the surrounding environment — including microparticles rich in radioactive caesium that reached as far as Tokyo, 225 km away.
Recent studies have revealed that the fall-out from reactor unit 1 also included larger caesium-bearing particles, each greater than 300 μm in diameter, which have higher activities, of the order of 10⁵ Bq per particle. These particles were found to have been deposited in a narrow zone stretching around 8 km north-northwest from the reactor site.
Surface soil samples
In their study, chemist and environmental scientist Satoshi Utsunomiya of Japan’s Kyushu University and colleagues have analyzed 31 of these particles, which were collected from surface soil taken from roadsides in radiation hotspots.
“[We] discovered a new type of radioactive particle 3.9 km north-northwest of the Fukushima Daiichi Nuclear Power Plant, which has the highest caesium-134 and caesium-137 activity yet documented in Fukushima, 10⁵–10⁶ Bq per particle,” Utsunomiya says.
Alongside the record-breaking radioactivity seen in two of the particles (6.1×10⁵ and 2.5×10⁶ Bq, after correction to the date of the accident), the team also found that they had characteristic compositions and textures that differed from those previously seen in the reactor unit 1 fall-out.
Reactor building materials
A combination of techniques, including synchrotron-based nano-focus X-ray analysis and transmission electron microscopy, revealed that one of the particles is an aggregate of smaller silicate nanoparticles, each with a glass-like structure. These are thought to be remnants of reactor building materials that were damaged in the explosion and then picked up caesium that had been volatilized from the reactor fuel.
The other particle had a glassy carbon core and a surface peppered with other microparticles of various compositions. These are thought to provide a forensic snapshot of the debris that was airborne within the reactor unit 1 building at the moment of the hydrogen explosion, and of the physico-chemical processes it was subjected to.
“Owing to their large size, the health effects of the new particles are likely limited to external radiation hazards during static contact with skin,” explained Utsunomiya — with the two record-breaking particles thought too large to be inhaled into the respiratory tract.
Impact on wildlife
However, the researchers note that further work is needed to determine the impact on the wildlife living around the Fukushima Daiichi facility — such as filter-feeding marine molluscs, which have previously been found susceptible to DNA damage and necrosis on exposure to radioactive particles.
“The half-life of caesium-137 is around 30 years,” Utsunomiya continued, adding: “So, the activity in the newly found highly radioactive particles has not yet decayed significantly. As such, they will remain [radioactive] in the environment for many decades to come, and this type of particle could occasionally still be found in radiation hot spots.”
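A quick back-of-the-envelope calculation, using the roughly 30-year half-life quoted above (and not part of the study itself), shows why the decay over the decade since the accident is modest:

```python
import math

# Fraction of caesium-137 activity remaining a given time after the accident,
# assuming the ~30-year half-life quoted above.
half_life_years = 30.0
elapsed_years = 10.0      # roughly the time since March 2011

remaining = math.exp(-math.log(2) * elapsed_years / half_life_years)
print(f"Cs-137 activity remaining after {elapsed_years:.0f} years: {remaining:.0%}")
```

About 80% of the original caesium-137 activity therefore remains after ten years, consistent with Utsunomiya’s point that such particles will persist in the environment for decades.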
Nuclear material corrosion expert Claire Corkhill of the University of Sheffield – who was not involved in the study – says that the team have offered new insights into the events that unfolded during the accident. “Although the two particles selected [for analysis] were small, a mighty amount of chemical information was yielded,” she said, noting that some of the boron isotopes the researchers identified could only have come from the nuclear control rods damaged in the accident.
Ongoing clean-up
“This work is important to the ongoing clean-up at Fukushima, not only to the decontamination of the local area, but in defining a baseline understanding of radioactive contamination surrounding the power plant, to ensure that any materials accidentally released during the fuel retrieval operations can be quickly identified and removed,” she adds.
With this study complete, the researchers are now using the particles to better understand the conditions involved in the reactor meltdown, alongside looking to quantify the distribution of this fallout across Fukushima, with a focus on identifying resulting radiation hot spots.
“If we can find and remove these particles, we can efficiently lower the radiation dose in the local environment,” Utsunomiya concluded.
A new electrostatic de-icing technique that exploits the natural charge separations in growing frost crystals has been developed by Jonathan Boreyko and colleagues at Virginia Tech in the US. The team used high-speed cameras to show how ice particles are broken off and propelled away from chilled surfaces when liquid water is suspended above them. Their discoveries could significantly improve our ability to remove stubborn frost layers from surfaces including aircraft and car windscreens.
Spontaneous charge separations in growing ice crystals have been studied for decades. For atmospheric scientists, the effect is key to understanding how clouds become charged during thunderstorms. However, one related effect, characteristic of frost formation, has remained largely unexplored until now.
When surfaces including glass and metal are chilled in humid air, ice crystals with branching, tree-like structures called dendrites can form. As these crystals grow, their upper branches will gradually become warmer, while their bases will remain cold. This generates higher concentrations of thermally activated ions – hydronium and hydroxide – in the branches, and the differing mobilities of these ions leave the warmer branches with an excess of negative charge.
Jumping the gap
Boreyko’s team explored the idea that this charging effect could be exploited to develop better techniques for de-icing frosty surfaces. In their experiment, they prepared layers of dendrites on both glass and metal surfaces and suspended thin films of liquid water a few millimetres above them. Since water molecules are strongly polar, they become aligned in the presence of the negatively charged dendrite branches. This generates an electrostatic attraction between the branches and the liquid water, causing branches to dramatically break off and jump across the gap to stick to the water (see video).
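The attraction can be understood with the textbook expression for the force on an electric dipole in a non-uniform field (a general relation, not the specific model used in the team’s simulations):

$$ \mathbf{F} \;=\; (\mathbf{p} \cdot \nabla)\,\mathbf{E}, $$

where $\mathbf{p}$ is the dipole moment of an aligned water molecule and $\mathbf{E}$ is the field produced by the charged branches. Because the field strengthens towards the branches, the aligned dipoles are pulled towards the ice and, by reaction, the lightweight branches are pulled towards the water.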
Since no airflow or applied voltage is involved in the process, Boreyko and colleagues could non-invasively capture these jumps using a high-speed camera and compare their observations with numerical simulations. Their images showed strong agreement with the simulations, enabling them to precisely measure the electrostatic forces involved and to determine their dependence on the temperature gradient across the dendrites.
The results could now provide fresh insights for atmospheric scientists studying how growing ice crystals drive electrification in thunderclouds. In addition, the research could lead to practical new electrostatic de-icing techniques, suitable for removing built-up frost from surfaces including aircraft, air conditioning units, and car windscreens on cold winter mornings.
Boreyko’s team now plan to scale up their technique in future research. By replacing the water films with high-voltage, actively charged electrodes, they could cause larger masses of ice, including entire dendrites, to be propelled away from surfaces.