
A superconducting surprise comes of age: the evolution and future of iron-based superconductors

In ancient history, the Bronze Age was followed by the Iron Age, as humans learned to make tools that were harder and more durable than those their ancestors had crafted from copper-based alloys. So when a new family of superconducting materials based on iron, rather than copper, was reported in early 2008, headline-writers were quick to announce the beginning of the “Iron Age” of superconductors.

The discovery was certainly a surprise. Iron-based materials are usually associated with magnetism, not superconductivity (the phenomenon where electrical current flows without resistance), although elemental iron can, under high pressure, become superconducting at very low temperatures. In addition, the chemical properties of the new iron-based superconductors (Fe SCs) were very different from those of the superconductors that contain copper, which are known collectively as cuprates. To physicists, this suggested that the mechanism behind superconductivity in Fe SCs must be different from the mechanism that produces superconducting behaviour in other materials.

Now, seven years later, we may be in a position to ask how Fe SCs are developing in comparison to the older members of the superconducting family – particularly the cuprates, which are sometimes called “high-temperature superconductors” because they become superconducting when cooled below a transition temperature, Tc, that in some cases exceeds 90 K. This is an important asset because their Tc is above the boiling temperature of liquid nitrogen, 77 K, which means that cuprates can be made to superconduct in systems that use liquid nitrogen rather than more expensive liquid helium as a coolant. Indeed, cuprate superconductors already have several applications (including superconducting quantum interference devices, or SQUIDs, which can detect extremely weak magnetic fields) and they are beginning to be applied on larger scales as well – for example in the superconducting current leads used in CERN’s Large Hadron Collider. The question we want to ask is: how will Fe SCs stack up against their increasingly useful predecessors?

Physics and chemistry together

Superconductivity is so fascinating and puzzling a phenomenon that it took almost 50 years from its discovery in the early 20th century before a theory explaining its mechanism was formulated. This theory, which is called “BCS” after its creators John Bardeen, Leon Cooper and Robert Schrieffer, is now firmly established, having celebrated its half-centenary a few years ago. The discovery of high-temperature cuprate superconductivity in 1986 was a kind of second revolution in the history of superconductivity, and one lesson we have learned from it is that physics and chemistry have to be “married”. In other words, quantum chemistry, like it or not, lies at the heart of the high-Tc cuprates’ crystal and electronic structures. So if we want to understand the mechanism for superconductivity in these materials, or to explore the design of new ones, we need to understand their chemistry.

1 Iron versus copper

Figure 1 Structures of the different superconductors

Comparing the (a) typical crystal structure, (b) Fermi surface and (c) superconducting gap in momentum space of the iron-based (left) and cuprate (right) superconductors.

In the cuprates, superconducting currents, or “supercurrents”, flow along the copper-oxide planar crystal structures shown in figure 1a. Fe SCs also have a planar structure, but in their case, the key, current-carrying plane comprises compounds of iron and, typically, elements found in column 15 of the periodic table, such as arsenic. Elements in this column are called “pnictogens”, which is why Fe SCs are sometimes called iron-pnictides. While cuprates have some chemical variability, the variety seen in the Fe SCs is even greater. In the former, there are basically only two “families” of compounds, represented by La2CuO4 and YBa2Cu3O7. In these compounds, carriers of supercurrents can be prepared by, for example, reducing the number of oxygen atoms (a process called doping). In the Fe SCs, by comparison, there are several different families. The first material discovered (by one of us, HH) was a four-element compound, LaFeAsO, which is called “1111” in the jargon. Since then, it has been joined by several other families, from “122” down to “11” (figure 2).

2 The iron families

Figure 2 Crystal structures of iron-based superconductors

Crystal structures of four “families” of iron-based superconductors, showing the positions of iron atoms (brown), pnictogen atoms (green, labelled Pn) and chalcogen atoms (green, labelled Ch) in each family. The positions of alkali atoms (A), alkaline-earth atoms (Ae) and other elements present in the “111”, “122” and “1111” families are also shown.

The chemical diversity of the Fe SCs matters because their Tc and the way in which superconductivity emerges both depend not only on which family a superconductor belongs to, but also on the chemical composition within each family. This may make the chemistry sound dauntingly complex, but those of us who study superconductivity have now had more than a quarter of a century to get used to complicated compounds such as Sr14–xCaxCu24O41 (which is known, jokingly, as the “telephone directory cuprate”). Thanks to its peculiar crystal structure, this cuprate potentially harbours some interesting physics. So the lesson is: do not be afraid of chemistry.

Another lesson that applies to both iron-based and cuprate superconductors is that it pays to look out for unexpected things. In fact, when HH discovered the first iron-pnictide, he was not actually aiming to find new superconductors at all. Instead, in 2005 his group at the Tokyo Institute of Technology was exploring magnetic semiconductors to build on their earlier discovery of transparent p-type conductors in compounds with the chemical formula LaCuChO. In this compound, the symbol “Ch” is either sulphur or selenium, and copper is in its +1 oxidation state. HH then moved on to a slightly different system, LaTMPnO, where “TM” is a transition metal with a partially filled 3d electron shell (such as iron) and “Pn” is a pnictogen (either phosphorus or arsenic). This system seemed interesting because it has the same crystal structure as LaCuChO, yet the transition metal in it is in a +2 oxidation state. This implies a tendency towards magnetism, where each transition-metal atom has an open-shell electronic configuration and the total electron spin tends to be non-zero.

The surprise came in 2006 when not only magnetism but also superconductivity emerged in LaFePO, though with Tc = 4 K (as discovered by HH in collaboration with Yoichi Kamihara and colleagues), and in LaNiPO, where Tc = 3 K. The big breakthrough came in early 2008 when it was reported that an arsenic compound doped with fluorine, LaFeAsO1–xFx, was found to have a significantly higher Tc of 26 K. Soon afterwards, a group of researchers in China found that Tc in this compound can be raised to 55 K when lanthanum is replaced with samarium.

Theorists catch up

Right after the discovery of the “1111” superconductivity, one of us (HA), in collaboration with Kazuhiko Kuroki and others, constructed a theory to explain how superconductivity operates in Fe SCs. Another group, including Igor Mazin, David Singh and colleagues, independently developed a similar theory at the same time. We started with an observation from the periodic table of the elements. Each transition-metal atom has electrons in its d-orbitals, which have an angular momentum of 2ℏ. As you move along the rows in this part of the periodic table, the number of d electrons increases up to a maximum of 10 (recall that there are five d-orbitals and that each can hold up to two electrons, one spin up and one spin down, as dictated by the Pauli exclusion principle). Copper sits at the far right of the transition-metal part of the periodic table; in the cuprates, where copper is in the +2 oxidation state, it has nine electrons in its five 3d orbitals. This leaves one orbital partially filled – and thus only one of its d-orbitals is chemically active.

For copper atoms in the cuprate superconductors, it is this single, partially filled orbital that carries the supercurrent. By contrast, iron sits around the middle of the transition-metal series and thus has more than one chemically active d-orbital (typically three). This implies that iron has a very “open shell” configuration, with only about half of its five d-orbitals filled. Hence, the supercurrent in iron-based superconductors must be carried by electrons in multiple d-orbitals.
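
For readers who like to see the counting, the standard electron configurations make these statements concrete:

Cu: [Ar] 3d10 4s1, so Cu2+ in the cuprates is 3d9 – nine d electrons, or equivalently a single hole, in the five 3d orbitals
Fe: [Ar] 3d6 4s2, so Fe2+ in the iron pnictides is 3d6 – six d electrons shared among the five 3d orbitals, several of which are therefore only partly filled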

To understand what these superconducting electrons are doing (and thus better understand how the cuprates and Fe SCs differ from each other), physicists employ a concept called a Fermi surface. Quantum mechanically speaking, the electrons in a crystal are described by wavefunctions, with up to two electrons for each wavefunction (again due to Pauli’s exclusion principle). These states are filled up to a certain highest energy, called the Fermi energy. The contour of constant energy at the Fermi energy, drawn in momentum space, forms what is called a Fermi surface, and the shape of this surface can tell us a lot about superconducting behaviour. In cuprate superconductors, for example, the Fermi surface is very simple (and simply connected as well), thanks to the single-orbital character of their electron configuration (figure 1b). For the Fe SCs, though, the Fermi surface is a composite of multiple surfaces, owing to their multi-orbital character. Consequently, the Fermi surface of iron-based superconductors comprises multiple “pockets”.
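
As a concrete, deliberately over-simplified illustration of how a Fermi surface is read off from a band structure, here is a minimal sketch using a toy one-band tight-binding model on a square lattice – an assumption made purely for illustration, not the band structure of any real cuprate or pnictide. In the Fe SCs, several such bands cross the Fermi energy at once, which is what breaks the surface up into separate pockets.

import numpy as np

# Toy one-band dispersion on a square lattice, measured from the Fermi energy
# (energies in units of the hopping amplitude; purely illustrative values)
def energy(kx, ky, mu=-0.5):
    return -2.0 * (np.cos(kx) + np.cos(ky)) - mu

# Sample the Brillouin zone and pick out k-points lying close to the Fermi energy (E = 0);
# together these points trace out the single, simply connected Fermi surface of the toy model
k = np.linspace(-np.pi, np.pi, 401)
kx, ky = np.meshgrid(k, k)
on_surface = np.abs(energy(kx, ky)) < 0.02
print("k-points close to the Fermi surface:", int(on_surface.sum()))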

If you open any textbook of condensed-matter physics, you will read that superconductivity (as explained by BCS theory) arises when electrons around the Fermi energy pair up. The formation of these “Cooper pairs” is possible because a coupling between electrons and phonons (the quantum-mechanical version of vibrations of a crystal lattice) produces a slight attraction between electrons, on top of the repulsion they experience due to having the same electric charge. The superconducting BCS state, composed of Cooper pairs, has a lower energy than unpaired electrons, and this energy gain opens a gap (called the BCS gap) in the electronic spectrum around the Fermi energy. More importantly, the BCS state harbours a spontaneous breaking of a symmetry (gauge symmetry, to be precise), which causes current to flow without resistance.
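
In its simplest, weak-coupling form, BCS theory makes this quantitative. The standard textbook results – quoted here only to set the scale, not as a statement about any particular material – are

kB Tc ≈ 1.13 ℏωD exp(–1/λ)
2Δ(0) ≈ 3.5 kB Tc

where ωD is a typical phonon (Debye) frequency, λ is the dimensionless electron–phonon coupling strength and Δ(0) is the size of the BCS gap at zero temperature. Because Tc depends exponentially on λ, conventional phonon-mediated superconductors rarely get beyond a few tens of kelvin.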

That explanation works well for conventional superconductors, but the discovery of high-Tc superconductivity in the cuprates made physicists realize that superconductivity can also arise from electron–electron repulsion itself, which is strong in transition-metal compounds. In this case, the pairing is mediated by fluctuations in the spin structure rather than by lattice vibrations. Another essential difference is that, while pairs of electrons in conventional superconductors have a relative angular momentum of zero (which is dubbed an “s-wave pairing”), in the cuprates we have pairs of electrons circulating each other with a non-zero angular momentum of 2ℏ (a “d-wave pairing”). Under these circumstances, the BCS gap – which is usually positive everywhere – changes its sign. Namely, if you imagine walking around the Fermi surface of a cuprate superconductor, you will see the gap change sign every time you cross one of its nodes – four such points for a d-wave gap (figure 1c).

This is interesting, but the BCS gap has to pass through zero at these sign-changing points, or nodes, in a continuous fashion, which reduces the overall magnitude of the gap and tends to lower Tc. In the cuprates we cannot evade this: the nodes have to exist, because the nodal lines of the d-wave gap are bound to intersect the simply connected Fermi surface somewhere. For Fermi surfaces that are multiply connected, on the other hand, the sign-changing lines can lie in between the pockets, giving us one pocket with an entirely positive BCS gap while another pocket has an entirely negative gap. This clever pairing is called sign-reversing s-wave, or s±, and it seems to be what happens in Fe SCs of the “1111” type. In fact, there is now a body of experimental results to support this theory.
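
To make the geometry concrete, here is a minimal numerical sketch using the toy gap functions commonly used to illustrate the two cases – simplified forms assumed for illustration, not the actual gaps of any specific material: Δd(k) ∝ cos kx – cos ky for the cuprate d-wave, and Δs±(k) ∝ cos kx · cos ky for the sign-reversing s-wave.

import numpy as np

# Toy gap functions (illustrative only): d-wave for the cuprates, s+- for the "1111" Fe SCs
def gap_d(kx, ky):
    return np.cos(kx) - np.cos(ky)

def gap_spm(kx, ky):
    return np.cos(kx) * np.cos(ky)

# d-wave: opposite signs on the same, simply connected Fermi surface, so nodes are unavoidable
print(gap_d(np.pi, 0.0), gap_d(0.0, np.pi), gap_d(np.pi/2, np.pi/2))  # -2, +2 and (numerically) 0: a node

# s+-: one sign on the pocket near k = (0, 0) and the opposite sign on the pockets
# near (pi, 0) and (0, pi), with no node forced onto either pocket
print(gap_spm(0.0, 0.0), gap_spm(np.pi, 0.0), gap_spm(0.0, np.pi))    # +1, -1, -1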

3 Uemura plot

Figure 3 Graph of transition temperature for various superconducting materials plotted against their Fermi temperature

Transition temperature Tc for various superconducting materials plotted against their Fermi temperature TF (estimated from superfluid densities) on a double-logarithmic scale. Iron-based superconductors (Fe SCs) are here represented by the compound BaFe2(As1–xPx)2 as its phosphorus content x increases (red circles) or decreases (red squares) from x = 0.30. The cuprates are represented by green diamonds, green squares and green triangles (showing three different families). Both Fe SCs and cuprates are found near the top perimeter of the plot. Also plotted are other classes of unconventional superconductors, including organic superconductors (purple triangles), a cobalt compound (green cross), the so-called heavy-fermion compounds that contain uranium atoms (black stars) and compounds containing carbon and alkali atoms (blue crosses), as well as conventional low-Tc superconductors such as elemental Nb (inverted blue triangles) for comparison. We have also included, as guides, a blue line representing TF and a dashed line representing the transition temperature, TB, for the Bose–Einstein condensation that a system with a given TF would have if the Cooper pairs were pure bosons.

Since Tc is in general governed by the underlying electron energy scale, it is useful to look not only at the absolute value of Tc, but also at the relationship between Tc and the Fermi temperature TF (which is just the Fermi energy translated into temperature). If we follow Yasutomo Uemura and plot the experimental Tc of known superconductors against TF, we can see that Tc basically scales with TF (figure 3). We can also see that the Fe SCs sit around the topmost perimeter of the plot – an indication that, in this sense, Fe SCs really do have high Tc.
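
Where the dashed TB line in figure 3 sits can be estimated with a crude back-of-envelope model – our sketch, assuming an ideal three-dimensional gas in which the Cooper pairs are treated as free bosons of mass 2m and density n/2:

import numpy as np

# Ideal-gas estimate of the Bose-Einstein condensation temperature of Cooper pairs,
# compared with the Fermi temperature of the underlying electrons. The carrier density n
# and the free-electron mass are illustrative assumptions, not values for any particular
# material; the ratio TB/TF is independent of both.
hbar, kB, m = 1.0546e-34, 1.3807e-23, 9.109e-31   # SI units
n = 1e27                                          # assumed carrier density in m^-3
TF = hbar**2 * (3 * np.pi**2 * n)**(2/3) / (2 * m * kB)
TB = (2 * np.pi * hbar**2 / (2 * m * kB)) * (n / 2 / 2.612)**(2/3)
print(TF, TB, TB / TF)   # TB/TF comes out at about 0.22, i.e. TB is roughly TF/4.5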

Complex crystals

So far, we have assumed that cuprate superconductors and Fe SCs exist as planar crystals, with Fermi surfaces calculated from the x and y components of the momentum of electrons in the crystals. Indeed, several of the newer superconductors (including cuprates and Fe SCs, but also cobaltate, hafnium and zirconium compounds) tend to have layered crystal structures. In fact, there have been some general theoretical suggestions (from Philippe Monthoux and Ryotaro Arita with their respective collaborators) that, when superconductivity arises from electron–electron repulsion, layered systems give rise to higher Tc than ordinary, non-layered materials.

4 Chemical diversity in doping

Figure 4 phase diagram

This phase diagram of materials based on the superconductor BaFe2As2 illustrates how superconducting regions (cross-hatched) emerge as the material’s chemical make-up is varied. The purple region shows variations made by replacing some of the iron with cobalt (electron doping), with x indicating the amount of cobalt present. The green region shows how replacing some barium atoms with potassium (hole doping) affects Tc. Finally, the orange region indicates how Tc can be altered by isovalent substitution, in which arsenic atoms are replaced with phosphorus. Experimental values of Tc are shown as red dots; different superconducting regions may have different nodal structures in the pairing. The antiferromagnetic transition temperature, TN, is shown by blue dots.

For Fe SCs, though, there is the additional complication that they come in different “flavours”, with the different crystal structures we described earlier, and the type of electron pairing that forms depends on their chemical composition. Figure 4 shows that modifying “122” compounds with hole doping, electron doping or isovalent substitution (in which arsenic atoms are replaced with another element from the same group) produces a variety of phases, including “nodeless” pairing (each pocket in the Fermi surface is fully gapped) and “nodal” pairing (sign changes occur within a Fermi surface). Both theorists and experimentalists are trying to understand such changes in terms of the intricate Fermi surfaces arising from multi-orbital physics; the correlation of Tc with the iron–pnictogen bond angle and the pnictogen height has also been interpreted in such terms. Not only is more than one iron orbital relevant, but the way in which these orbitals are involved can also fluctuate in time and space, and the crystal structure itself changes slightly as we cool the sample. The effects of these phenomena on the physical properties of Fe SCs – for example a material-dependent realization of s++-wave pairing, in which the pockets have the same sign of the BCS gap – are now being actively examined.

One recent breakthrough occurred when HH and co-workers made an iron-based superconductor with a lot of hydrogen doping. This modification produced a “double-dome” pattern in the behaviour of Tc, which Kuroki and colleagues think is due to subtle changes in the electronic structure in the multi-orbital system.

Another intriguing possibility, again related to the multi-orbital character of Fe SCs, is that they could be used as a “playground” for investigating violations of time-reversal symmetry. Normally, transitions between different phases of matter look the same if we take a “video” of them happening and play it back in reverse (although there are important exceptions in, for example, magnets, where the aligned spins will point in the opposite direction when time is reversed). Superconductors usually obey this time-reversal symmetry, but in principle, time-reversal broken versions are possible. So far, this time-reversal broken superconductivity is rare, occurring in, for example, a compound of ruthenium (Sr2RuO4), but the subtle balance and competition arising from multiple orbitals and pocketed Fermi surfaces in Fe SCs suggest that it might also be achieved there.

As for Tc, its maximum value in Fe SCs is still only moderate (< 77 K) when compared with the cuprates. The discovery of new materials with higher Tc would be highly desirable both for fundamental theory and for applications (see box “Putting iron-based superconductors to work”). A distinct feature of the iron-based superconductors, though, is the large diversity in their parent materials (they typically contain two other elements in addition to iron), which gives materials scientists a lot to play around with. Fe SCs also seem to respond very sensitively to modifications caused by other factors such as pressure and the substrate on which they are made. For instance, the “11” compound FeSe has the simplest crystal structure in the iron-based families, and a relatively low (8 K) Tc at atmospheric pressure, but this is drastically enhanced to 37 K under a high pressure of 9 GPa. Another avenue for increasing Tc is to use epitaxy: when FeSe is grown as an atomic monolayer deposited on SrTiO3:Nb substrates, studies using scanning tunnelling spectroscopy have found that it has an energy gap of about 20 meV. If this gap originates from superconductivity, then its Tc would lie above 77 K, although this will have to be confirmed by measurement of the Meissner effect – the expulsion of magnetic field that is a sure indication of superconductivity.
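
A rough way to see why a 20 meV gap points to such a high Tc is to invert the BCS gap-to-Tc relation quoted earlier (a back-of-envelope estimate only; a monolayer film need not sit in the weak-coupling limit, and the coupling ratios below are assumed for illustration):

kB = 0.08617          # Boltzmann constant in meV per kelvin
gap = 20.0            # measured energy gap in meV, from the tunnelling data quoted above
print(2 * gap / (3.5 * kB))   # weak-coupling BCS (2*gap = 3.5*kB*Tc) gives Tc of roughly 130 K
print(2 * gap / (6.0 * kB))   # even an assumed strong-coupling ratio of 6 still gives about 77 K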

Putting iron-based superconductors to work

Hideo Hosono, Yoichi Kamihara, Hideo Aoki and Kazuhiko Kuroki

In the seven years that have passed since their discovery, some applications of iron-based superconductors (Fe SCs) have already been demonstrated. Their fabrication (via the deposition of thin films on top of a crystal substrate) has been extensively studied, especially for BaFe2As2, a material with Tc = 25 K. Researchers have also succeeded in using Fe SCs to fabricate Josephson junctions (two superconductors coupled by a weak link, such as a very thin layer of non-superconducting material) and superconducting quantum interference devices (SQUIDs).

Perhaps the most important application of superconductivity, in general, is the generation of strong magnetic fields with superconducting wires. In this application, it is important that the supercurrent should not depend much on the direction of current flow. In addition, the superconducting material must be able to tolerate very intense currents and magnetic fields; superconductivity is known to be destroyed by strong currents above a critical current density, Jc, and by magnetic fields above an upper critical field, Hc2. It is therefore imperative to maximize their values as well, not just Tc on its own.

Early studies found that Fe SCs have a high Hc2, and also that their crystal structure in the superconducting phase is favourable for wire applications. More specifically, their crystals look the same after being rotated by 90° (tetragonal symmetry), so the crystals that make up a wire need only be aligned along two axes. As for the maximum critical current density, this has now reached Jc = 0.5 × 10⁶ A/cm² for thin films at 4 K in a magnetic field of 10 T, thanks to improved crystal-growth techniques. Jc has recently been increased in BaFe2(As1–xPx)2 (maximum Tc = 31 K) to 1.1 × 10⁶ A/cm² (or 7 × 10⁶ A/cm² at 4 K in zero magnetic field).
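
To get a feel for what such current densities mean in practice, here is some simple arithmetic with an assumed, purely illustrative conductor cross-section:

Jc = 0.5e6            # critical current density in A/cm^2, the thin-film value quoted above
area_cm2 = 1.0e-2     # assumed cross-section of 1 mm^2, expressed in cm^2
print(Jc * area_cm2)  # about 5000 A could flow through a 1 mm^2 superconducting cross-section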

If Fe SCs are to be used as wires, we need to ensure that Jc is not easily degraded by misalignments between adjacent crystallites, which can be characterized by the critical grain-boundary angle beyond which Jc starts to drop rapidly. This critical angle has been determined using epitaxial thin films deposited on bicrystal substrates (pairs of single crystals artificially joined at various tilting angles), and it turns out to be 9–10° – almost twice that of the cuprates. This finding is encouraging for wire fabrication: superconducting wires are polycrystalline, and a tolerance of larger tilting angles between neighbouring crystallites makes it easier to fabricate wires with high Jc.

The maximum Jc has also improved steadily in iron-based superconducting wires fabricated by the conventional “powder-in-tube” (PIT) method, in which a metal pipe filled with superconducting powder is mechanically shaped into a wire. Intense efforts by research groups in the US, Japan and China during the last two years have brought the maximum value above the level required for practical applications, which is 10⁵ A/cm² at 4 K and 10 T. The “122” compounds, in particular, occupy a unique region of applicability in the temperature–magnetic field diagram; the fact that they (like conventional metallic superconductors, but unlike the cuprates) can be fabricated by the PIT method gives them an advantage as well. These compounds carry supercurrents that depend little on the direction of the current, which also favours applications. We can thus expect high-field applications below 30 K. All of these achievements have been made in flat wires, so one of the next technical hurdles will be to realize them in round wires.

Outlook

In 2011 Physics World published a special issue reviewing the first 100 years of superconductivity. Iron-based superconductors have been around for only a small part of this long history, and there is still a lot to be done. The challenge for both theorists and experimentalists now is to make the best of the versatility of these multi-orbital materials, with all the many “actors” (spin, charge, orbital and lattice degrees of freedom) that influence their properties. Applications are coming into sight, although some challenges will have to be overcome before these materials can prove their worth outside the laboratory; above all, their Tc needs to be higher.

In a broader context, Fe SCs are useful for the growing field of “functional materials design”. We can, for instance, ask ourselves if it is possible to replace iron with other elements. We can explore how superconductors made from transition-metal compounds compare to light-element ones such as carbon-based superconductors. We can also try to apply the hydrogen doping mentioned above to entirely different classes of materials. Developments in iron-based superconductors may even give us new ways to exploit the feedback between solid-state systems and cold atoms trapped in optical lattices, which are being established as a “quantum simulator” of the former. The next few years of the “Iron Age” should reveal the answers.

Quantum-inspired art

“Other artists use oil paints or watercolour as media,” said Eric Heller, a professor of chemistry and physics at Harvard University who creates digital art based on his research. “I use quantum phenomena like resonance and branch flow.” He pointed to Random Sphere I – a painting of what looked like a golden globe covered by a dense tangle of darker lines, resembling the turbulent surface of the Sun. “Here my medium is a random wave.”

Heller and I were at the opening in December of “Art and the quantum moment” – a show held in the art gallery at the Simons Center for Geometry and Physics at Stony Brook University in New York. Curated by Lorraine Walsh, it was inspired by a book I co-authored with Alfred Scharff Goldhaber called The Quantum Moment: How Planck, Bohr, Einstein, and Heisenberg Taught Us to Love Uncertainty (Norton 2014). The book describes how and why the imagery and language of quantum mechanics went mainstream and came to influence art and culture.

Although Heller and the other two artists in the show – Frédérique Swist and Jacqueline Thomas – are not mentioned in the book, they each draw inspiration from quantum mechanics in different ways.

Eric Heller

Heller first got interested in photography when he was a graduate student at Harvard (1968–1973). He then started to paint as a professor at the University of California, Los Angeles, before mastering computer graphics while on sabbatical at Los Alamos National Laboratory in 1981. Heller’s artistic activity really took off, however, when he moved to the University of Washington in 1984 and began to graphically reimagine some of the phenomena he was examining (see www.ericjhellergallery.com).

“I was studying branch flow,” he told me, “or what happens when you launch a series of electrons from a single point at slightly different angles and let them flow over the equivalent of a bumpy floor.” Heller recalled that he started to play with the patterns graphically. He pointed to Transport XIII, which shows a dense set of differently coloured electron paths branching out from a point, juxtaposed with a random quantum wave on the surface of a sphere. The eerie result looks like a silver moon rising over a tangle of multicoloured threads.

We walked past Heller’s eight other works in the show. These, too, are graphical reimaginings of quantum phenomena he has studied. “I choose an image that I’ve found while playing around with the media,” he explained, “then finish it in Photoshop without making significant structural transformations to the original images.” However, Heller admits to being quite selective. “I’m very choosy. Some of these images would not work for me at all if the colour changed by 1%.” All his images nevertheless have science behind them. “I’m not a mathematical tourist,” he insists.

When I asked him what that meant, Heller drew a comparison. “It’s the difference,” he said, “between a tourist with a good camera taking a picture of some pretty piece of landscape and hanging it on the wall – and a geologist who is an expert in the landscape who recognizes that a particular formation illustrates a fundamental part of the region’s geology and is also pretty, taking a picture of that, and hanging it on the wall.”

Frédérique Swist

Frédérique Swist, senior designer at IOP Publishing, which publishes Physics World, contributed two works to the exhibit, and also came for the opening. Her work (fredswist.co.uk/gallery.html) often illustrates properties of wave interference, resonance, oscillations, and excitation functions. Swist says she often begins with technical graphs, then – unlike Heller – significantly transposes and reconstructs them.

One of her works on display was Excitable Waves (2010), which reinterprets the distribution of excitation functions in a study of biological cells adhering to physical surfaces. It was inspired by a figure in a paper published in the journal Physical Biology in 2009. “As I worked on it and added different lines and levels of colour,” Swist told me, “it acquired more layering and depth, and grew to become an image in its own right.”

Good Vibrations – Swist’s other piece in the show – was inspired by a project exploring the theme of affirmation and positivity that she was asked to join. In it Swist drew on her visual vocabulary of graphs and diagrams to produce a startling set of optical effects that she called a “visual rhythm”. Although depicted on a flat plane, the forms create a 3D effect of several superimposed planes that seem to vibrate, even expand and contract.

Jacqueline Thomas

A plinth in the gallery displayed two handmade, limited-edition books by British artist-designer Jacqueline Thomas from the Stanley Picker Gallery at Kingston University (www.jacquelinethomasbooks.co.uk). Equations (2005) and Constants (2006) consist of hand-sewn and hand-bound pages on which Thomas has created “digital collages”, fitting graphics with equations and spelled-out constants together in a puzzle-like fashion until the result looked right to her.

“I am fascinated by the knowledge that complex explanations can be simplified into a series of numbers and symbols,” Thomas told me. “As someone who recognizes the significance of the equations and constants, but without any real understanding, I can only respond to their beauty in visual terms. However, they were carefully selected by someone who understands and refers to the ‘beauty of mathematics’ – my astrophysicist daughter.”

The critical point

Classical mechanics has long had an impact on the graphic arts, an influence that persists to this day in the form of maps of self-similar systems associated with a relatively recent branch of mechanics, chaos theory. It is not surprising that the weirdness of quantum mechanics is also influencing graphic artists – be it Heller’s reimaginings of quantum phenomena, Swist’s transformations of representations of incredibly precise and reproducible patterns, or Thomas’s collages based on the imagery of the language in which quantum theory is expressed.

Big data offers biomedical insights

By Susan Curtis in Baltimore, US

At the 59th annual meeting of the Biophysical Society today, Rommie Amaro of the University of California, San Diego, highlighted the power of computational methods to speed up the discovery of new drugs to treat diseases as diverse as flu and cancer. Amaro focused on a recent project conducted while she was at the University of California, Irvine, to identify compounds that could play a vital role in future anti-cancer drugs by helping to reactivate a molecule called p53 that is known to inhibit the formation of cancer cells.


Geophysicists blast their way to the bottom of tectonic plates

A thin, low-viscosity layer at the base of a tectonic plate has been imaged at a depth of 100 km beneath North Island, New Zealand, by an international team of researchers. The high-resolution seismic imaging, carried out using underground dynamite explosions, has revealed previously unknown information about what happens underneath tectonic plates and may help to explain how the plates are able to slide.

While the movement of tectonic plates across the Earth’s surface has long been studied, some underlying features of the process are not understood. Answering the remaining questions requires an improved understanding of the lithosphere–asthenosphere boundary – the interface where the rigid plate meets the weaker, more mobile mantle below. Previous approaches to studying these depths have been based on recording seismic waves from distant earthquakes that have been reflected from the boundary back up to the surface. As the waves travel they split into longitudinal and transverse waves, which travel at different speeds. The relative arrival times reveal the depth of reflective boundaries, while the shape of the converted waveforms offers information on each boundary’s sharpness. With a wavelength of 10–40 km, however, such waves offer only low-resolution imaging.

Slipping plates

Deploying the seismographs that will image the tectonic plate

“The idea that the Earth’s surface consists of a mosaic of moving plates is a well-established scientific paradigm, but it has never been clear what actually moves the plates around,” says Tim Stern, a geophysicist at Victoria University of Wellington. To obtain a clearer picture, Stern and colleagues used man-made, higher-frequency seismic waves, with a wavelength of around 0.5 km, generated by setting off explosions at the bottom of boreholes 50 m deep. A similar technique is commonly used in petroleum exploration.
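
The resolution gain follows directly from the relation wavelength = wave speed/frequency. A rough illustration (the wave speed and frequencies below are typical assumed values, not numbers from this study):

v = 8.0                    # assumed P-wave speed in km/s at upper-mantle depths
for f in [0.2, 16.0]:      # assumed frequencies: roughly earthquake-band versus explosion-band
    print(f, "Hz ->", v / f, "km")   # about 40 km at 0.2 Hz, about 0.5 km at 16 Hz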

To measure the reflected waves, the team laid out 877 portable seismographs along an 85 km line at the southern end of New Zealand’s North Island, in a region where the 120-million-year-old Pacific Plate and Hikurangi Plateau are subducting under continental New Zealand. This region was chosen because it meets a combination of useful criteria – an oceanic plate, with a shallow-enough dip to be seismically imaged, under a continental landmass on which sizable dynamite explosions can be detonated.

Boundary reflections

Originally, the researchers had only set out to image the boundary between the subducting Pacific plate and the Australian plate that lies over it. Their aim was to learn more about the plate interface, which lies at a depth of 15–30 km, and the potential risk it poses to the nearby Wellington region. “The big surprise was getting the coherent reflections from the lithosphere–asthenosphere boundary in the first place,” says Stern. “We just happened to run long records after each explosion, and were surprised to see these much deeper (~100 km deep) reflections emerge.”

At this depth, the researchers’ analysis revealed not only a sharp boundary (less than 1 km thick) between the plate and the underlying mantle – contrasting with previous models of a gradual thermal transition – but also a 10 km-thick sheared channel underneath. From the decrease in seismic velocity across it, the team inferred it to be a low-viscosity layer. This may represent a phase change to rock with a small percentage of melt or water content, pooled by plate motion through a process referred to as “strain localization”.

Push or pull?

With similar layers having also been proposed elsewhere – including a thicker channel beneath a younger section of the Pacific plate and a possible channel at the base of a continental plate off the Norwegian coast – the researchers suggest that such channels could be a universal feature of the lithosphere–asthenosphere boundary. Such a finding would help support the proposed “slab-pull mechanism” of plate tectonic movement – allowing the plates to glide with little resistance over the asthenosphere with subduction driven by their own weight. The layer would also serve to decouple the plates from the underlying mantle, making the convection-driven theory of plate tectonics – wherein the driving force is thought to be large-scale convection currents in the upper mantle that are transmitted through the asthenosphere – less probable.

“The results are striking. Changes in seismic velocity have to occur more rapidly (over distances less than 1 km) than previously suggested in order to generate the reflection,” says Stewart Fishwick, a geophysicist at the University of Leicester in the UK who was not involved in this study. He adds that “further interpretations suggest a very thin (less than 10 km) low-viscosity channel, which has implications for the dynamics of the mantle”.

The researchers are now looking at the possibility of reproducing their study perpendicular to the current line, along the strike of the eastern North Island. In addition, the team will also be exploring how such a low-viscosity channel might form.

The research is described in Nature.

Dark matter seen in the Milky Way’s core

An international team of astronomers has found the best evidence yet that the inner core of the Milky Way contains significant quantities of dark matter. The result confirms the long-standing belief that the centre of the Milky Way is rich in dark matter, just like its outer regions. While the researchers have deliberately avoided using any specific models of dark matter in their analysis, they are confident that further studies of the galactic core could help identify which models are most viable.

Scientists first inferred dark matter’s existence from the fact that galaxies such as the Milky Way rotate faster than would be expected if they were held together by just the gravitational forces between visible matter such as gas, dust and stars. While it is apparent that the gravitational attraction of invisible dark matter is holding galaxies together, it has proved very difficult to measure the distribution of dark matter in the core of the Milky Way. This is because the complicated distribution and dynamics of conventional matter in the core makes it very tricky for astronomers to work out exactly where the dark matter should be.

In the new research, Fabio Iocco of the ICTP South American Institute for Fundamental Physics in São Paulo and colleagues in Sweden and the Netherlands have combined data from several recent observations of the Milky Way and compared them with theoretical predictions of how fast the core should be rotating.

Tricky measurements

The team looked at 2780 measurements of the motions of interstellar gas, stars and interstellar masers. These provide information about the rotation rate of our galaxy at distances between 3–20 kpc from its centre. To put this in perspective, the Sun is about 8 kpc from the centre and the vast bulk of the Milky Way lies within an 18 kpc radius. The team combined these data to arrive at the angular velocities of the galaxy at a number of different radii. The researchers then compared these figures with the angular velocities that would be expected if the galaxy contained no dark matter. This is tricky, explains Iocco, because we are inside the galaxy and moving with it, and this perspective makes it difficult to determine both the distance and the circular motion of other objects. “There is not full agreement in the literature on the exact distribution of stars in the Milky Way,” he notes.

Most researchers studying the galaxy choose “their favourite model of the morphological distribution of visible matter”, says Iocco. In this study, however, the team considered every accepted possibility in the literature, calculating the rotation curve – the rotation rate of the galaxy as a function of radius – that would be predicted by this distribution if there were no dark matter present. “None of these fit the observed rotation curve,” says Iocco, which implies that “none of the possible distributions of visible mass fit the total mass inferred in the galaxy – there is some missing mass even in the worst case”.

The team calculated the difference between the observed and theoretical rotation curves at a large number of different radii between 3–20 kpc. Differences are seen at all radii and although the statistical significance is relatively small at 3 kpc, it rises to above 5σ beyond 6–7 kpc. This figure of 5σ is considered to signify a discovery in particle physics.
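
The comparison itself rests on simple Newtonian dynamics: for a circular orbit, the observed speed v at radius r fixes the mass enclosed within that radius through v² = GM(<r)/r. A rough illustration with round numbers (the 220 km/s circular speed at the Sun’s radius is a commonly quoted value, assumed here for illustration rather than taken from this study):

G, Msun, kpc = 6.674e-11, 1.989e30, 3.086e19    # SI units
v = 220e3                                       # assumed circular speed at the Sun's radius, in m/s
r = 8 * kpc                                     # the Sun's distance from the galactic centre
print(v**2 * r / G / Msun)                      # roughly 9e10 solar masses enclosed within 8 kpc

If the visible stars and gas cannot account for that much mass, the remainder has to be supplied by something unseen.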

Does Newtonian dynamics hold true?

This result means that there are significant quantities of dark matter well inside the 8 kpc radius of the Milky Way, provided that Newtonian dynamics holds true. This last qualification is crucial, because a minority of astrophysicists argue that the discrepancies between predicted and observed rotation curves are better explained by modifying Newtonian dynamics at large distances, rather than the presence of invisible matter (see “Gravity’s dark side”). The researchers believe, however, that by examining the galactic dynamics on comparatively small scales, their results will shed some light on this debate. Indeed, the team plans to address this issue in the future.

Jorge Peñarrubia of the Royal Observatory of Edinburgh believes that the research is an important step towards quantifying the amount of dark matter in the Milky Way. “The number of data they’ve compiled is certainly going to be crucial in trying to determine the amount of dark matter in our galaxy,” he says. However, he adds that “The next step will be to try to construct a dynamical model that can explain the motions that other people have measured. That’s going to be the difficult part.” Dan Hooper of Fermilab in Chicago agrees. “This study shows – conclusively in my view – for the first time that there is about the same amount of dark matter that we had predicted there should be in the innermost parts of the Milky Way,” he says. “It’s a pretty big step forward and one that I hope will continue as we get more information from things like the Gaia telescope, which will be able to measure even more stars with more precision.”

The research is described in Nature Physics.

  • In the following video Luke Davies of the University of Bristol explains how the presence of dark matter is inferred from the rotation of galaxies.

Physics meets biology in Baltimore

By Susan Curtis in Baltimore, US

I’m in Baltimore this week for the 59th annual meeting of the Biophysical Society. The field of biophysics has grown rapidly in recent years as physics-based techniques have opened up new ways to study and understand biological processes, but with my limited knowledge of biology I was nervous that I would feel a little out of my depth.

The first talk of the “New and Notable” symposium helped to allay my fears. Michelle Wang is a physicist at Cornell University in the US who exploits optical techniques to trap and manipulate biomolecules. While established methods can only trap a single biomolecule at a time, Wang and her colleagues have pioneered the use of nanophotonic structures that can trap multiple biomolecules in a standing wave created within an optical waveguide.

“Our optical-trapping innovation reduces bench-top optics to a small device on a chip,” Wang told physicsworld.com when the team first reported their so-called nanophotonic standing-wave array trap last year. Since then, Wang and her colleagues have been working to integrate fluorescent markers with the nanophotonic trap to track the position of individual biomolecules, and have also been experimenting with optical waveguide materials other than silicon to improve performance and enable new applications.


Particle pioneer Val Fitch dies at 91

The US physicist Val Fitch, who shared the 1980 Nobel Prize for Physics with James Cronin, died on 5 February at the age of 91. Fitch and Cronin were awarded the prize for the discovery in 1964 that subatomic particles called K-mesons violate a fundamental law in physics known as CP symmetry, allowing physicists to make an absolute distinction between matter and antimatter.

Three of the most important symmetry operations in physics are charge conjugation, C, in which the particles are replaced by their antiparticles; parity inversion, P, in which all three spatial co-ordinates are reversed; and time reversal, T. In experiments conducted on the Alternating Gradient Synchrotron at the Brookhaven National Laboratory in 1964, Fitch and Cronin showed that the decay of K-mesons violated the general conservation law for weak interactions known as CP symmetry.

Violating symmetry

Fitch and Cronin studied long-lived K-mesons, which decay into a variety of particles including three pions. However, the physicists found that, in 0.2% of the cases, long-lived neutral K-mesons actually decay into pairs of charged pions. If CP symmetry were conserved, the long-lived neutral K-meson could not decay into two pions, so the very existence of this decay demonstrated that the weak interaction does not obey CP symmetry. The Nobel prize was given to Fitch and Cronin “for the discovery of violations of fundamental symmetry principles in the decay of neutral K-mesons”.

Born on 10 March 1923, Fitch worked during the Second World War as a technician on the Manhattan atomic-bomb project at Los Alamos, New Mexico. He then graduated from McGill University in 1948 with a degree in electrical engineering, before completing a PhD at Columbia University in 1954. During his PhD, Fitch designed and built an experiment to measure gamma rays emitted from atoms in which an electron is replaced by a muon. After obtaining his doctorate, Fitch moved to Princeton University, where he remained for the rest of his career. He also served as president of the American Physical Society from 1988 to 1989.

Poetry please, a protein-folding app for your phone, and a new home for the Institute of Physics

Artist's impression of the new headquarters of the Institute of Physics

By Hamish Johnston

You may not know it, but you could be a poet.

The European Space Agency (ESA) and the Hubble Space Telescope have just launched a contest to find the best “Ode to Hubble” as part of the celebrations for Hubble’s 25th birthday. Although described as an ode, the contest is actually looking for a short video tribute to Hubble that can include verse, song or prose, as well as still and moving images. The piece can either be about the telescope or one of its many discoveries. There are two age categories, one for “generation Hubble” – those born after its launch – and one for the over-25s. So look to the stars and get those creative juices flowing.


Planck pins down the end of the cosmic ‘dark ages’

The first stars and large-scale structure in our universe formed much later than previously thought, according to the latest maps and data from the European Space Agency’s Planck telescope, which has been scrutinizing the polarized fossil light from the early universe. Planck’s new timeline pinpoints when star formation began in the nascent universe. This signalled the end of the cosmic “dark ages” and knowing when it occurred will help improve our understanding of the earliest epochs of the universe.

Some 380,000 years after the Big Bang, its thermal remnant – known as the cosmic microwave background (CMB) – emerged when neutral atoms first formed and space became transparent to light. While the CMB covers the whole sky at microwave wavelengths, it also includes some detailed information in the form of variations in temperature and polarization. These variations are thought to reveal density fluctuations in the early universe, which were the seeds of the stars and galaxies that we see today.

Indeed, as the universe became neutral, it became nearly unobservable across most of the electromagnetic spectrum, because any short-wavelength light that was emitted was quickly absorbed by the neutral atomic gas. This period is referred to as the “dark ages”, and it prevailed until some particularly dense regions began to collapse under gravity, forming the first dense structures within the neutral medium.

Lighting up

Gradually, the energetic radiation emitted by these early sources ionized all of the neutral hydrogen in the universe. This is referred to as the “epoch of reionization” and is of great interest to researchers because it tells us how the clumpy, structured universe that we see today evolved from the smoothly distributed matter that existed during the dark ages.

Once the bulk of the universe had been reionized, light at most wavelengths could once again travel freely across it, revealing the distant sources that we see today. Reionization also left its mark on the CMB itself. The CMB photons already carried the stamp of their last encounter with free electrons before the dark ages began (the so-called “last scattering surface”); when reionization liberated electrons once more, a small fraction of the CMB photons scattered again, and this second imprint is preserved in the CMB’s polarization. As a result, the polarization of the CMB provides crucial information about the ages and locations of the first stars and galaxies.

Launched in 2009, Planck mapped the entire sky at nine different frequencies until 2013. Using its Low Frequency Instrument (LFI), which covers the 30–70 GHz range, and the High Frequency Instrument (HFI), which spans six frequency bands from 100 to 857 GHz, the Planck collaboration has drawn up the most intricate maps of the CMB to date. Apart from Planck, more than 100 experiments have studied the CMB since it was first discovered, including NASA’s Cosmic Background Explorer (COBE) satellite and its Wilkinson Microwave Anisotropy Probe (WMAP). In fact, the CMB spectrum is the most precisely measured black-body spectrum in nature.

“After the CMB was released, the universe was still very different from the one we live in today, and it took a long time until the first stars were able to form,” says Marco Bersanelli of Università degli Studi di Milano, Italy. “Planck’s observations of the CMB polarization now tell us that these ‘dark ages’ ended some 550 million years after the Big Bang – more than 100 million years later than previously thought,” he adds, explaining that, while 100 million years may seem negligible compared to the universe’s age of almost 14 billion years, the timescale has a large impact when it comes to the formation of the first stars.
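
The quoted timescales can be cross-checked against redshift, assuming the astropy package and its built-in Planck 2015 cosmological parameters (a sketch only; the redshift of about 8.8 is roughly where Planck’s measurement places reionization, and about 6 is where quasar spectra show it was complete – treat both values as illustrative):

from astropy.cosmology import Planck15

# Cosmic age at two illustrative redshifts: around the Planck-inferred reionization redshift,
# and around the redshift by which reionization is known to have finished
print(Planck15.age(8.8).to('Myr'))   # roughly 550 million years after the Big Bang
print(Planck15.age(6.0).to('Myr'))   # roughly 930 million years, consistent with the 900 million quoted below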

Later date

Previous measurements of the CMB polarization made by WMAP seemed to suggest that reionization began some 450 million years after the Big Bang. But this was troublesome: deep-sky images from the NASA–ESA Hubble Space Telescope provided a census of the earliest known galaxies in the universe, which started forming some 300–400 million years after the Big Bang, and these galaxies would not have been powerful enough to tip the universe into the reionization epoch and end the dark ages as early as the 450-million-year mark. Researchers would have had to invoke some more exotic sources of energy to make that happen.

Now though, thanks to Planck, the problem is significantly minimized, as the earliest stars and galaxies alone might have been enough to drive the process. “From our measurements of the most distant galaxies and quasars, we know that the process of reionization was complete by the time that the universe was about 900 million years old,” says George Efstathiou from the University of Cambridge in the UK. “But, at the moment, it is only with the CMB data that we can learn when this process began.”

This later end of the dark ages also implies that it might be easier to detect the very first generation of galaxies with the next generation of telescopes such as the James Webb Space Telescope.

The research is to be published in Astronomy and Astrophysics. Preprints of all the Planck papers are available online.

Photons simulate time travel in the lab

Physicists in Australia claim to have simulated time travel using fairly standard optical equipment on a lab bench. They say they have prepared photons that behave as if they are travelling along short cuts in space–time known as “closed time-like curves”, and add that their work might help in the long-sought-after unification of quantum mechanics and gravity. Others, however, argue that the research does little or nothing to establish whether time travel is possible in nature.

Although everyday experience suggests the impossibility of travelling backwards or forwards in time, Einstein’s general theory of relativity does not rule it out. The theory allows for loops in space–time called closed time-like curves that could be created by very powerful sources of gravity such as black holes. These structures would bring an object back to a place and a time that it had already passed through, typically via a short cut between the two separated regions of space–time known as a wormhole.

Grandfather clause

In classical physics the existence of closed time-like curves would lead to a number of paradoxes. One of the best known of these is the grandfather paradox, in which someone who has travelled backwards in time kills their grandfather while he is still young, thereby preventing their own birth. In quantum mechanics, however, such paradoxes can be avoided.

The quantum-mechanical equivalent of the grandfather paradox involves a subatomic particle that has two states – one and zero – corresponding to “alive” and “dead”. The paradox emerges if the particle started out in state one, travelled backwards in time, met a younger version of itself and then flipped the value of its earlier self to zero.

But in 1991 David Deutsch of Oxford University showed that the probabilistic nature of quantum mechanics comes to the rescue. Deutsch found that there would always be a state that a quantum particle could assume that would make the particle’s trip back in time a safe one. For example, if the particle were to start out in an equal mixture of one and zero, when flipped it would remain in that state – a 50:50 mixture of one and zero.
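
Deutsch’s consistency condition is easy to check numerically for this “grandfather” circuit, in which the state is simply flipped on its trip around the loop (a minimal sketch of the idea, not a model of the experiment described below):

import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])      # the "flip" (NOT) operation applied on the loop

def is_consistent(rho):
    # Deutsch's condition: the state entering the time loop equals the state leaving it
    return np.allclose(X @ rho @ X, rho)

alive = np.array([[1.0, 0.0], [0.0, 0.0]])  # a definite "one"/"alive" state
mixed = np.eye(2) / 2                       # an equal (50:50) mixture of one and zero

print(is_consistent(alive))   # False: a definite state reproduces the classical paradox
print(is_consistent(mixed))   # True: the equal mixture is a self-consistent solution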

Disappearing down a wormhole

In the latest work, Martin Ringbauer and colleagues at the University of Queensland in Brisbane set out to reproduce Deutsch’s model in the laboratory. But given the absence of any real closed time-like curves in the vicinity of their lab, they were not able to directly study the interaction between younger and older versions of the same quantum particle. Instead, they used two separate particles. The idea is that the “younger” particle remains in normal space–time, while the “older” one disappears down a simulated wormhole, reappears in the “past” and then interacts with its junior partner.

To implement their scheme, the team generated pairs of single photons by shining a laser beam through a nonlinear crystal. The younger photon was encoded by polarizing it – with horizontal polarization representing zero, vertical polarization representing one and intermediate polarization representing superpositions. That photon then interfered with its older partner in a beamsplitter, and the outcome was recorded by a pair of detectors.

Consistency condition

One of these detectors constitutes the entrance to the “wormhole” and is used to record the state of the older photon to ensure that it is in the same state as it was at the beginning of the experiment – the point at which it emerges from the wormhole. In this way, the scheme satisfies the “consistency condition” that Deutsch imposed on his model to remove the paradoxes from time travel – that whatever goes into a wormhole emerges from it unchanged.

Encoding the younger photon arbitrarily with one of 32 different polarizations and fixing the state of the older photon to satisfy the consistency condition, the researchers showed that they could indeed meet this condition. They also found that the presence of a closed time-like curve allows an observer to perfectly distinguish non-orthogonal states of the time-travelling photon, such as horizontal and diagonal polarizations. This is something that cannot normally be done in quantum-mechanical systems.

Encryption buster

According to project leader Tim Ralph, this result suggests a way to break quantum encryption, since any eavesdropper with access to a closed time-like curve would in principle be able to make a perfect copy of the secret key and so avoid revealing his or her presence via quantum measurements. More broadly, he says, the research could provide an insight into the tension between quantum mechanics and general relativity, given that closed time-like curves are only possible with strong gravitational curvature.

Todd Brun of the University of Southern California describes the work as a “very nice experimental demonstration of some of the bizarre consequences” of Deutsch’s model, although he says that the research is not able to test the model itself. Others, however, are more critical.

Entirely predictable

Charles Bennett of IBM says that the results from the experiment are “entirely predictable from well-established principles of quantum optics” and that what is instead needed is “continued theoretical exploration of closed time-like curves’ consistency with, and consequences for, other parts of physics”. He also believes that the experimental set-up “does not function as a mechanism for reliably distinguishing non-orthogonal states”. This is a view shared by Antoni Wójcik of Adam Mickiewicz University in Poland, who says that the experiment “provides very interesting confirmation of standard quantum mechanics” but “does not answer any question” concerning time-travelling quantum particles.

Also critical is Seth Lloyd of the Massachusetts Institute of Technology, who has developed a rival model to Deutsch’s. He points out that in the latest experiment there is no physical connection between what comes out of and goes into the wormhole, and that as a result the wormhole’s output has to be classically computed and then manufactured. “This defeats the purpose of quantum simulation,” he says, “which is to predict what one can’t simulate classically.”

The research was first described in Nature Communications and the paper is now available on the arXiv preprint server.
