
Arctic climate continues to concern scientists

When the US National Oceanic and Atmospheric Administration (NOAA) released its 14th annual Arctic Report Card on Tuesday, scientists at the annual meeting of the American Geophysical Union (AGU) in San Francisco, California weren’t expecting good news. Although 2019 set no climate records, the ongoing warming of the Arctic continues to be a worry, and it is having an increasing and direct impact on the climate at lower latitudes. Speaking at the meeting, NOAA’s acting deputy administrator, Admiral Timothy Gallaudet, called attention to “the speed and trajectory of the changes sweeping the Arctic, many occurring faster than anticipated”.

The Arctic Report Card is an internationally authored and peer-reviewed compilation of 12 chapters. This year, it was written by 81 scientists, many of whom were present as the report was unveiled in a packed conference room. Among its major findings were:

  • The average annual surface air temperature over land north of 60°N between October 2018 and August 2019 was the second warmest since record keeping began, continuing a trend that began around 1980.
  • Average sea surface temperatures in August have been warming since 1982 for most ice-free regions of the Arctic Ocean.
  • The extent of sea ice at the end of summer 2019 was tied for the second lowest level since 1979, when satellite measurements began.
  • Ice older than four years, which constituted one-third of the ice cover in 1985, is now reduced to just 1.2%.
  • Greenland has lost nearly 267 billion tonnes of ice per year since 2002, with near record loss in 2018–2019.
  • Snowmelt began exceptionally early over the Canadian Arctic and Alaska in March 2019.

This year’s report card pays particular attention to the Bering Sea. In addition to the usual data, for the first time the report includes an essay written by elders of eight indigenous coastal communities, who describe the impact of climate change on their lives and those of the people in some 70 other villages. A resident of one community spoke at a press conference at the AGU meeting, while several elders participated in a NOAA-organized town hall discussion for scientists.

In the report, the elders note that in the northern Bering Sea, sea ice used to be present for eight months a year. Today, residents may only see three or four months with ice. They are used to assessing ice thickness in feet, but now, they say, they are often looking at inches, even in the middle of winter.

The disappearance of the ice has had far-reaching effects. During January and February 2019, at least 15 storms caused unprecedented shoreline flooding to coastal communities. Southerly winds further reduced sea ice extent. The elders also report that interrelated ecosystems are failing. For example, seabird populations, including auklets and murres, are declining, and necropsies of birds found dead revealed empty stomachs, as their food sources disappear. Expanding fisheries are likely to further stress the Bering Sea ecosystem, the elders add.

These ecosystem changes affect the area’s human population, too. The walruses that many indigenous communities hunt for food are now migrating early. During the press conference, a resident of one affected community, Mellisa Johnson, noted drily that when traditional food is scarce, she does not have the option of stocking up at a local supermarket. Some villages have fewer residents, she added, than the number of people attending her press conference.

You can read the full Arctic Report Card and view a short video here.

Five more of the best physics breakthroughs of 2019

This week, as last week, the Physics World Weekly podcast is dedicated to our Top Ten Breakthroughs in Physics for 2019.

Last week we chatted about the first five breakthroughs on our list, and this week we take a look at the other five on our shortlist. These are: a “quantum trap” based on the Casimir effect; the first double-slit-like experiment using antimatter; a demonstration of “quantum supremacy”; a compact gravity probe; and a wearable MEG scanner for use with children.

Voter model examines how opinions spread between social networks

Michael Gastner and Kota Ishida at Yale-NUS College in Singapore have studied the voter model – a simple description of opinion formation and dynamics. They used the model to investigate social networks containing two communities that were highly connected internally, but with few connections between the two groups. Gastner describes how the number of connections between the two communities affects the time it takes for all members to reach agreement.

The research is reported in full in Journal of Physics A, published by IOP Publishing – which also publishes Physics World.

What was the motivation for the research?

A common perception about our digital age is that we live in “echo chambers”, where we mostly exchange opinions and ideas with people who are similar to us. In a society split into tightly knit communities that hardly reach out to each other, it seems ever less likely that a consensus can be reached on controversial issues. Brexit, immigration and climate change readily come to mind as topics that have revealed deep divisions in society. With our research, we wanted to develop a simple mathematical model that shows to what extent divisions in society lead to delays in reaching a consensus.

What did you do in the work?

We looked at a simple model of opinion formation – the “voter model”. We thought of society as a network in which people are linked to their friends. From time to time, a person asks one of their friends for his or her current opinion and adopts it as their own. If this process is repeated often enough, everybody eventually agrees on the same opinion. This model is a classic in sociological research.

What was new in our study was the structure of the social network. While previous research assumed that society is homogeneous, we considered networks with two communities that were internally highly connected, but with few connections between the communities. In our model, the communities formed so-called “cliques”: every member of a community was connected to everybody else in the same community. This is more extreme than we find in real social networks, but it was a starting point to keep the maths tractable.
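For readers who want to experiment, here is a minimal Python sketch of this two-clique voter model. It is our own illustrative code, not the authors’ implementation, and the clique size, random seeds and link counts are arbitrary choices:

```python
import random

def consensus_time(n=20, inter_links=1, rng=None):
    """Voter model on two n-member cliques joined by `inter_links` random
    inter-clique links; returns the number of update steps until everyone agrees."""
    rng = rng or random.Random(0)
    # Clique A: nodes 0..n-1; clique B: nodes n..2n-1 (all-to-all inside each clique)
    nbrs = {v: [u for u in range(2 * n) if u != v and (u < n) == (v < n)]
            for v in range(2 * n)}
    for _ in range(inter_links):              # sparse bridges between the cliques
        a, b = rng.randrange(n), rng.randrange(n, 2 * n)
        nbrs[a].append(b)
        nbrs[b].append(a)
    opinion = [0] * n + [1] * n               # the two cliques start in disagreement
    steps = 0
    while len(set(opinion)) > 1:
        v = rng.randrange(2 * n)              # a random person consults...
        opinion[v] = opinion[rng.choice(nbrs[v])]  # ...a random friend and copies them
        steps += 1
    return steps

for k in (1, 2, 10, 100):
    runs = [consensus_time(inter_links=k, rng=random.Random(seed)) for seed in range(20)]
    print(f"{k:3d} inter-clique links: mean consensus time ~{sum(runs) / len(runs):,.0f} steps")
```

Averaging over many random seeds is essential here, since individual runs of the voter model fluctuate strongly.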


What was the most interesting and/or important finding?

Initially we thought the time until the two cliques agree on a shared opinion would simply decrease with every additional link between the cliques. But it turns out the situation is much less straightforward than we had expected. It is true that a network in which there is only one link between the cliques needs on average twice as long to reach a consensus as a network with two such links. But when there are too many links, we found to our surprise that the consensus time increases again.

After analysing the maths carefully, we found the reason for this counterintuitive behaviour. On one hand, frequent opinion exchanges between the cliques are necessary to quickly agree on the same opinion. On the other hand, additional links give the minority clique greater influence, causing more self-doubt within the majority clique and consequently slower convergence towards a shared opinion.

Why is this research significant?

Some researchers have argued that the best way to overcome the polarization in today’s society is to develop bipartisan links that form bridges across the political divide. Our research shows that the situation is not quite so simple. Of course, a society consisting of isolated communities will find it impossible to reach a compromise. But too many bipartisan links are not necessarily helpful either. If communities lose their distinct identities, extreme opinions may make inroads into the mainstream. The result can be a prolonged tug of war between two opinions with roughly equal numbers of supporters.

What do you plan to do next?

The present model allowed us to solve the maths, but real social networks are admittedly more nuanced than what we can currently account for. For example, some community members may have greater influence on the opinion dynamics than others. In the next stage of our research, we will study more sophisticated network models that mimic the heterogeneity of real societies.

The full results of the study are reported in Journal of Physics A: Mathematical and Theoretical. This research is part of Kota Ishida’s final-year capstone project in Mathematics and Computational Sciences.

Muons: probing the depths of nuclear waste


When researchers announced in 2017 that they had discovered a previously unknown void within Egypt’s Great Pyramid, their finding generated much excitement. The 30 m-long cavity, located above the sloping Grand Gallery, prompted new speculation about how the 4500-year-old structure may have been built and whether it might contain hidden burial chambers. But the news also put the spotlight on muons – energetic subatomic particles that can pass through thick layers of dense material and which the scientists in Egypt used to look inside the limestone and granite pyramid.

Muons are generated routinely in particle colliders, where physicists use them to identify other, potentially more exotic, particles within the debris. But they are also produced naturally in the atmosphere, and a growing range of researchers is using these commonly occurring muons as highly penetrating probes. Beyond archaeology, geologists, for example, are developing muon detectors to establish when magma might be on the rise within a volcano.

There is, however, one application that is particularly well suited to muon radiography – or muography for short – because the material that needs imaging is deliberately made inaccessible to other forms of radiation. That material is nuclear waste – in particular spent fuel, which contains most of the world’s supply of plutonium.

Handling spent fuel is a complex and expensive process designed to keep workers and the public safe while guaranteeing that the fuel can be traced. The International Atomic Energy Agency (IAEA) and other organizations responsible for “safeguarding” the nuclear fuel cycle go to great lengths to ensure that even a tiny fraction of the world’s spent fuel doesn’t fall into the wrong hands and potentially end up in a nuclear weapon or dirty bomb. However, the inability to inspect the fuel directly once it has been stored in casks is a major potential weakness (see box).

Spent fuel: too hot to handle


Nuclear fuel usually comes in the form of uranium pellets encased in metal rods that are bundled together to create fuel assemblies. As the fuel undergoes fission in a reactor, it generates plutonium isotopes and other highly radioactive substances, some of which give off lots of heat. After being removed from the reactor, the fuel is transferred to a large pool of water where it stays for several years to cool down. It is usually then placed into what are known as dry storage casks – concrete or metal canisters several metres high that are lined with a special material to absorb radiation and are stored either close to a reactor or at a special facility.

To keep track of spent fuel, IAEA inspectors can use cameras to watch fuel assemblies being loaded and unloaded from reactors and can monitor the amount of Cherenkov radiation emitted by spent fuel in pools. However, such direct monitoring is not currently possible once the assemblies are placed in casks, since the casks’ thick shielding not only blocks almost all the radiation from inside getting out but also prevents radiation from outside, such as X-rays, getting in.

The inspectors bypass this problem by placing seals on the lids of casks that reveal when they have been tampered with. However, those seals could in principle corrode or become damaged when casks are handled. If accountability of the fuel were lost, says Matt Durham at Los Alamos, inspectors would ideally return casks to a cooling pool to open them and re-verify the contents. But doing so would be costly, time consuming and, given the need to transport the waste, potentially dangerous. “This comes down to moving many tonnes of highly radioactive material a long distance, which is non-trivial, to put it mildly,” he says.

Muons offer a way to establish how much waste there is in a container without having to open or move the container in question. That capability would become vital, according to Matt Durham of Los Alamos National Laboratory in the US, should inspectors or the countries involved ever lose confidence in their monitoring. “This issue is only getting worse as more plutonium piles up around the world,” he says.


Making tracks

Muons are fundamental charged particles created when cosmic rays (mainly protons) collide with nuclei in the atmosphere to yield pions that then decay. Having about 200 times the mass of electrons, muons emit little bremsstrahlung radiation and pass quite easily through the atmosphere – some 10,000 of them reach each square metre of the Earth’s surface every minute. Their high energies allow them to penetrate rock and other dense matter, but they do lose energy as they ionize and excite electrons. Because muons are absorbed after a certain distance – typically several hundred metres in rock – scientists can work out the density of the materials they pass through by measuring variations in their flux.

Atmospheric muons were first used in this way back in the mid-1950s by British physicist Eric George, who measured the thickness of rock above a mining tunnel in Australia. But it was not until 2003 that Durham’s Los Alamos colleague Christopher Morris and several co-workers proposed using the scattering of muons, instead of their absorption, to image concealed dense objects, particularly nuclear material (Nature 422 277).

The idea is that muons are deflected by the dense concentrations of charge inside atomic nuclei, and the greater a material’s atomic number (Z), the bigger that deflection will be. This means that uranium, for example, will scatter incoming muons much more than lower-Z materials such as concrete and steel. Therefore, an object’s composition can be worked out by plotting the trajectories of muons before they enter the object and after they leave it. Given a long enough exposure time, the researchers reasoned, muons can be recorded over a wide range of incident angles and positions, thereby providing an accurate image of the object’s contents.
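To put rough numbers on that Z dependence, one can use the Particle Data Group’s standard “Highland” formula for the RMS multiple-scattering angle. The sketch below is our illustration, not taken from the Los Alamos work; it assumes a 3 GeV muon (a typical cosmic-ray momentum) crossing 10 cm of material, with textbook radiation lengths:

```python
import math

def theta0_mrad(p_mev, x_cm, X0_cm, beta=1.0):
    """Highland formula: RMS scattering angle (in mrad) of a muon of momentum
    p traversing thickness x of a material with radiation length X0."""
    t = x_cm / X0_cm
    return 1e3 * (13.6 / (beta * p_mev)) * math.sqrt(t) * (1 + 0.038 * math.log(t))

# Approximate radiation lengths (cm): the higher the Z, the shorter the X0
for name, X0 in [("uranium", 0.32), ("iron/steel", 1.76), ("concrete", 11.6)]:
    print(f"{name:10s}: theta0 ~ {theta0_mrad(3000, 10, X0):4.1f} mrad")
# Uranium deflects muons roughly 2-3 times more than steel and ~7 times more
# than concrete for the same thickness -- the contrast muography exploits
```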

Morris and colleagues used detectors known as drift tubes, each of which contains gas and a positively charged wire that runs along its length. Any passing muon ionizes the gas where it crosses a tube, liberating electrons that register a signal at that point along the wire. By arranging adjacent layers of tubes at right angles to one another, it is possible to map a series of points along each muon’s trajectory. At least two sets of perpendicular layers on either side of an object are therefore needed to plot a muon’s incoming and outgoing vectors.
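Given those incoming and outgoing vectors, a common reconstruction shortcut in muon scattering tomography – sketched generically below, and not necessarily the exact Los Alamos algorithm – is to assume each muon scattered once, at the point where the two rays pass closest to each other (the “point of closest approach”, or PoCA):

```python
import numpy as np

def poca(p_in, d_in, p_out, d_out):
    """Point of closest approach of the incoming and outgoing rays
    (each given by a point and a direction), plus the scattering angle."""
    d1 = d_in / np.linalg.norm(d_in)
    d2 = d_out / np.linalg.norm(d_out)
    w = p_in - p_out
    b = d1 @ d2
    denom = 1.0 - b * b                      # ~0 if the rays are almost parallel
    s = (b * (d2 @ w) - (d1 @ w)) / denom    # parameter along the incoming ray
    t = ((d2 @ w) - b * (d1 @ w)) / denom    # parameter along the outgoing ray
    midpoint = 0.5 * ((p_in + s * d1) + (p_out + t * d2))
    angle = np.degrees(np.arccos(np.clip(b, -1.0, 1.0)))
    return midpoint, angle

# A muon heading straight down that picks up a small kink near the origin
mid, ang = poca(np.array([0.0, 0.0, 1.0]),  np.array([0.0, 0.0, -1.0]),
                np.array([0.05, 0.0, -1.0]), np.array([0.05, 0.0, -1.0]))
print(mid, ang)   # scattering vertex near (0, 0, 0), angle of a few degrees
```

Accumulating millions of such vertices, weighted by scattering angle, builds up a 3D density-like map of the high-Z material inside the object.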

The Los Alamos researchers first demonstrated the technique by imaging a simple tungsten cylinder. They then went on to develop detectors designed to reveal nuclear materials hidden among cargo in trucks and shipping containers. Having received funding for this following the terrorist attacks in New York in 2001, they made a prototype device for a California-based company called Decision Sciences Corporation. Morris says that the firm has since begun to sell detectors within the US and overseas.

Turning their attention to spent fuel, Morris, Durham and colleagues developed two muon trackers, each consisting of 24 layers of 24 drift tubes, each tube 1.2 m long. To put these devices to the test, they moved them north to the Idaho National Laboratory and placed them on either side of a dry storage cask taken out of a Westinghouse reactor in the 1980s. Recording nearly half a million muon tracks over the course of three months in 2016, they set out to determine which of the cask’s 24 fuel-assembly slots were full and which were empty. They correctly identified whether or not slots were empty in four out of six slot groupings (Phys. Rev. Applied 9 044013).

Testing times

The Los Alamos group ultimately aims to show that muons can be used to identify not only whether fuel assemblies are missing but also if they have been replaced by dummy objects – made from lead, for example. This would involve combining both scattering and absorption measurements, given that muons’ deflection depends on a material’s atomic number and density whereas their absorption depends only on its density. Combining the two techniques means the density information can be subtracted from the scattering measurements, establishing the atomic number and therefore identifying the material.

The researchers have used computer simulations to show that such discrimination should be possible but must wait before they can prove this experimentally. Having been unable to identify all the missing fuel-assembly slots in their Idaho test because strong winds blew the detectors out of alignment, they had hoped to carry out new tests with the detectors fastened in place. But Morris says that the Department of Energy’s National Nuclear Security Administration hasn’t given them the necessary funds.


The group had also hoped to apply its technology to imaging damaged reactor cores, specifically those at the Fukushima nuclear power plant in Japan. Following the disaster in 2011, Fukushima’s operator, the Tokyo Electric Power Company (TEPCO), wanted to use muons to help locate melted fuel within the plant’s reactors in order to remove that fuel later on. And indeed between 2015 and 2017 the company used measurements of muon absorption to show that much of the fuel in reactor units 1 to 3 had fallen out of the respective reactor cores.

According to Morris, absorption radiography is not ideally suited to the task, given the similar densities of nuclear fuel and water. He and his colleagues at Los Alamos therefore worked with physicists at Toshiba to develop scattering detectors, which they tested successfully at a small Toshiba reactor in Yokohama. However, this technology was considered too expensive and disruptive to be used at Fukushima. “We were very disappointed not to make measurements,” says Morris.

In Europe, however, interest appears to be growing. That is particularly true in Germany, which is due to close all of its remaining nuclear power plants by 2022 and as such will need to remove and store large amounts of spent fuel. Durham points out that once all the plants in the country have been closed there will be no cooling pools available to look inside casks, and he doubts that France or any other nuclear-powered neighbour would agree to inspect the waste. “Everybody has enough problems dealing with their own,” he says.

As such, German scientists are investigating ways of looking inside dry storage casks – in particular muography. Katharina Aymanns of the Jülich Research Centre near Cologne, who is co-ordinating the initiative, says this is particularly important given that Germany is unlikely to have a permanent repository until at least 2050. The idea, she says, is “to make sure that all the fuel assemblies that you think are in a cask are actually in a cask”.

To this end, Aymanns contacted Paolo Checchia at the Legnaro National Laboratory in Padua, Italy. Checchia and colleagues are also working on muography detectors made from drift tubes – in this case based on technology developed for the CMS detector at the CERN laboratory in Geneva. They have done simulations showing that such technology should make it possible to spot when a fuel assembly has gone missing from a storage cask, and have built a small (2 m × 0.5 m) prototype detector consisting of 64 drift tubes in eight layers.


Last year, Checchia and co-workers carried out tests partly funded by Euratom, the body responsible for spent-fuel inspections within Europe. Those tests involved placing the prototype device next to a “Castor” cask at the Neckarwestheim nuclear power plant near Stuttgart. Although not able to study the contents of the cask in any detail, the researchers showed that they could detect muon tracks above the small amount of radiation emitted by the cask itself, and also demonstrated that the cask blocked some of the incoming muons. To identify missing fuel assemblies they next hope to perform tests using a pair of larger trackers.

Other countries that might benefit from muography include Finland and Sweden, which are both developing geological waste repositories. Morris and colleagues carried out simulations showing that 24 hours of muon scattering measurements would be enough to identify any missing or replaced fuel assemblies within the copper and iron casks that will be used in Scandinavia. The researchers write that such measurements, carried out immediately before burial, would represent “the last chance for inspectors to evaluate state declarations of spent fuel disposal”.

Drums of waste

In addition to monitoring spent fuel itself, however, muon technology can also be used to image other types of nuclear waste, including the metal cladding that is stripped from fuel rods before spent fuel is reprocessed. The UK alone is home to many tens of thousands of 500 litre stainless-steel drums containing this cladding material immersed in concrete.


Ralf Kaiser of the University of Glasgow says that some of the drums have been found to bulge, which means that they might contain fragments of spent fuel – given that uranium increases in volume when it oxidizes. But since the drums are not easy to get at, he explains, it is not known how many might be affected. He says that because the drums could potentially crack open, identifying such fragments early on could prevent harmful and expensive radiation leaks.

To inspect such drums, Kaiser and colleagues at Glasgow have developed muon detectors consisting of thousands of plastic scintillating fibres (Phil. Trans. R. Soc. A 377 20180048). Working with scientists from the National Nuclear Laboratory at the Sellafield reprocessing plant, the Glasgow group developed a small prototype and then a full-scale system large enough to house one 500 litre drum. Carrying out a series of laboratory tests from 2015 onwards, the researchers showed they could image a range of objects inside a concrete drum, including a 3 cm-long cylinder of uranium.

After seven years of research, the Glasgow group was assigned intellectual property rights for the work and in 2016 set up a company called Lynkeos Technology. The firm has since embarked on its first commercial contract: to image vitrified waste generated using a new technique known as GeoMelt. Having shown in the lab that passing muons reveal how uniformly the waste melts and whether it contains any metallic items, the group has been using the detector on-site at Sellafield since October 2018 to image waste containing radioactive material, including uranium. It is now preparing to image larger (3 m³) waste boxes.

Inspection of waste other than spent fuel – be that fuel cladding or contaminated spanners, for example – is also the aim of researchers at the universities of Bristol and Sheffield in the UK, and Warsaw Technical University in Poland. This collaboration is currently commissioning a 5 m-high detector based on resistive plate chambers, which pick up ionizations in gas using metallic strips placed outside glass plates. As to where the device could be tested, group leader Jaap Velthuis of the University of Bristol says that several utilities in Europe are interested but none has committed so far. “It’s not easy to get on sites with nuclear waste and ask them to get their drums out,” he points out.

Indeed, given the sensitivity that surrounds the subject of nuclear waste, developing any new technology that could help deal with it is not an easy process. That is particularly true when it comes to spent fuel, and it remains to be seen just how willing safeguard agencies will be to adopt muon detection. The IAEA says that it does not currently use the technology for nuclear verification, “as there are still limitations on its use for this purpose”. However, it adds that it is “following research activities in this area”.

Aymanns underlines that it is for the IAEA and other safeguard organizations to decide whether and when they field a particular technology. In the meantime, she says, scientists “can help just by doing field tests”. But she has little doubt that new technology is needed. “Even if you have surveillance you have to be prepared for it to fail,” she says. “Maybe it will never happen but if it ever does you have to be ready.”

Electrons flow like water in ultra-pure graphene


Electrons can behave like a viscous liquid as they travel through a conducting material, producing a spatial pattern that resembles water flowing through a pipe. So say researchers in Israel and the UK who have succeeded in imaging this hydrodynamic flow pattern for the first time using a novel scanning probe technique. The result will aid developers of future electronic devices, especially those based on 2D materials like graphene in which electron hydrodynamics is important.

We are all familiar with the distinctive patterns formed by water flowing in a river or stream. When the water encounters an obstacle – such as the river bank or a boat – the patterns change. The same should hold true for electron flow in a solid if the interactions between electrons are strong. This rarely occurs under normal conditions, however, since electrons tend to collide with defects and impurities in the material they travel through, rather than with each other.

Making electrons hydrodynamic

Conversely, if a material is made very clean and cooled to low temperatures, it follows that electrons should travel across it unperturbed until they collide with its edges and walls. The resulting ballistic transport allows electrons to flow with a uniform current distribution because they move at the same rate near the walls as at the centre of the material.

If the temperature of this material is then increased, the electrons can begin to interact. In principle, they will then scatter off each other more frequently than they collide with the walls. In this highly interacting, hydrodynamic regime, the electrons should flow faster near the centre of a channel and slower near its walls – the same way that water behaves when it flows through a pipe.
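The two expected profiles are easy to write down. The short sketch below illustrates the textbook expectation (it is not the group’s analysis): a flat ballistic profile versus a parabolic Poiseuille profile, normalized to carry the same total current:

```python
import numpy as np

W, I = 1.0, 1.0                           # channel width and total current (arb. units)
y = np.linspace(-W / 2, W / 2, 9)         # positions across the channel

# Both profiles integrate to the same total current I across the channel
j_ballistic = np.full_like(y, I / W)                 # uniform: same speed at walls and centre
j_hydro = (3 * I / (2 * W)) * (1 - (2 * y / W)**2)   # Poiseuille: peaked at centre, ~0 at walls

for yi, jb, jh in zip(y, j_ballistic, j_hydro):
    print(f"y = {yi:+.3f}   ballistic j = {jb:.2f}   hydrodynamic j = {jh:.2f}")
```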

Extremely clean 2D materials

In recent years, researchers have created extremely clean samples from 2D materials such as graphene to act as testbeds for studying electron hydrodynamics. The vast majority of this work, however, involved measuring electron transport, which only probes the physics of electrons at fixed positions along the perimeter of the device.

“Hydrodynamics, on the other hand, brings to mind dynamic images of electrons swirling around with interesting spatial patterns,” says Joseph Sulpizio, who is one of the lead authors of this new study. “Such patterns have been predicted in theory but never imaged spatially.”


Sulpizio and the other researchers, led by Shahal Ilani at Israel’s Weizmann Institute of Science in collaboration with Andre Geim’s group at Manchester University, have now imaged the most fundamental spatial pattern of hydrodynamic electron flow for the first time. They obtained this parabolic, or Poiseuille, current profile by studying electrons travelling through a conducting graphene channel sandwiched between two hexagonal boron nitride layers equipped with electrical contacts.

Under an applied electric field, the electrons produce a voltage gradient along the current flow direction. Unfortunately, this local voltage gradient is the same for both hydrodynamic and ballistic electron flow and so cannot be used to distinguish between the two regimes. Ilani and colleagues overcame this problem by applying a weak magnetic field to the sample, which produces another voltage – the Hall voltage – perpendicular to the direction of the current. The gradient of this voltage is very different for hydrodynamic and ballistic flow.

The researchers imaged the Hall voltage profile for both flow regimes using a scanning probe recently developed in their laboratory. This ultraclean carbon nanotube single-electron-transistor-based device is held at cryogenic temperatures and is extremely sensitive to local electrostatic fields. The current flowing through it is thus indicative of the local potential of the sample and voltage gradients associated with the Hall voltage.

By measuring this current, the team was also able to observe the transition between the regime in which electron-electron scattering dominates and that in which the electrons flow ballistically. “As expected, we observed a flat Hall field profile across the graphene channels at low temperatures,” Sulpizio tells Physics World. “Upon heating, however, the profile becomes strongly parabolic, revealing less current flow near the walls and more near the centre, which indicates the transition to hydrodynamic/Poiseuille flow.”

Implications for device development

The implications of the work, which has been published in Nature, are many, he says. Electron hydrodynamics only emerges at elevated temperatures (in contrast to many other kinds of electronic phenomena that exist only at very low temperatures) and this will be relevant for technological devices like computer chips that operate at room temperature. It will also be relevant in 2D van der Waals heterostructures like those made from graphene, and especially when they are super-clean. This behaviour is likely to play an important role in new generations of devices made from these materials.

“Looking further ahead, it might even be possible one day to engineer fundamentally new kinds of electronic devices that directly exploit electron hydrodynamics,” Sulpizio says. “When electrons interact hydrodynamically, their viscosity results in highly non-local spatial flow patterns that might be technologically advantageous.”

A decade of Physics World breakthroughs: 2016 and 2017 – the rise of gravitational-wave observation and multimessenger astronomy

One of the biggest scientific discoveries of the past decade – possibly even the last century – was the first ever direct observation of gravitational waves by the LIGO Scientific Collaboration, winners of the Physics World 2016 Breakthrough of the Year.

On 15 September 2015, ripples in space-time produced by two merging black holes were detected by LIGO’s newly upgraded twin interferometers in Livingston, Louisiana, and Hanford, Washington, US. “I’d been working on LIGO for a little over 16 years at that point,” explains Amber Stuver of Villanova University in the US, “and to have a detection roll in on the first day of observation with Advanced LIGO was beyond shocking. I’d been expecting a detection within the first year, not on the first day!” The resulting chirp-and-ringdown waveform LIGO presented to the public in February 2016 provided evidence for one of the last unverified predictions of Einstein’s general theory of relativity. And it wasn’t a one-off – just four months later, in June 2016, the collaboration announced it had observed a second binary black-hole coalescence on Boxing Day 2015.

Naturally, for such ground-breaking discoveries, the LIGO collaboration was awarded our Breakthrough of the Year prize for 2016. But those historic observations were just the beginning for LIGO. As my colleague Tushna Commissariat said in the award announcement, the measurements heralded the start of the era of gravitational-wave and multimessenger astronomy.

On 4 January 2017, LIGO made their third observation of a black-hole merger, and on 14 August 2017 they made their first joint detection of gravitational waves with the Virgo Collaboration in Italy. Rainer Weiss, Barry Barish and Kip Thorne also won that year’s Nobel Prize for Physics for decisive contributions to the LIGO detector and the observation of gravitational waves.

Multimessenger breakthrough

But it was the announcement on 16 October 2017 that took over the science headlines once again. Not only had LIGO and Virgo detected gravitational waves from a neutron-star merger for the first time on 17 August, but the Fermi Gamma-ray Space Telescope picked up gamma-rays from the same event. The observations prompted astronomers to look at the signals’ origin using dozens of telescopes and detectors around the world and in space, and the aftermath of the merger was captured across the electromagnetic spectrum. “We were able to provide observational evidence that the long-hypothesized neutron-star merger is indeed a source for short gamma-ray bursts,” says Stuver. “Then we got to sit back and watch the light show as traditional electromagnetic astronomers revealed the evolution of the resulting kilonova.”

The coordinated observations provided huge amounts of information about what happens when neutron stars merge, clues about how heavy elements are produced in the universe, and a new way of measuring the expansion rate of the universe. It was a huge milestone for multimessenger astronomy and epitomized the collaborative nature of science, winning the international team of thousands of researchers our 2017 Breakthrough of the Year.

Black holes

Since that very first observation of gravitational waves in September 2015, LIGO and Virgo have made a total of 11 confirmed detections, which have given us many new insights into our universe – but there’s still a lot that gravitational waves could reveal. The researchers hope that, among other things, more observations will reveal the elusive intermediate-mass black holes (100–10,000 solar masses) and provide clues about the mass gap between the heaviest neutron stars and the lightest black holes. “There is a lot of interest in why we haven’t seen any compact objects above three solar masses but less than five solar masses,” explains Stuver. “This has implications for our understanding of how the most massive stars die.” More observations of gravitational waves from black-hole mergers also mean scientists can begin to identify patterns in their properties that can reveal how the binary system came to be in the first place.

Burst gravitational waves

There are also sources of gravitational waves that are yet to be seen. So far, all confirmed detections have originated from compact binary coalescence. Stuver expects that the next thing to be found will be burst gravitational waves that come from sources that are either unexpected or we don’t understand well enough to accurately model. “Looking for burst gravitational waves is looking for the unexpected,” she says, “and these kinds of discoveries have an enormous potential to reveal concepts that will revolutionize our understanding of the universe.”

In order to reach these goals, the LIGO and Virgo detectors undergo regular upgrades between observing runs. “One of my personal favourites,” explains Stuver, “has been the development of injecting squeezed light states into the detectors to decrease the shot (quantum) noise and increase the sensitivity of LIGO.” There are also new detectors on the horizon. KAGRA, an interferometer built inside the Kamioka Observatory in Japan, is set to join LIGO and Virgo by the end of the current observing run, and by around 2025 the LIGO-India observatory should also be operational. “Having more detectors will allow us to better localize the source location on the sky and resolve gravitational wave polarization information,” Stuver says. Then there’s LISA – ESA and NASA’s space-based gravitational-wave observatory, which will build on the success of LISA Pathfinder and LIGO. LISA will look for gravitational waves with much longer wavelengths than the Earth-based observatories can detect, opening up the range of possible sources.

There’s no denying that it’s been a very successful few years for the field of gravitational-wave observation and multimessenger astronomy – and there’s bound to be a lot more. “I am so awed that I have been privileged to be a part of this amazing collaboration of scientists from around the world,” Stuver concludes, “and can’t wait for what we will discover in the future.”

Undersea optical fibre monitors seismic activity and ocean waves

The expansive global network of undersea fibre-optic telecommunications cables could be used to measure offshore earthquakes and reveal geological structures beneath the ocean. That is the claim of researchers in the US, who demonstrated the concept by using a section of cable in California’s Monterey Bay to detect a small earthquake. They were also able to map out previously unknown geological faults and detect the seismic signals generated by ocean waves.

Around 70% of the Earth’s surface is covered by oceans and underwater seismic signals could provide information about offshore seismic hazards, underwater volcanoes, ocean movements and marine life. Today, however, our capacity to measure these signals is very limited and relies mainly on expensive and difficult-to-place monitoring stations on the ocean floor – which form a very patchy network.

“There is a huge need for sea floor seismology. Any instrumentation you get out into the ocean, even if it is only for the first 50 km from shore, will be very useful,” says team member Nate Lindsey of the University of California, Berkeley.

In their new study, Lindsey and colleagues teamed up with Craig Dawe of the Monterey Bay Aquarium Research Institute, which owns a 51 km underwater fibre-optic cable that was placed on the seabed in 2009 to link to the central node of a network of oceanographic sensors. At the time of the study, in March last year, the cable was offline for its annual maintenance, freeing it up for the team’s seismic experiment.

Distributed acoustic sensing

To convert the cable into a seismic measuring tool, the team used a technique known as distributed acoustic sensing. This involves sending short pulses of laser light along the cable and detecting the light that is backscattered from within the fibre. If the cable is stretched or deformed – even by a tiny amount – the backscattered signal changes, revealing where and by how much.

Using interferometric techniques, the researchers could measure the backscatter from individual 2 m sections of cable. In this way, the team used the first 20 km of the cable to create a sequence of 10,000 individual motion sensors that can monitor seismic activity and environmental noise in the ocean with an unprecedented resolution and scale.

“These systems are sensitive to changes of nanometres down to hundreds of picometres for every metre of length,” says team member Jonathan Ajo-Franklin, who is based at Lawrence Berkeley National Laboratory. He adds: “That is one part in a billion.”
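The quoted numbers hang together in a simple way, as this back-of-the-envelope check shows (our arithmetic, using the figures in the article):

```python
# Back-of-the-envelope check of the figures quoted in the article
cable_used_m = 20_000            # the first 20 km of the 51 km cable
gauge_m = 2                      # backscatter resolved over individual 2 m sections
print(cable_used_m // gauge_m)   # -> 10000 virtual motion sensors

stretch_m = 1e-9                 # ~1 nm of deformation...
over_length_m = 1.0              # ...per metre of fibre
print(stretch_m / over_length_m) # -> 1e-09, i.e. a strain of one part in a billion
```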

Using the cable, the researchers recorded a broad range of seismic activity across four days in the Monterey Bay area – including a magnitude 3.4 earthquake with an epicentre 45 km inland. They also recorded ocean and storm waves, measurements of which were shown to match recordings from buoys and land-based seismometers. The team was even able to use seismic scattering to map out the local San Gregorio Fault system and detect previously unknown faults within it.

“Frontier of seismology”

“This really is a study on the frontier of seismology, the first time anyone has used offshore fibre-optic cables for looking at these types of oceanographic signals or for imaging fault structures,” said Ajo-Franklin.

“This research shows the promise of using existing fibre-optic cables as arrays of sensors to image in new ways,” said Berkeley geophysicist Michael Manga, who was not involved in the study. “Here, they’ve identified previously hypothesized waves that had not been detected before.”

The approach will also be of great interest to volcanologists, notes Helmholtz Centre Potsdam geoscientist Philippe Jousset. “80% of volcanic activity occurs in the oceans,” he explains. “Because of inaccessibility, little is known about the characteristics of volcano structures and what they exude.”

According to Ajo-Franklin, the ultimate goal is to use the millions of kilometres of fibre-optic cables that weave their way across the world – both on land and underwater – to form a global seismic tracking network. This would be a low-cost way to extend our geophysical monitoring capacity not only to the oceans, but also to those regions on land that, unlike earthquake-prone California, do not have their own abundance of expensive seismic ground stations.

The team must first, however, show that their approach can be deployed on cables that are in use, rather than sitting dark like the Monterey cable. They will have to demonstrate that the laser pulses can travel through the cables without disrupting normal data transmission. The researchers are also doing further analysis of their Monterey Bay data and planning another test of the approach in a geothermally-active area south of California’s Salton Sea.

The research is described in Science.

Closed-loop control may improve quality-of-life for diabetic patients

Researchers from the University of Virginia have published the latest results from the International Diabetes Closed-Loop trial (N. Engl. J. Med. 10.1056/NEJMoa1907863). Their findings suggest that applying a closed-loop approach to the administration of insulin in patients with type 1 diabetes could be a viable alternative to current methods. This could change the life of patients for whom the automatic and efficient monitoring of glucose levels is paramount to control the numerous complications associated with the progress of their condition.

The burden of diabetes worldwide has quadrupled in the last 40 years, particularly in middle- to low-income countries, and is expected to continue to rise in coming years. As a consequence, the incidence of diabetes-related blindness, kidney failure, heart attacks, stroke and lower-limb amputation is likely to keep increasing too. Fortunately, according to the World Health Organization, diabetes complications are treatable and can be avoided or delayed through a correct diet, physical activity and medication of diabetes-related complications as they appear.

However, despite the constant efforts of physicians and researchers, maintaining a steady glycaemic level in patients affected by diabetes remains elusive. The current gold standard for diabetes monitoring and treatment consists of continuous glucose monitoring and sensor-augmented insulin pumps. These allow the patient to check glycaemia, regulate the basal insulin release from the pump and program the release of additional insulin doses – called boluses – before or after meals.

In this randomized, six-month long study, the researchers assessed the efficiency of a recently commercialized, automated closed-loop system called Control-IQ for the release of insulin. The system integrates live glucose monitoring with an insulin release pump for long-term modulation of glucose level.

More efficient than existing devices

The trial showed highly encouraging results. The researchers compared the length of time that the novel device kept patients’ glucose levels within a safe range (70 to 180 mg/dl) with that of another commonly used insulin pump. They estimated that patients receiving insulin through the new closed-loop system spent, on average, 2.6 hours more per day within the normal glucose range than the control group.

The study also examined a plethora of secondary outcomes, including mean glucose level, glycated haemoglobin, time spent in hypoglycaemia (glucose levels below 70 mg/dl) or severe hypoglycaemia (below 54 mg/dl), and time spent in hyperglycaemia (above 180 mg/dl). Importantly, the closed-loop system was superior to the insulin pump in all of these secondary outcomes, too.
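All of these time-based endpoints boil down to the fraction of continuous glucose monitor (CGM) readings falling in each band. Here is a minimal sketch with simulated readings – the data and function names are ours, with only the mg/dl cut-offs taken from the trial:

```python
import random

random.seed(1)
# One hypothetical day of CGM readings, one sample every 5 minutes (288 samples)
readings = [random.gauss(150, 45) for _ in range(288)]

def hours_per_day(values, keep):
    """Hours per day for which readings satisfy the predicate `keep`."""
    return 24 * sum(keep(v) for v in values) / len(values)

print(f"in range (70-180 mg/dl): {hours_per_day(readings, lambda v: 70 <= v <= 180):.1f} h")
print(f"hypoglycaemia   (<70)  : {hours_per_day(readings, lambda v: v < 70):.1f} h")
print(f"hyperglycaemia  (>180) : {hours_per_day(readings, lambda v: v > 180):.1f} h")
```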

On the other hand, the tested closed-loop device showed an increased number of adverse events. Patients using this system were significantly more subject to hyperglycaemia or ketosis events (without diabetic ketoacidosis) than the control group. The team attributed these events to failures in the infusion set, and noted that the difference could partly reflect the different requirements for adverse-event reporting between the two groups, as the insulin pump in the closed-loop system was part of an investigational device.

Strength in numbers

One strength of this study was the large number of participants, representing a wide range of variables: they were aged from 14 to 71 years old; had had diabetes for one to 62 years; and their baseline glycated haemoglobin ranged from 5.4 to 10.6%. The authors concluded that the tested closed-loop device led to a significant improvement of glycaemia levels and glycated haemoglobin during the six-month trial, suggesting that closed-loop systems may provide an interesting option to consider when deciding on a patient’s diabetes treatment.

Flying the flag for wind energy

Wind energy is on the up globally, with 1000 GW likely to be in place soon. Longer term, if on-land and offshore sites get congested, there is the prospect of using airborne wind-energy devices, such as tethered kites, that operate at high altitude. Wind speed increases with height, and one quite cautious study suggested that around 7.5 TW might be available globally. Others have put the total upper-atmosphere resource, including jet streams, much higher at 1400 TW. There is certainly a lot of energy up there.

However, extracting even a small part of it would be very difficult. If significant amounts of energy were somehow obtained, it could have large environmental and climate impacts, for example on the jet stream that shapes the weather system. No one is currently proposing interventions at that level. But extracting energy from lower down in the atmosphere – perhaps up to about half a kilometre – is possible and would still offer a very large new resource, possibly four times current wind-production capacity.

Tethered to the ground

The US government has been looking seriously at the idea of airborne wind energy, as have several large companies. One early US “flying wind” design was proposed by SkyWindPower Corporation. The rotors of a drone-like system provide lift that is aided by a small aerofoil. By pulling against a ground-linked tether, the device generates on-board power that is then sent to the ground along the tether cable. With the device flying at around 600 m, the cable has to be strong enough to resist high drag forces.

In 2014, Google invested in another airborne power-generator design, developed by US company Makani. Its 600 kW prototype – the Airborne Wind Turbine (AWT) – has an aerofoil-section wing that is tethered to the ground. It flies in a large vertical circle in the stronger winds at 250–600 m altitude, with rotors acting as both turbines and propellers, as in the SkyWindPower design. Direct-drive generators then send electricity down its tether to a ground station. The AWT can be reeled in and then reeled out again during extended periods of low wind, bad weather or for maintenance.

Makani says that, given the higher wind speeds at altitude, its AWT will deliver about twice the energy per unit of capacity of conventional turbines (i.e. its load factor would be around 60% instead of 30%). This would allow it to produce power at half the cost, and with no tower it would use 90% less material, so reducing capital costs.
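The load-factor claim translates directly into annual energy yield, as a quick illustrative calculation shows (our arithmetic, using the article’s round numbers):

```python
# Annual energy yield per kilowatt of installed capacity
HOURS_PER_YEAR = 8760
for label, load_factor in [("conventional turbine", 0.30), ("Makani AWT (claimed)", 0.60)]:
    kwh_per_kw = load_factor * HOURS_PER_YEAR
    print(f"{label:21s}: {kwh_per_kw:4.0f} kWh per kW per year")
# -> 2628 vs 5256 kWh: twice the energy for the same installed capacity
```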


A different “reel-in-and-out” tethered-kite approach has also been developed by the US-Indian company Skymill. This has a giant “autogyro” rotor that is tethered to the ground and operates like a yo-yo – rising and falling to rotate a winch drum on the ground that generates power. Something similar has also been developed and tested by Kite Gen in Italy, with a large horseshoe-shaped, hang-glider-like kite made to swoop around in a giant circular pattern, differentially tugging on a winch-linked tether to generate power on the ground. With the (heavy) generator on the ground and less mass in the sky, this type of configuration might be more economic than the sky-borne rotor-generator systems.

Longer term, some look to large arrays of Kite Gen-type systems, as in Connaught Energy’s 32 MW concept, which could offer a much better “energy return on energy invested” (EROEI) thanks to the lower material requirement. Kite Gen has talked of EROEIs of maybe 100:1, and perhaps 300:1 or more, compared with around 80:1 for the best conventional surface-wind turbines.

There will, however, be issues of secure tethering and power take-off, as well as a need to avoid conflicts with air traffic and to manage crash risks. But in some remote locations – especially offshore – that may not be a problem, and major energy companies like Shell and E.ON are taking an interest in offshore applications.

Indeed, Shell has backed a UK variant developed by Kite Power Systems, in which a giant hang-glider-like kite pulls on a long tether that – as with Skymill and Kite Gen – spools out from a drum on the ground, which rotates to generate power. In the Kite Power version, when the kite reaches full operating altitude (perhaps 300 m or so), it is stalled and retracted to a low level, with the winch system pulling it down and taking up the slack so it is ready for another pass up to full altitude. A second kite, meanwhile, rises so that the double-kite system can generate net power continuously. Following tests in Scotland of a 500 kW system, there are plans for onshore and offshore 3 MW systems.

There are also some hybrid versions. Swiss start-up Skypull has developed a multi-copter drone that can take off and land by itself, with no need for a launcher or ground wind. The take-off is battery powered, but once in the air the battery is recharged every time the kite loops back down towards the ground, flying at between 200 and 600 m.

Niches and beyond

Flying wind may at first glance seem unlikely, but some initial sceptics have been converted and there seems to be real potential for novel ideas. Certainly, there are many promising contenders. Engineering consultants IDTechEx examined the prospects for 19 of the most interesting systems and forecast the first sales of 30–100 kW systems. They even say that airborne wind energy “will be a multi-billion-dollar business in 20 years”, with scope for more beyond that.

It may be that airborne devices will initially be limited to niche applications. For example, there have been designs for dirigible balloon-type devices, which could be used on- or offshore. In one variant, which looks suited for use in remote locations, a ‘doughnut’-shaped balloon has a central ducted rotor.

However, some analysts are more optimistic about the wider prospects for flying systems. There certainly are some fundamental attractions. In a conventional wind turbine, most of the energy is captured by the outer tips of the blades, which move at the highest speed. The rest of the blade is less productive, mostly just supporting the high-yield tip sections. That, along with the tower, adds a lot of expensive mass.

By contrast, flying devices need no towers, and in reel-in-and-out kite systems all the aerofoil surfaces operate at the same high speed. Although the aerofoil-mounted drone-turbine configuration has weight penalties, it has the advantage that it can fly itself into position in high-wind-speed zones. Unlike with kites, you don’t have to wait for strong ground winds for lift-off or to tow it into place.

There is still some way to go with these novel technologies, but if the risks to aircraft and people can be avoided, it really could be the case that the sky is no limit.

Extremely absurd and incredibly fun


In the 1993 Steven Spielberg-directed film Jurassic Park, scientists clone dinosaurs using prehistoric DNA, secured from ancient mosquitoes that have been trapped in amber for 65 million years. In the film, mathematician Dr Ian Malcolm, played by Jeff Goldblum, argues with the creator of the dinosaur-filled theme park, certain that unintended consequences will wind up biting them – literally. His concern is neatly summarized in his admonishment: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”

It’s a good thing the fictional Dr Malcolm never met the real-world Randall Munroe, whose latest book is How To: Absurd Scientific Advice for Common Real-World Problems. Munroe, creator of the webcomic xkcd and the author of What If? and Thing Explainer, would no doubt consider the Jurassic Park scientists to be timid amateurs. How To provides clear and credible advice on such risky endeavours as “How to mail a package (from space)”, “How to power your house (on Mars)” and “How to build a lava moat (no emphasis needed)”.

Munroe has a degree in physics, and worked as a programmer and roboticist for NASA before becoming a full-time cartoonist with xkcd, and he puts his technical and artistic backgrounds to good use in How To. Each chapter poses a scenario, such as “how to cross a river”, that leads to some rather unconventional advice. Yes, you can walk or swim across a river if it is not too deep or wide, but have you considered inducing a phase change, so that the river is either frozen solid or boiled to steam? How much energy would such “solutions” require, and how would one go about accomplishing them? The spirit of the book is summarized in a cartoon in the very first chapter, where two of xkcd’s stick figures converse: “I have an idea,” says one, and when the second asks: “Is it bad?” the first replies: “Extremely.”
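In the book’s spirit, the freezing option yields to a back-of-the-envelope estimate. The numbers below are entirely our own illustrative choices for a modest river reach, not Munroe’s:

```python
# Energy needed to freeze a short stretch of river solid (illustrative numbers)
width_m, depth_m, length_m = 50, 3, 100          # a made-up 100 m reach
mass_kg = 1000 * width_m * depth_m * length_m    # water density ~1000 kg/m^3

c_water = 4186            # J/(kg K): first cool the water to 0 C...
latent_fusion = 334_000   # J/kg: ...then freeze it
energy_J = mass_kg * (c_water * 10 + latent_fusion)   # assume the water starts at 10 C

print(f"{energy_J:.1e} J (~{energy_J / 3.6e12:.1f} GWh) -- and the river keeps flowing in")
```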

While it is not a graphic novel, Munroe does not skimp on the illustrations, which are extremely helpful in clarifying the scientific arguments, often making a point more powerfully than text alone could (though many are there just for welcome comic effect). Nor does he shy away from providing physics formulae as he addresses his absurd situations. In this way the book functions as a creative physics (and engineering) primer, with plenty of examples for any physics instructor who is challenged by students wondering when they will ever use this stuff in their “real life”. In the chapter “How to throw things”, where Munroe considers the (possibly authentic) tale of George Washington throwing a silver dollar across a large river, he concludes: “It’s remarkable that we can get even vaguely realistic answers about a complex physical action like ‘throwing’ using so few pieces of elementary physics.”

Similarly, the next chapter on “How to play football” leads to a discussion of mean free paths, as one tries to avoid those members of the opposing team who might interfere with your attempts to kick the ball into the goal. This turns, naturally enough, into a discussion of using a horse to cross the pitch (Munroe helpfully points out that this is not strictly forbidden by FIFA’s rules), which morphs into a consideration of a scene from the Lord of the Rings film trilogy where a horseback rider traverses a sea of orcs. Munroe calculates the effective drag of the orcs in order to determine how much energy (in units, of course, of horsepower) such a feat would require. The argument is so entertaining that you hardly notice that you are learning physics at the same time.

When he cannot answer his “how to” questions with back-of-the-envelope calculations, Munroe consults experts, often to great effect. In the chapter “How to make an emergency landing”, he pitches a barrage of increasingly ridiculous scenarios to Colonel Chris Hadfield, a Royal Canadian Air Force fighter pilot, test pilot for the US Navy and veteran astronaut. No matter what Munroe asks (How to land on a farm? How to land on a ski jump? How to land on a train?), Hadfield provides thoughtful and realistic answers. Munroe muses: “In retrospect, my plan to fluster an astronaut by throwing extreme situations at him might have been flawed.”

The final full chapter, “How to dispose of this book”, illustrates the practical side of Munroe’s fanciful scenarios. Sure, you could burn the book or throw it in the ocean, but what if the book is indestructible or cursed, such that you could not destroy it no matter how much you wanted to be rid of it? Before you protest that this is nonsense, Munroe points out that this is exactly the conundrum faced with radioactive nuclear waste. What follows is a rather serious discussion of the various nuclear-waste disposal options available, from burying it in a very deep hole to flinging it into the Sun, weighing all the pros and cons.

Such a book and approach to our real-world problems is timely and valuable, for in order to address what we should, we must determine what we could.

  • 2019 Riverhead Books 320pp £16.99hb