
First winners of Bell Burnell fellowships announced by the Institute of Physics

Four female physics students have been awarded a new fellowship that seeks to promote diversity in physics. The Bell Burnell Graduate Scholarship Fund, awarded by the Institute of Physics (IOP), which publishes Physics World, aims to do so by supporting students to do a PhD in the subject.

The new fund aims to help students from groups currently under-represented in physics, including female students, black and other minority ethnic students, people with refugee status and students from disadvantaged backgrounds. It was set up in 2019 after astrophysicist Jocelyn Bell Burnell — a former IOP President — donated £2.3m to the IOP: the prize money she received on being awarded the 2018 Special Breakthrough Prize in Fundamental Physics for her role in the discovery of pulsars in the 1960s.

To study for a PhD requires considerable commitment, but some students also face additional challenges and barriers that require them to demonstrate even more resilience if they are to succeed

Rachel Youngman

The first awardees of the fund are astrophysics PhD students Joanna Sakowska from the University of Surrey and Tracy Garratt from the University of Hertfordshire, as well as physicist Katarina Mamic from Lancaster University and medical physicist Kiri Newson from the University of Hull. The host university or institution will now pay at least 50% of the full costs of each student's doctoral programme, with the scholarships providing support for course fees and living costs, as well as any additional funding needed to support accessibility, including support for carer responsibilities.

“These four talented and deserving students are embarking on exciting opportunities in physics research that might otherwise have been denied them,” says Rachel Youngman, deputy chief executive of the IOP. “To study for a PhD requires considerable commitment, and physics is certainly no exception, but some students also face additional challenges and barriers that require them to demonstrate even more resilience if they are to succeed.”

The awardees were chosen by a panel chaired by Helen Gleeson, a soft-matter physicist from the University of Leeds. “The competition was very tough, with a large number of eligible students who could benefit from sources of funding such as this,” says Gleeson. “The successful applicants are embarking on exciting research projects, and we are looking forward to them inspiring others in their ambassadorial role.”

X-ray ghost imaging goes energy-selective

Researchers in China have used an imaging approach known as X-ray ghost imaging (XGI) to obtain spectral images of an object using a single-pixel detector. This technique could find applications in a host of different fields, including biological and medical imaging, materials science and environmental sensing.

Unlike conventional cameras, ghost imaging does not directly capture an image of an object. Instead, it reconstructs the image from the correlations between the light that the object reflects or transmits and a series of “speckle patterns” used to illuminate it. In classical ghost imaging, these patterns are produced by two beams: one that encodes a random pattern that acts as a reference, but which doesn’t directly probe the sample, and another that passes through the sample. The two beams both acquire partial information about the object, but neither one alone can form a complete image.

Modulating mask

In their work, researchers led by Ling-An Wu and Li-Ming Chen of the Institute of Physics, Chinese Academy of Sciences, Beijing, began by modulating an X-ray beam using the patterns in a 2-inch square mask made of gold and placed about 45 cm from the X-ray source. They then passed the resulting structured beam through their sample, which was located about 1 cm away from the mask. Since the sample is so close to the mask, the pattern of X-rays projected through the sample is the same as the pattern in the mask itself – a prerequisite for ghost imaging in this work. This orientation also ensures that the resolution is practically the same as the 10 μm pixel size of the mask.

To block unwanted radiation, the researchers inserted a 3-mm-thick square aperture made of copper between the object and the detector. To collect their data, they placed a “bucket” spectrometer (comprising a sensor, signal amplifier, digital pulse processor and specialized software) 25 cm behind this aperture.

Wide range of different X-ray energies

The 5 × 5 mm² sensor efficiently and simultaneously detects a wide range of X-ray energies (from 3 to 45 keV) as they are transmitted through the sample with a resolution of 1.5 keV, Wu and colleagues say. The key component of this single-pixel detector is a 1-mm-thick cadmium telluride diode that produces current pulses proportional to the energy of the incident X-ray photons. These analogue intensity signals are then converted to digital pulses of different heights. Next, the output signals are reshaped and amplified, before being processed to produce an energy spectrum.

After recording all this spectral information, the researchers built up X-ray ghost images of the sample by measuring the correlations between different intensities of light transmitted by the object through the mask. This is known as second-order correlation, and is a routine technique in ghost imaging.
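For readers who want to see how such a reconstruction works in principle, the short Python sketch below builds a ghost image from simulated data by correlating a set of known mask patterns with the corresponding single-pixel "bucket" signals. The patterns, object and array sizes are purely illustrative and are not the Beijing group's actual data or code.

```python
import numpy as np

# Minimal sketch of second-order-correlation ghost imaging (illustrative only).
# Each "speckle pattern" is a known binary mask; the single-pixel "bucket"
# signal is the total intensity transmitted through the object for that pattern.
rng = np.random.default_rng(0)
n_pix, n_patterns = 32 * 32, 4000

patterns = rng.integers(0, 2, size=(n_patterns, n_pix)).astype(float)

# Toy object: a transmission map with a bright square in the middle.
obj = np.zeros((32, 32))
obj[12:20, 12:20] = 1.0
obj = obj.ravel()

bucket = patterns @ obj  # one bucket value per illumination pattern

# Second-order correlation: <B * I(x)> - <B><I(x)>, averaged over patterns.
ghost = (bucket[:, None] * patterns).mean(axis=0) - bucket.mean() * patterns.mean(axis=0)
image = ghost.reshape(32, 32)  # the correlations reconstruct the object
```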

The technique is the first demonstration of energy-selective X-ray ghost imaging, where spectral and intensity information can be obtained at the same time without affecting each other, Wu says. The X-ray source the Beijing group employed is also simpler than the synchrotron radiation sources used in previous studies. “Our source is a conventional table-top X-ray tube that emits polychromatic rays of a much lower intensity,” she explains. “The key component in our experiments is, in fact, our specially-fabricated etched gold modulation mask. With modern micromanufacturing techniques, we can etch most of the patterns in the mask as designed. We characterized ours using scanning electron microscopy, from which we confirmed that the pixel size was indeed 10 μm.”

Attractive for medical imaging

Since X-ray ghost imaging can be performed with much lower levels of radiation than traditional X-ray imaging, it should be attractive for medical imaging and analysing in vivo samples, she tells Physics World. “The technique acquires images according to the spectral fingerprint of the object, so different tissue layers can be distinguished more easily and without damage.”

Other application areas include analysing mineral ores and archaeological samples, she adds.

The Beijing team’s members, who report their work in Chinese Physics Letters (which is published jointly by IOP Publishing and the Chinese Physical Society), say they now plan to analyse real biological samples and other materials. “Analysing the X-ray signals from stellar objects is also another possibility,” Wu adds.

Piezoelectric feedback boosts infrared atomic force microscopy

A new closed-loop microscopy technique that can detect chemical compositions at the nanoscale and at high sensitivity has been unveiled by Rohit Bhargava and colleagues at the Beckman Institute for Advanced Science and Technology at the University of Illinois. Their design relies on a piezoelectric material that responds to the voltages produced when a sample is probed by a cantilever. Their approach could enable researchers to precisely measure the spectra of a wide range of nanomaterials, including molecular-scale biological samples.

Based on atomic force microscopy (AFM), the technique, known as AFM-IR, involves firing an infrared (IR) laser at a sample, then measuring the resulting thermal expansion using the sharp cantilever tip of an AFM. This is a very useful measurement because the thermal response of a material is characteristic of its chemical composition.

AFM-IR is now widely used to study the spectra of nanomaterials, but it still faces a major challenge in dealing with unknown sources of vibrational noise in the cantilever, which can overwhelm the signal. This has previously been addressed by placing samples on specialized substrates or by using specialized sample-preparation methods – but these solutions tend to limit the versatility of the technique.

AFM-IR diagram

 

Closed-loop system

In their study, Bhargava’s team addressed the noise issue by placing a piezoelectric material under the substrate, which changed shape when a voltage was applied to it. In this case, the voltage was determined by the movements of the cantilever. Therefore, the change in shape of the piezoelectric offset the expansion of the irradiated sample, creating a closed-loop system that eliminated the cantilever’s movement. This meant that the chemical composition of the sample could be measured by monitoring the voltage applied to the piezoelectric, while any noise-inducing vibrations were minimized.
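The sketch below illustrates that closed-loop principle with a toy simulation: an integral feedback loop adjusts the voltage on a piezo under the substrate until the cantilever deflection is nulled, so the recorded voltage tracks the sample's photothermal expansion. All of the numbers (expansion amplitude, piezo gain, loop gain) are assumed for illustration and do not represent the Illinois group's actual controller.

```python
import numpy as np

# Toy closed-loop AFM-IR readout (all parameter values are assumed).
n_steps = 2000
t = np.linspace(0, 1, n_steps)
expansion = 1e-9 * (1 + np.sin(2 * np.pi * 5 * t))   # sample photothermal expansion (m), assumed
piezo_gain = 2e-10        # piezo displacement per volt (m/V), assumed
k_i = 0.5                 # integral loop gain, assumed

voltage = 0.0
voltages = np.zeros(n_steps)
for i in range(n_steps):
    deflection = expansion[i] - piezo_gain * voltage   # what the cantilever would see
    voltage += k_i * deflection / piezo_gain           # integral action drives deflection to zero
    voltages[i] = voltage

# In the closed loop the recorded voltage tracks expansion / piezo_gain,
# so the IR absorption signal is read from `voltages`, not from cantilever motion.
```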

With a standard, commercial AFM-IR instrument, Bhargava and colleagues used their closed-loop approach to measure the composition of a 100 nm-thick acrylic film placed on a glass substrate. They then repeated the measurement using a gold substrate. In each case, their results agreed well with spectroscopic measurements made with other techniques. The team also created accurate maps of localized infrared absorption in a 4 nm-thick acrylic film deposited on silicon, with little contribution from effects outside the closed loop. Since glass and silicon are both popular choices of substrate for nanomaterials researchers, the demonstrated versatility of the approach could remove the need for strict sample-preparation measures.

Bhargava’s team now hopes that the sensitivity of their closed-loop approach will allow AFM-IR to analyse a far more diverse range of materials, with far smaller volumes. If applied, it could allow researchers to probe materials including the complex mixtures present in cell membranes, and behaviours including the deformation of protein molecules.

The new technique is described in Nature Communications.

Finding a consistent constant

When astronomer Edwin Hubble realized that the universe is expanding, it was the greatest cosmological discovery of all time. The breakthrough acted as a springboard to learning about the age of the cosmos, the cosmic microwave background (CMB) radiation, and the Big Bang.

Hubble was the right man in the right place at the right time. In April 1920 his fellow astronomers Harlow Shapley and Heber D Curtis famously debated the size of the universe, and the nature of spiral nebulae, at the Smithsonian Museum of Natural History in Washington, DC. Within four years, Hubble had the answer for them.

Using the 100-inch (2.5 m) Hooker Telescope at Mount Wilson Observatory in California, Hubble was able to identify Cepheid variables – a type of star whose period–luminosity relationship allows accurate distance determination – in the spiral nebulae, which allowed him to show that the nebulae are extragalactic. The finding meant the Milky Way was not the entire universe, but just one galaxy among many.

Soon, Hubble had discovered that almost all of these galaxies are moving away from us – their light is redshifted via the Doppler effect. These observations were seized upon by the Belgian cosmologist Georges Lemaître, who realized that they implied that the universe is expanding. Independently, both Hubble and Lemaître derived a mathematical relationship to describe this expansion, subsequently known as the Hubble–Lemaître law. It says that the recession velocity (v) of a galaxy is equal to its distance (D) multiplied by the Hubble constant (H0), which describes the rate of expansion at the current time.
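In symbols, and plugging in a purely illustrative distance, the law reads:

$$ v = H_0 D, \qquad \text{e.g. } D = 100~\mathrm{Mpc},\; H_0 = 67.4~\mathrm{km\,s^{-1}\,Mpc^{-1}} \;\Rightarrow\; v \approx 6740~\mathrm{km\,s^{-1}}. $$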

In the nearly 100 years since Hubble’s discovery, we’ve built up a detailed picture of how the universe has developed over time. Granted, there are still some niggles, such as the identities of dark matter and dark energy, but our understanding reached its peak in 2013 with the results from the Planck mission of the European Space Agency (ESA).

Launched in 2009, Planck used microwave detectors to measure the anisotropies in the CMB – the slight temperature variations corresponding to small differences in the density of matter just 379,000 years after the Big Bang. The mission revealed that only 4.9% of the universe is formed of ordinary baryonic matter. Of the rest, 26.8% is dark matter and 68.3% is dark energy. From Planck’s observations, scientists also deduced that the universe is 13.8 billion years old, confirmed that the universe is flat, and showed how baryonic acoustic oscillations (BAOs) – sound waves rippling through the plasma of the very early universe, resulting in the anisotropies – neatly matched the large-scale structure of matter in the modern universe, which we know has expanded according to the Hubble–Lemaître law.

Planck 2018 CMB map

Planck’s view of the CMB is the most detailed to date. Plus, by fitting our best cosmological models to Planck’s observations, a whole host of cosmological parameters have come tumbling out, including H0. In fact, scientists were able to determine the value of H0 with the greatest precision ever, finding it to be 67.4 km/s/Mpc with an uncertainty of less than 1%. In other words, every stretch of space a million parsecs (3.26 million light-years) wide is expanding by a further 67.4 km every second.

With Planck’s results, we thought that this picture was complete, and that we knew exactly how the universe had expanded over those 13.8 billion years. Yet it turns out that we may be wrong (see box “Not a constant constant over time”).

Not a constant constant over time

cosmic distance ladder

Describing H0 as “the Hubble constant” is a little bit of a misnomer. It is true that it is a constant at any given point in time since it describes the current expansion. However, the expansion rate itself has changed throughout cosmic history – H0 is just the current value of a broader quantity that we call the Hubble parameter, H, which describes the expansion rate at other times. Therefore, while the local measurement, H0, should only have one value, H can take on different values at different points in time. Prior to the discovery of dark energy, it was assumed that the expansion of the universe was slowing down, and hence H would be larger in the past than in the future. On the face of it, since dark energy is accelerating the expansion, you might therefore expect H to increase with time, but that’s not necessarily the case. Rearranging the Hubble–Lemaître law such that H = v/D, where v is an object’s recession velocity, we see that H depends strongly on distance D, since in a universe with accelerating expansion, the distance is increasing at an exponential rate. The bottom line is that H is likely to be decreasing with time, which is what we think is happening, even though the recession velocity and hence the expansion are still increasing with distance.
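To make this concrete, the toy calculation below uses the standard flat-ΛCDM form of the Friedmann equation with Planck-like (but purely illustrative) parameter values; it shows that H was larger in the past and settles towards a constant in the far future, exactly as argued above.

```python
import numpy as np

# Toy flat-LambdaCDM calculation (parameter values are Planck-like but illustrative).
# The Hubble parameter at redshift z is
#   H(z) = H0 * sqrt(Omega_m * (1 + z)**3 + Omega_Lambda),
# so H was larger in the past (high z) and tends towards the constant
# H0 * sqrt(Omega_Lambda) in the far future, even though the expansion
# of distances continues to accelerate.
H0, omega_m, omega_lambda = 67.4, 0.315, 0.685

def hubble(z):
    return H0 * np.sqrt(omega_m * (1 + z) ** 3 + omega_lambda)

for z in [10, 1, 0.5, 0, -0.5]:   # negative z stands in for the future
    print(f"z = {z:>4}: H = {hubble(z):6.1f} km/s/Mpc")
```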

Moving parts

Traditionally, H0 is determined by measuring the distance and recession velocity of galaxies using “local” standard candles within those galaxies. These include type Ia supernovae – the explosion of white-dwarf stars with a certain critical mass – and Cepheid variable stars. The latter have a robust period–luminosity relationship (discovered by Henrietta Swan Leavitt in 1908) whereby the longer the period of the star’s variability as it pulses, the intrinsically brighter the star is at peak luminosity. But as astrophysicist Stephen Feeney of University College London explains, these standard candles have many “moving parts”, including properties such as stellar metallicity, differing populations at different redshifts, and the mechanics of Cepheid variables and type Ia supernovae. All these moving parts lead to uncertainties that limit the accuracy of the observations, and we have seen over the years how the resulting calculations of H0 can vary quite a bit.
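The distance-ladder arithmetic itself is simple; the difficulty lies in calibrating it. The sketch below shows the basic step for a single Cepheid, using an assumed period-luminosity (Leavitt) law with made-up coefficients; real calibrations depend on passband, metallicity and other "moving parts", which is exactly Feeney's point.

```python
import numpy as np

# One rung of the distance ladder, with purely illustrative numbers.
# A Leavitt law gives a Cepheid's absolute magnitude M from its pulsation
# period P; comparing with its apparent magnitude m gives the distance.
a, b = -2.43, -4.05        # assumed period-luminosity coefficients (illustrative)
period_days = 30.0         # measured pulsation period
m_apparent = 25.3          # measured mean apparent magnitude

M_absolute = a * (np.log10(period_days) - 1.0) + b   # assumed Leavitt-law form
mu = m_apparent - M_absolute                          # distance modulus m - M
distance_pc = 10 ** (mu / 5 + 1)                      # from m - M = 5 log10(d) - 5

print(f"distance ≈ {distance_pc / 1e6:.1f} Mpc")
```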

cepheid variable

On the other hand, Planck’s value of H0 is a relatively straightforward measurement – even if it does depend on the assumption that lambda cold dark matter (ΛCDM) cosmology – which incorporates the repulsive force of dark energy (Λ) and the attractive gravitational force of cold dark matter (CDM) – is correct. Still, when all the known uncertainties and sources of error are taken into account, Planck’s value of H0 is determined to a precision of about 1%, better than ever before.

Although the Planck value of H0 is calculated from measurements of the CMB, it is important to note that it is not the expansion rate at the time of the CMB’s creation. Rather, “think of the CMB measurement as a prediction instead”, says Feeney. It extrapolates from what the CMB tells us the universe was like 379,000 years after the Big Bang, factoring in what we know of how the universe would have expanded in the time since based on the Hubble–Lemaître law and ΛCDM, to arrive at an estimate of how fast the universe should be expanding today. In other words, however we measure H0, whether it be with the CMB or more local measurements of Cepheid variables and supernovae, we should get the same answer.

A spanner in the works

In 2013, when the Planck measurement was revealed, this wasn’t a problem. Although local measurements differed from the Planck value, their uncertainties were still large enough to accommodate the differences. The expectation was that, as the uncertainties became smaller over time with more sophisticated measurements, the locally measured value would converge on the Planck value.

However, in 2016 a major milestone was reached in local measurements of H0 that brought our understanding of the universe into question. It involved Adam Riess of Johns Hopkins University, US, who in 1998 had co-discovered dark energy using type Ia supernovae as standard candles to measure the distance to receding galaxies based on how bright the supernovae appeared. He was now leading the cumbersomely named SH0ES (Supernova, H0, for the Equation of State for dark energy) project, which was set up to calibrate type Ia supernovae measurements to determine H0 and the behaviour of dark energy. In order to provide this calibration, the SH0ES team used a lower rung on the “cosmic distance ladder”, namely Cepheid variables. The project aimed to identify these pulsating stars in nearby galaxies that also had type Ia supernovae, which would mean that the Cepheid distance measurement could then be used to calibrate the supernova distance measurement, and this new calibration could in turn be used on supernovae in more distant galaxies. The method produced a value of H0 with an uncertainty of just 2.4%.

Gaia SNIa artist impression

Riess and his group were stunned by their result, however. Using the local cosmic distance ladder, they inferred a value for H0 of 73.2 km/s/Mpc. With the vastly reduced uncertainties, there was no way this could be reconciled with Planck’s measurement of 67.4 km/s/Mpc. If these results are correct, then there is something deeply wrong with our knowledge of how the universe works. As Sherry Suyu of the Max Planck Institute for Astrophysics in Garching, Germany, says, “We may need new physics.”

Rather than assume Planck’s high-precision H0 is wrong and therefore rush to take apart the fundamentals of our best cosmological models, a scientist’s first instinct should be to test whether there has been an experimental mistake that means something about our measurement of the cosmic distance ladder is awry.

“That’s where my instinct still lies,” admits Daniel Mortlock of Imperial College London, UK, and Stockholm University, Sweden. Mortlock works in the field of astro-statistics – that is, drawing conclusions from incomplete astrophysical data, and accounting for various kinds of uncertainty and error in the data. It’s worth remembering there are two types of error in any measurement. The first is statistical error – errors in individual measurements, for example read-out noise from the detector, or uncertainties in the sky background brightness. Statistical errors can be reduced by simply increasing your sample size. But the other kind of errors – systematic errors – aren’t like that. “It doesn’t matter if you have a sample five times, or 10 times, or 50 times as large, you just get this irreducible uncertainty,” says Mortlock. An example of a systematic error might be the reddening of a star’s light by intervening interstellar dust – no matter how often you measure the star’s brightness, the star’s light would always be obscured by the dust, and that bias would not shrink however many measurements you make.
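A toy simulation makes the distinction clear: below, every measurement of a star's brightness carries random noise plus the same assumed "dust" offset. The quoted error on the mean shrinks as the sample grows, but the bias does not (all numbers are illustrative).

```python
import numpy as np

# Statistical versus systematic error, with illustrative numbers.
rng = np.random.default_rng(1)
true_brightness = 10.0
systematic_offset = -0.3      # every measurement biased low (assumed "dust" effect)
statistical_sigma = 1.0       # random scatter of a single measurement

for n in [10, 1000, 100000]:
    data = true_brightness + systematic_offset + rng.normal(0, statistical_sigma, n)
    error_of_mean = statistical_sigma / np.sqrt(n)
    print(f"n = {n:>6}: mean = {data.mean():6.3f} ± {error_of_mean:.3f}  (true value {true_brightness})")
# The quoted uncertainty shrinks with n, but the mean stays about 0.3 low.
```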

Mortlock has considered that this could be what is happening with the local measurements of H0 – there may be some systematic error that astronomers have not yet identified, and if found, the tension between the Planck measurement and the local measurement of H0 could go away. However, Mortlock acknowledges that the evidence for the discrepancy in the H0 values being real “has been steadily growing more convincing”.

Einstein to the rescue

Presented with such an exceptional result, astronomers are double-checking by measuring H0 with other, independent means that would not be subject to the same systematic errors as the Cepheid variable and type Ia supernovae measurements.

One of these can be traced back to 1964, when a young astrophysicist by the name of Sjur Refsdal at the University of Oslo, Norway, came up with a unique way to measure the Hubble constant. It involved using a phenomenon predicted by Einstein but which at the time had not been discovered: gravitational lenses.

The general theory of relativity describes how mass warps space, and the greater the mass, the more space is warped. In the case of so-called “strong gravitational lensing”, massive objects such as galaxies, or clusters of galaxies, are able to warp space enough that the path of light from galaxies beyond is bent, just like in a glass lens. Given the uneven distribution of mass in galaxies and galaxy clusters, this lensing can result in several light paths, each of slightly different length.

Refsdal realized that if a supernova’s light passes through a gravitational lens, then its change in brightness would be delayed by different amounts in each of the lensed images depending on the length of their light path. So, image A might be seen to brighten first, followed a few days later by image B, and so on. The time delay would tell astronomers the difference in the length of the light paths, and the expansion of space during those time delays would therefore allow H0 to be measured.
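In the language of modern time-delay cosmography (notation not spelled out in the article), the measured delay between two images depends on a combination of angular diameter distances, each of which scales as c/H0, so a measured delay pins down the Hubble constant:

$$ \Delta t = \frac{1+z_{\rm l}}{c}\,\frac{D_{\rm l}D_{\rm s}}{D_{\rm ls}}\,\Delta\phi, \qquad D_{\rm l},\,D_{\rm s},\,D_{\rm ls} \propto \frac{c}{H_0} \;\Rightarrow\; H_0 \propto \frac{\Delta\phi}{\Delta t}, $$

where z_l is the lens redshift, the D terms are the observer-lens, observer-source and lens-source angular diameter distances, and Δφ is the difference in the modelled lens (Fermat) potential along the two light paths.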

gravitational lensing

Unfortunately, even after the first gravitational lens was discovered in 1979, it turned out that gravitationally lensed supernovae are exceptionally rare. Instead, quasars – luminous active galactic nuclei that also exhibit brightness variations – have been found to be the more common lensed object. That was what led Suyu to launch a project in 2016 to study lensed images of quasars in order to provide an independent measure of H0. It goes under the even more cumbersome name of H0LiCOW, which stands for H0 Lenses in COSMOGRAIL’s Wellspring, where COSMOGRAIL refers to a programme called the COSmological MOnitoring of GRAvitational Lenses, led by Frédéric Courbin and Georges Meylan at the École Polytechnique Fédérale de Lausanne.

Throughout this analysis, Suyu and team kept the final result hidden from themselves – a technique known as blind data analysis – to avoid confirmation bias. It was only at the very end of the process, once they had completed all their data analysis and with their paper describing their observations almost completely written, that they revealed to themselves the value of H0 that they had measured. Would it come out in favour of Planck, or would it boost the controversial SH0ES result?

The value that they got was 73.3 km/s/Mpc, with an uncertainty of 2.4%. “Our unblinded result agrees very well with the SH0ES measurement, adding further evidence that there seems to be something going on,” Suyu tells Physics World.

It’s still too early to claim the matter settled, however. The initial H0LiCOW analysis involved only six lensed quasars, and efforts are being made to increase the sample size. Suyu is also returning to Refsdal’s original idea of using lensed supernovae, the first example of which was discovered by the Hubble Space Telescope in 2014, followed by a second in 2016. Hundreds are expected to be discovered by the Vera C Rubin Observatory in Chile, formerly known as the Large Synoptic Survey Telescope, which will begin scientific observations in October 2022.

“It would be really interesting if the H0LiCOW measurements can be shown to be correct and in agreement with the Cepheid measurements,” says Feeney, who for his part is pursuing another independent measurement of H0, using another phenomenon predicted by Einstein: gravitational waves.

On 17 August 2017 a burst of gravitational waves resulting from the collision of two neutron stars in a galaxy 140 million light-years away triggered the detectors at the Laser Interferometer Gravitational-wave Observatory (LIGO) in the US and the Virgo detector in Italy. That detection has allowed Feeney and a team of other astronomers including Mortlock and Hiranya Peiris, also of University College London, to revive an idea, originally proposed by Bernard Schutz in 1986, to use such events to measure the expansion rate of the universe.

The strength of the gravitational waves indicates how distant a neutron-star merger is, but the merger also produces a handy burst of light known as a kilonova. This can be used to pinpoint the host galaxy, the redshift of which in turn provides the galaxy’s recession velocity. Feeney and Peiris have estimated that a sample of 50 kilonovae would be required to derive an accurate determination of H0, but Kenta Hotokezaka of Princeton University, US, and colleagues have found a way to speed this up. They point out that we would see the gravitational waves from a neutron-star merger at their strongest if we looked perpendicular to the plane of the collision. The merger produces a relativistic jet that’s also moving perpendicular to the plane, so measuring the angle at which we see the jet will tell us how large our viewing angle is from the plane, and therefore allow a determination of the true strength of the gravitational waves and hence the distance. Hotokezaka estimates that a sample size of just 15 kilonovae studied in this fashion will be enough to provide an accurate assessment of H0. Unfortunately, so far astronomers have observed just one kilonova, and based on that sample of one, Hotokezaka calculates H0 to be 70.3 km/s/Mpc, with a big uncertainty of 10%.
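The arithmetic behind such a "standard siren" estimate is disarmingly simple, as the back-of-the-envelope sketch below shows; the redshift and distance are rough, GW170817-like illustrative values rather than the published analysis.

```python
# Back-of-the-envelope standard-siren estimate (illustrative numbers only).
# The gravitational-wave amplitude gives the distance directly; the host
# galaxy's redshift gives the recession velocity; at low redshift H0 ~ v / D.
c_km_s = 299_792.458     # speed of light in km/s
z_host = 0.0100          # assumed cosmological redshift of the host galaxy
distance_mpc = 43.0      # assumed distance inferred from the gravitational waves

velocity = c_km_s * z_host          # low-redshift approximation, v = cz
H0 = velocity / distance_mpc
print(f"H0 ≈ {H0:.1f} km/s/Mpc")    # roughly 70 km/s/Mpc
```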

Other approaches abound

Beta ceti

Cepheid variables and type Ia supernovae are common rungs on the cosmic distance ladder that are used to find a local value for Hubble’s constant. But researchers led by Wendy Freedman of the University of Chicago have used another. They looked at the brightness of red giant stars that have begun fusing helium in their cores, to measure the distance to galaxies in which these stars can be seen. Initially they calculated H0 as 69.8 km/s/Mpc – but then things got complicated. Their data was reanalysed by Wenlong Yuan and Adam Riess to account for dust reddening, resulting in a revised measurement of 72.4 km/s/Mpc with an uncertainty of 1.45%. However, Freedman’s team have performed the same reanalysis and got a value of 69.6 km/s/Mpc, with an uncertainty of 1.4%, so the jury is still out on that one.

Meanwhile, the Megamaser Cosmology Project makes use of radio observations tracking water masers in gas orbiting supermassive black holes at the centres of distant galaxies. The angular distance that the masers traverse on the sky allows for a straightforward geometrical distance measurement to their host galaxy, producing a value of 73.9 km/s/Mpc for H0 with an uncertainty of 3%.

Changing the game

Taken together, all the available evidence seems to be pointing towards the discrepancy between the local measurements and the Planck measurement being real – and not the result of some unidentified systematic error. However, a greater sample size is required before these results can be considered truly robust. With new observatories coming online, we could have the required observations within 10 years.

“If either the gravitational waves or the lensing methods give very strong results that match the SH0ES result, then I think that will change the game,” says Mortlock.

However, don’t expect a speedy resolution. After all, we’re still grappling with the nature of dark matter and dark energy, and current research is focused on trying to identify the dark-matter particle and trying to characterize the behaviour of dark energy. “Whatever is going on with the Hubble constant is still several steps behind those,” says Mortlock. “People are still debating whether the effect is real.”

The value of H0 will have many consequences. It will dictate the age of the universe, and the history of how the universe expanded and allowed large-scale structure to form

One way or another, figuring out whether the discrepancy in our measurements of H0 is real or not will have significant repercussions for cosmology. Feeney describes the local and Planck values of H0 as “a really potent combination of measurements, because you are constraining the universe now and how it looked 13.8 billion years ago, so you pin down the universe at both ends of its evolution”.

The value of H0 will have many consequences. It will dictate the age of the universe (a higher H0 would mean the universe might be substantially younger than 13.8 billion years, which would contradict the age of some of the oldest stars that we know). It would also influence the history of how the universe expanded and allowed large-scale structure to form. And if new physics is needed, as Suyu suggests, then it’s impossible as yet to say how dramatic an effect that would have on cosmology, since we don’t yet know what shape that new physics might take.

“It would be something in addition to our current ΛCDM model,” says Suyu. “Maybe we are missing some new, very light and relativistic particle, which would change Planck’s measurement of H0. Or it could be some form of early dark energy that’s not in our current model.”

Or it could be neither of these, but instead something we’ve not thought of yet. The prospect is tantalizing researchers, but Suyu warns about jumping the gun.

“First we need to get the uncertainties down to the 1% level on multiple methods to see if this tension is real,” she says. So we need to be a little bit patient, but if we come back in 10 years’ time, we may find that the universe is suddenly a very different place indeed.

Tabletop device might snare gravitational waves using tiny diamonds

Rather than the kilometre-length observatories of today, future gravitational-wave detectors could be just a few metres long. That is the goal of physicists in the UK and the Netherlands, who have put forward a design for a matter-wave interferometer that would rely on the superposition of tiny objects such as diamond crystals rather than laser beams. They say that the device would be sensitive to low- and mid-frequency gravitational waves.

Gravitational waves were first observed directly in 2015, when the LIGO observatory in the US picked up the emission from a pair of merging black holes. These black holes broadcast a series of ripples through space-time that caused the pairs of perpendicular arms making up LIGO’s interferometers to undergo a series of minuscule expansions and contractions. Those tiny changes were registered as variations in the interference between laser beams sent along the arms.

Such laser-based observatories, however, are very large. A passing gravitational wave will typically induce fractional length changes on the order of 10⁻¹⁹ or less, meaning that the detector’s arms must be several kilometres long if the facility is to yield a reasonable signal above the many sources of background noise. In the case of LIGO, each arm extends for 4 km.

Tiny wavelength

The latest work proposes a far smaller type of observatory based on interfering beams of matter rather than light. The particles in question would have a mass of about 10⁻¹⁷ kg, corresponding to a de Broglie wavelength of 10⁻¹⁷ m. This is about 100 billion times smaller than the wavelength of laser light used in existing observatories and could be exploited in an interferometer measuring as little as 1 m in length.
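The quoted numbers can be sanity-checked with the de Broglie relation λ = h/(mv); the velocity below is an assumed, illustrative value chosen to show that a 10⁻¹⁷ kg particle moving at a few metres per second does indeed have a wavelength of around 10⁻¹⁷ m.

```python
# Quick check of the de Broglie wavelength quoted in the text
# (the velocity is an assumed, illustrative value).
h = 6.626e-34      # Planck constant in J s
mass = 1e-17       # particle mass in kg, as quoted
velocity = 6.6     # m/s, assumed for illustration

wavelength = h / (mass * velocity)     # de Broglie: lambda = h / (m v)
print(f"lambda ≈ {wavelength:.1e} m")  # about 1e-17 m, as stated in the article
```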

The scheme has been put forward by Sougato Bose, Ryan Marshman and colleagues at University College London, together with researchers at the universities of Groningen and Warwick. It involves a Stern-Gerlach interferometer and nanometre-sized crystals containing embedded spins. Although several types of crystal could do the job, the researchers suggest diamond containing a nitrogen-vacancy centre spin – a system already used to make spin-qubit quantum computers.

The device has yet to be built, but it would involve trapping, uncharging and cooling the crystals before using microwaves to place their spins in a superposition of spin-0 and spin-1. Released from the trap and exposed to a suitable magnetic-field gradient, the two spin states would then separate out in space so that the spin-0 component travels forward horizontally while the spin-1 part follows a parabolic trajectory. After a certain distance, the two spin states would meet up again.

Spin-separated states

Bose and colleagues originally developed this type of interferometry to make very precise measurements of gravitational acceleration to study the quantum character of gravity. The idea is that the spin-separated states experience different accelerations as they follow different paths through the gravitational field. This results in a phase difference between them at the far end, which can be measured by counting the relative abundance of spin states over a given number of runs.

However, the researchers realized that the device could in principle be made sensitive enough to also detect gravitational waves. In this case, a wave changes the spatial separation of the two paths as it passes through the apparatus – resulting in a sinusoidal oscillation of the spin states’ phase difference.

Bose and colleagues say that their device would have a number of significant advantages compared to laser interferometers. Because the phase difference accumulates only while the crystals are traversing the interferometer, the output signal would be independent of any thermal, seismic or other noise that occurs before the particles are placed in a superposition. What is more, the absence of laser-based position measurements removes radiation pressure noise, while exact knowledge about the number of nanoparticles in the interferometer avoids shot noise.

Underground or in space

The researchers say that their interferometer – with perhaps several copies operating in parallel – would be most sensitive to relatively low-frequency gravitational waves. Located underground, they say it could cover part of the range to be targeted by the LISA space-based observatory – about 10⁻⁶ Hz to 10 Hz. If operated in space, it should be able to cover all LISA’s proposed territory.

Shimon Kolkowitz of the University of Wisconsin-Madison in the US points out that many other proposals based on matter-wave interferometry use atoms to detect laser phase rather than gravitational waves themselves. The new approach is therefore more direct, he says, but also riskier. “[It is] based off of a technology that has yet to be realized and that will require major technical breakthroughs,” he says. “But it could be quite impactful.”

Bose says that he and his colleagues are confident they can overcome all the technical hurdles. In particular, they are looking to create a large magnetic-field gradient without having to generate a particularly high field – by sequentially turning on a series of flat carbon nanotubes in a stepped formation to approximate a particle’s parabolic trajectory. Looking to build a small-scale prototype within a decade, he “optimistically” estimates that the cost of reliable solid-state qubits and a large-scale atomic interferometer could each run into the tens of millions.

A paper describing the work will be published in the New Journal of Physics and is available as an accepted manuscript.

The e-bike revolution

Ortis Deley with a Cytronex-adjusted bike

We’re finally, cautiously, emerging from the global lockdown designed to halt the spread of COVID-19. It will be interesting to see what real and lasting changes the pandemic brings, but as I mentioned last month, I suspect our travel and transport habits will never return to pre-lockdown ways. The UK government already said in May that it would invest in “pop-up” bike lanes with protected space for cycling as well as wider pavements and safer junctions. Cycle and bus-only corridors were also due to be created in England “within weeks” as part of a £250m emergency travel fund.

As UK transport secretary Grant Shapps put it, social-distancing requirements mean that even if public transport reverts to a full service, there will be an effective capacity for only one in 10 passengers on many parts of the network. “Getting Britain moving again is going to require many of us to think carefully about how and when we travel,” he said. Shapps claimed that some parts of the country had already seen a 70% rise in the number of people cycling to work or the shops, adding that “when the country does get back to work, we need those people to carry on cycling and walking, and to be joined by many more”.

There’s also been good news for fans of motorized scooters, or “e-scooters”, which are basically two-wheel scooters with an electric motor added. Although e-scooters are currently illegal on public roads in many countries, including the UK, a trial rental scheme is set to begin in Birmingham this month. It will let the government explore the benefits of e-scooters and possibly make them – and other novel forms of transport, like Segways – legal on British roads. It won’t be simple: a similar trial in San Francisco led to angry “e-scooter wars” between residents and commuters riding these devices illegally on pavements.

Wheel of fortune

I already own an Inmotion V3 self-balancing electric unicycle, which has a gyrostabilized wheel and looks a bit like a Segway. Problem is, you’re not meant to use one of these on the road or pavement, only on private land. During the lockdown, it’s therefore been gathering dust in my garage and instead I’ve preferred my old-fashioned pedal bike, which I can use legally on the road and keeps me at a safe distance from other people. But are ordinary bikes a realistic way forward – especially for those who have to travel long distances or up steep hills?

The future, I think, lies in electric bikes. Sales of e-bikes are rising, currently totalling about 50,000–60,000 a year in the UK. That figure’s dwarfed by conventional bike sales of more than three million, but the evidence points to serious e-bike growth over the next 30 years, according to the car-part and bike retailer Halfords. But, wow, these dedicated e-bikes are expensive. Bosch will sell you one for £2000, while its top-of-the range model with full suspension will set you back more than £6500. I’d be nervous about leaving one of those locked up anywhere on the street.

With a battery fully integrated into the bike’s frame, they’re heavy too. I suspect there will be a limit to how many customers will want to pay for something that costs more than a decent used car. The high price was also one of the problems that stalled sales of Segways in the early 2000s and why other firms stole a march with other self-balancing scooters that had cheaper and better designs. As Lotus cars founder Colin Chapman famously said, the trick is to “simplify, then add lightness”.

I bought a simple piece of kit that I retrofitted to my existing pedal bike

That’s why I bought a simple piece of kit that I retrofitted to my existing pedal bike, turning it into an e-bike at much lower cost. I’d first seen the device on TV’s The Gadget Show a few years ago, watching in amazement as presenter Ortis Deley fitted his ordinary bike with a C1 lithium-battery-powered kit from the UK firm Cytronex and then raced British hill-climbing champion Dan Evans over a 7 km uphill course. The combined battery and motor controller are disguised as a water bottle and attached to the bike frame, giving Deley an additional 250 W of power via a motor fitted to the hub of the front wheel. Although Deley didn’t beat Evans, it was very close (he might have won if UK regulations didn’t prevent an e-bike going faster than 25 km/h).

Uphill challenge

I’ve had a lot of fun with my new e-bike conversion. I’d previously tried cycling to work with my ordinary bike but the uphill sections were a killer and I always ended up a soggy mess at the office. But now I found myself passing local, hardcore Lycra-clad cyclists up hills in my business suit without even breaking sweat. And thanks to the water-bottle kit, no-one knows you’re on an e-bike. Even better, when I’ve reached my destination, I can simply lock my bike, unclip the “bottle” and take it with me.

The Cytronex kit gives me roughly 40 km of assistance range, helping me to build up my fitness – I save the joules for the hills or when I’m tired. The kit and motor, which add just 3.6 kg to my bike, couldn’t be easier to use. In fact, its inventor Mark Searles recently fitted one to a Brompton folding commuter bike and then attempted the dreaded 22 km-long climb of Mont Ventoux in southern France. Rising vertically by 1600 m, it’s a regular part of the Tour de France, but Searles completed it in just 84 minutes – on a par with the best riders in the world.

Surely that makes the Cytronex-adapted Brompton the ultimate lightweight environmentally friendly commuter vehicle. And with potentially quieter roads once the COVID-19 pandemic is over, what better way to keep fit, get to work or go shopping? As with many things, the simplest solutions are sometimes the best.

Stiffer road surfaces could cut greenhouse-gas emissions

The efficiency of the US transportation sector could be improved significantly by increasing the stiffness of road surfaces, US researchers have shown. Randolph Kirchain and colleagues at the Massachusetts Institute of Technology came to this conclusion through a detailed analysis of the road networks in each US state. Their findings could lead to a meaningful dent in the greenhouse gas emissions produced by US transportation, without the need for costly new technologies.

Although road surfaces may seem completely rigid when we walk on them, it is a different story for large vehicles. As they drive, the wheels of large vehicles compress and elastically deform road surfaces, creating temporary “valleys” from which they must continually escape. Even when driving on flat surfaces, these vehicles are always driving slightly uphill. Kirchain’s team calculates that this results in an excess fuel consumption of over 2.5 billion tons across a 50-year period.

The researchers suggest that this problem could be alleviated by simply making roads more rigid – reducing the deformation produced by heavy vehicles. They examined several methods of achieving this goal: in one approach, they found that road stiffness could be increased by up to 93% by incorporating carbon nanotubes into various construction materials, at a proportion of just 0.1% by weight. Improvements could also be made by adjusting the sizes of the grains used in concrete mixtures, which would increase their densities. Yet perhaps their simplest method was to replace existing asphalt roads with more expensive, yet stiffer and more durable, concrete.

Southern opportunities

Kirchain’s team then performed a state-by-state analysis of the current rigidity of US roads, accounting for factors including climate, road length and usage, and the properties of construction materials. They found that the potential to offset excess fuel consumption was highest in the southern states – whose roads are primarily made from asphalt, which is deformed particularly easily in their warmer climates.

The researchers calculated that by resurfacing 10% of the road network each year with the stiffer materials, the US could eliminate 18% of the greenhouse gas emissions associated with road deformation across a 50-year period. This corresponds to a 0.5% reduction in emissions across the entire US transportation sector.

The results present a robust basis for improving the efficiency of the US road network, with no need for technological innovations, novel construction materials or unfamiliar manufacturing processes. In future work, Kirchain and colleagues will explore how further factors could affect vehicle efficiency, including road roughness and reflectiveness, and the emissions associated with material production and demolition.

The work is described in Transportation Research Record.

Transabdominal oximetry offers non-invasive monitor of foetal health

Researchers in the US have designed and tested a novel oximetry system that offers a non-invasive approach for monitoring foetal blood-oxygen levels, an indicator of foetal wellbeing, during active labour. A more objective and accurate measure of foetal blood oxygenation could improve both mother and infant birth outcomes.

If left unaddressed, foetal asphyxia, or oxygen deprivation, can cause long-term disabilities, developmental delays or even death. Therefore, any indication of foetal asphyxia during active labour commonly prompts an emergency Caesarean delivery, or C-section.

The current method of evaluating foetal wellbeing, known as cardiotocography, attempts to indirectly assess foetal oxygen saturation by monitoring the relationship between uterine contractions and foetal heart rate over time. Despite its widespread use for over 50 years, cardiotocography has failed to decrease the rates of complications associated with foetal asphyxia. Instead, it has contributed to increasing rates of C-section, which itself poses dangers to both mother and baby.

A multidisciplinary team of researchers at the University of California, Davis has developed a new transabdominal foetal pulse oximetry system that directly assesses foetal oxygen saturation. The team recently published a detailed description and assessment of the system in IEEE Transactions on Biomedical Engineering.

Pulse oximetry

Pulse oximetry is a non-invasive method used to measure blood-oxygen saturation, similar to the technology commonly employed in smart watches to monitor heart rate.

Haemoglobin, the protein in blood that carries oxygen, absorbs distinct wavelengths of light differently depending on whether it is loaded with or lacking oxygen. Pulse oximetry leverages this characteristic by using a pair of light-emitting diodes, or LEDs, to send a known light signal into the body, and a detector to collect the light that is reflected back to the skin surface.

Standard oximetry systems use LEDs of specific wavelengths such that one is selectively absorbed by oxygen-lacking haemoglobin and the other by oxygen-loaded haemoglobin – commonly 660 nm and 940 nm, respectively. The ratio of the light signals of each wavelength that reach the detector indicates the relative blood-oxygen saturation.
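In practice this comparison is usually boiled down to a "ratio of ratios" that is mapped to a saturation value with an empirical calibration. The sketch below shows the idea; the signal shapes and calibration constants are illustrative only, not a clinical calibration and not the UC Davis team's code.

```python
import numpy as np

# The "ratio of ratios" idea behind pulse oximetry (illustrative only).
def spo2_estimate(red, infrared, a=110.0, b=25.0):
    """Estimate oxygen saturation (%) from red and infrared detector traces."""
    def ac_over_dc(signal):
        # pulsatile (AC) part relative to the steady (DC) baseline
        return (signal.max() - signal.min()) / signal.mean()
    R = ac_over_dc(red) / ac_over_dc(infrared)   # ratio of ratios
    return a - b * R                             # assumed linear empirical calibration

t = np.linspace(0, 1, 500)
red = 1.00 + 0.01 * np.sin(2 * np.pi * 1.2 * t)        # toy detector signals
infrared = 1.00 + 0.02 * np.sin(2 * np.pi * 1.2 * t)
print(f"SpO2 ≈ {spo2_estimate(red, infrared):.0f}%")
```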

Transabdominal foetal pulse oximetry

Adapting this well-established method to measure foetal blood-oxygen saturation in utero posed interesting obstacles for Daniel Fong and colleagues to overcome.

First, the light must travel deeper into the body to reach the foetus than is possible with conventional oximetry systems. The researchers cleared this hurdle by optimizing the wavelength selection to reduce scattering in the tissue and allow the light to travel further while maintaining a detectable intensity. The team identified the optimal LED wavelengths as 740 nm and 850 nm.

Second, because the light must pass through the mother to reach the foetus, the resulting signal is a combination of maternal and foetal interactions. To address this, the investigators came up with an innovative design involving an additional detector. This extra detector is placed nearer to the LEDs than the primary detector, resulting in a shallow depth of measurement to supply the maternal signal alone. With this information, the software can filter out the maternal portion of the mixed signal and extract the foetal signal for assessment.
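A toy version of that two-detector idea is sketched below: the shallow detector records an essentially maternal-only trace, which is scaled by least squares and subtracted from the deep detector's mixed trace to leave an estimate of the foetal component. The waveforms and the simple scale-and-subtract step are illustrative assumptions, not the team's actual signal-processing chain.

```python
import numpy as np

# Toy two-detector separation (illustrative, not the published algorithm).
rng = np.random.default_rng(2)
t = np.linspace(0, 10, 2000)
maternal = np.sin(2 * np.pi * 1.3 * t)          # ~80 bpm maternal pulse (assumed)
foetal = 0.2 * np.sin(2 * np.pi * 2.3 * t)      # ~140 bpm foetal pulse, weaker (assumed)

shallow = maternal + 0.01 * rng.normal(size=t.size)               # near detector: maternal only
deep = 0.6 * maternal + foetal + 0.01 * rng.normal(size=t.size)   # far detector: mixed signal

# Least-squares scaling of the maternal reference, then subtraction.
scale = np.dot(deep, shallow) / np.dot(shallow, shallow)
foetal_estimate = deep - scale * shallow   # residual approximates the foetal signal
```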

The researchers tested their new system on a pregnant ewe. Results from their transabdominal foetal pulse oximetry system agreed with gold-standard, but invasive, measurements of foetal arterial blood gases. Next, they plan to further characterize the system’s performance under various conditions, like active labour, with the ultimate goal of improving the “health and safety of mothers and babies during labour and delivery”, via the start-up spinoff Storx Technologies.

Cryogenics makes the European Spallation Source a hot property in neutron science

Large-scale neutron facilities – one of the mainstays of Europe’s “big science” infrastructure – are routinely used by researchers to understand material properties on the atomic scale, spurring advances across a spectrum of scientific discovery – from clean energy and environmental technology to pharma and healthcare, from structural biology and nanotech to food science and cultural heritage. Industry users, meanwhile, use neutrons to probe deep into engineering components, gaining unique insights into the stresses and strains that affect turbine blades, gas pipelines, fuel cells and the like.

Big science, of course, keeps thinking bigger – and neutron science is no exception. For the neutron user community, in fact, a decade of revolution rather than evolution is heaving into view as construction progresses on the European Spallation Source (ESS), a €1.84bn accelerator-driven neutron source in Lund, Sweden. When it comes online for user experiments in 2023, the ESS will be the world’s most powerful neutron source – between 20 and 100 times brighter than the Institut Laue-Langevin (ILL) in Grenoble, France, and up to five times more powerful than the Spallation Neutron Source (SNS) in Oak Ridge, Tennessee, US.

John G Weisend II

That big leap forward represents an industrial-scale undertaking, an amalgam of the most powerful linear proton accelerator ever built; a two-tonne, rotating tungsten target wheel (which produces neutrons via the spallation process); 22 state-of-the-art neutron instruments for user experiments; and a high-performance data management and software development centre (located in Copenhagen). John G Weisend II, group leader for specialized technical services at ESS, explains how liquid-helium cryogenic technologies – which enable production and maintenance of temperatures as low as 2 K – are equally fundamental to the ESS’s long-term scientific success.

What are the main building blocks of the ESS cryogenics programme?

There are three principal applications of helium cryogenics within the ESS. The proton linac’s superconducting RF cryomodules (responsible for the bulk of the particle acceleration) require cooling at 2 K, 4.5 K and 40 K, while the hydrogen moderator (surrounding the tungsten target that produces the neutrons) requires cooling via 16.5 K supercritical helium.

Elsewhere, many of the facility’s scientific instruments will rely on liquid helium to cool hardware like superconducting magnets or to provide low-temperature environments around the instruments and sample chambers themselves.

To meet these needs, we specified a system design with three distinct cryogenic refrigeration plants sharing a largely common recovery, purification and storage system (see “The headline take on ESS” below). After an open tendering process, Linde emerged as our chosen supplier for the accelerator cryoplant and target moderator cryoplant, with Air Liquide contracted to supply the test and instruments cryoplant as well as the recovery and purification system.

How did you approach that engagement with industry?

One of the challenges with a cryogenic system of this scale is the lead time – more than five years from the initial feasibility studies and the writing of the detailed technical specifications through to the on-site installation and commissioning at ESS. As such, it’s vital to get under way early with the cryoplant design studies and equipment procurement. Although the three ESS cryoplants are by no means off-the-shelf solutions, we were careful not to be overly innovative in terms of our requirements-gathering. Instead, the emphasis was on capital cost, best-in-class reliability and minimizing operating expenditure – chiefly through energy efficiency.

Cold boxes

That commitment to sustainability is a unique aspect of the ESS project. What does this mean for the cryogenics programme?

One aspect of the ESS sustainability plan is to recover as much waste heat as possible – at least 50% – and supply it into the domestic hot-water heating system in the Lund metro area. Across all our cryogenics systems, we are recovering the heat deposited in the oil and helium coolers of the warm compressors as well as in the compressor motors.

Does the emphasis on sustainability extend to helium recovery and storage?

Helium is a finite resource and an industrial gas that’s subject to significant price fluctuations. With this in mind, all of our cryogenic operating cycles are closed, designed to limit the venting of helium into the atmosphere to a few, rare failure modes. Over the next five years, as we commission and put the ESS cryoplant through its paces, we expect to replace approximately 50% of our helium inventory each year. Beyond that, we aim to get down below 25% replacement levels per annum.

Many ESS project partners make “in-kind” contributions of equipment and personnel rather than direct cash investments from the member countries. How does this work in terms of the cryogenics programme?

A good example is the cryogenic distribution system, which connects the accelerator cryoplant to the cryomodules in the accelerator tunnel and, uniquely, is set up in such a way that we can independently cool down or warm up any of the cryomodules while keeping the others at their operating temperature. This capability allows us to repair a specific cryomodule in situ while keeping the others at cryogenic temperatures – the aim being to reduce downtime and improve the availability of ESS for the scientific users.

Unlike the rest of the cryogenic infrastructure, the distribution system is provided by in-kind partners, with the bulk of the work carried out by Wroclaw University of Science and Technology in Poland and its local industry partner Kriosystem; the remainder is handled by the Laboratoire de Physique des 2 Infinis Irène Joliot-Curie in France and its industry partner Cryo Diffusion. It’s worth noting that these relationships are very much in the spirit of scientific collaboration rather than contractor/subcontractor arrangements.

Tunnel KrioSystem

How has the ESS construction project been impacted by the coronavirus pandemic?

While most ESS staff are working from home, civil construction proceeds almost as normal and a significant amount of installation of technical infrastructure is still taking place on site. However, ESS depends on deliveries and interaction with our in-kind partners and suppliers, and we see that the supply chain has been affected by the pandemic.

In terms of the cryogenics deployment, current status is that the test and instruments cryoplant is fully commissioned and turned over to us, as is the target moderator cryoplant. The accelerator cryoplant has been commissioned to its highest-capacity cooling at 2 K. Working with Linde, we expect to complete the commissioning this summer. The last part of the cryogenic distribution system has been delayed, but we do expect to commission it in early 2021.

What lessons can other big-science projects learn from your experience at ESS?

We’ve done a lot of things right on the ESS cryogenics programme. Big science is all about collaboration, so the number-one priority, from the off, is to have tight communication with your industry vendors and in-kind project partners. Communication needs to work consistently well at all levels – from the director right down to the project engineers at the sharp end. We also carved out time for personnel and recruitment, building a world-class, 10-strong team of engineers, scientists, designers and technicians to manage and work with a diverse group of stakeholders.

On the flip side, one area to guard against is over-reach. A case in point is a project that we took on in-house and underestimated – building the warm-gas connections between the three cryoplants and cryogen storage systems. While we wound up getting the work done on time, it probably would have been prudent – and less stressful – to allocate more resource earlier. We share all of these lessons via regular conference presentations and exchange visits with our counterparts from other large-scale facilities.

What’s next for you and the ESS cryogenics team?

The pace is relentless and likely to be even more so once we get beyond the pandemic. Commissioning of the accelerator cryoplant and the cryogenic distribution system should be completed in 2020, while installation of the cryomodules in the linac tunnel should start at the end of the year and continue through 2021. The first cooldown of the ESS linac is planned for 2022.

The European Spallation Source: an overview

TCCP hall

Fundamental principles

  • At the heart of the ESS is a linear accelerator that produces up to a 5 MW beam of 2 GeV protons, with the bulk of the acceleration generated by more than 100 superconducting RF cavities.
  • These protons strike a rotating tungsten target wheel to produce a beam of neutrons via a process known as nuclear spallation (i.e. the impact on the tungsten nuclei effectively “spalls” off free neutrons).
  • The resulting neutrons pass through a supercritical hydrogen moderator (at about 17 K), slowing them to useful energies before distribution to a suite of 22 neutron-science instruments.

Cryogenic requirements

  • All the superconducting RF cavities in the proton linac operate in saturated 2 K liquid-helium baths.
  • The supercritical hydrogen moderator absorbs up to 30 kW from the spallation neutrons; the heat is removed by a hydrogen/helium heat exchanger that is in turn cooled by a 15 K supercritical helium flow.
  • The neutron-science instruments require a maximum of 7500 L/month of liquid helium for cooling of components and sample environments.
  • The ESS cryomodule test stand, together with a test stand at Uppsala University in Sweden, allow testing of all cryomodules containing RF cavities at full RF power and various operating temperatures (2 K, 4.2 K and 40 K) prior to installation in the linac tunnel.
  • The test and instruments cryoplant also provides liquid helium to other customers in the Lund region – specifically Lund University and the Max IV synchrotron facility.

Funding and partnership

  • The ESS is a pan-European project with 13 European nations as members: Czech Republic, Denmark, Estonia, France, Germany, Hungary, Italy, Norway, Poland, Spain, Sweden, Switzerland and the UK.
  • Significant in-kind contributions of equipment and expertise – from over 40 European partner laboratories – are expected to finance more than a third of the overall construction costs for ESS.
  • ESS will deliver its first science in 2023, with up to 3000 visiting researchers expected every year once the lab is fully operational.

Advanced MRI methods guide treatments of tremor

Researchers at the University of Texas Southwestern and the Mayo Clinic in the US have presented the latest developments in the use of MR-guided high-intensity focused ultrasound (HIFU) to treat tremor. Writing in the journal Brain, they describe how these advances are expected to lead to improved clinical efficacy and a reduction of adverse effects.

The first-line treatment to reduce the involuntary trembling or shaking associated with neurologic conditions such as essential tremor or Parkinson’s disease is medication. However, around 30% of patients do not respond well to drugs. This led to the investigation of alternative therapies to alter the connectivity of the thalamus, a symmetric structure composed of grey matter that sits on top of the brain stem and serves as a relay for motor and sensory signals between the body and the rest of the brain.

In the 1990s, deep brain stimulation was developed, in which metal electrodes implanted in the thalamus were stimulated via a battery pack, similarly to a pacemaker. A couple of decades later, advances in MRI allowed real-time guidance of HIFU beams to non-invasively heat and eliminate small sections of the thalamus with millimetre precision.

The main challenge in both of these procedures, however, lies in the correct targeting of the ventral intermediate (VIM) nucleus, a pea-sized region located at the centre of the thalamus that’s involved in the coordination and planning of movement. Precise localization of this structure is important to prevent adverse effects due to incorrect targeting during these procedures, such as speech and swallowing deficits, or sensory and gait abnormalities. While these side effects are usually temporary, they can be permanent in 15 to 20% of cases.

Bhavya Shah

In their paper, Bhavya Shah and colleagues highlight the limitations of current techniques for localizing the VIM nucleus. Established targeting methods either use atlases to identify brain areas from a collection of correctly labelled brain scans or require the use of landmarks on the patient’s brain scans.

We now know that such atlases and landmarks are too simplistic, and that the connections and biology of the brain are too complex and patient-specific to be accurately captured through these methods. Indeed, studies have shown that these approaches can introduce targeting errors of up to 5 mm, causing unacceptable adverse effects.

MRI methods to the rescue

In contrast with traditional localization techniques, the researchers highlight three newly refined MRI techniques that are proving better at delineating the target tissue.

Diffusion tractography seems to be the most promising. It creates precise 3D brain images by taking into account the natural water movement within tissues, which allows identification of not only the VIM nucleus location but also its shape.

Another technique hailed as superior to established targeting methods is quantitative susceptibility mapping, which creates contrast in the image by detecting distortion in the magnetic field caused by substances such as iron or myelin. Finally, the researchers cite fast grey matter acquisition T1 inversion recovery as showing promising results. This technique operates much like a photo negative, turning the brain’s white matter dark and its grey matter white, in order to provide greater detail in the grey matter.

All of these MRI sequences are already approved by the US Food and Drug Administration for guiding HIFU beams during localized ablation of the thalamus. The object of an increasing number of studies, they should improve clinical efficacy and reduce adverse effects from the treatment.
