Quantum-tunnelling time is measured using ultracold atoms

The time it takes for an atom to quantum-mechanically tunnel through an energy barrier has been measured by Aephraim Steinberg of the University of Toronto and colleagues. The team observed ultracold atoms tunnelling through a laser beam, and their experiment provides important clues to a long-standing mystery in quantum physics.

Quantum tunnelling involves a particle passing through an energy barrier despite lacking the energy that, according to classical physics, is required to overcome it. The phenomenon is not fully understood theoretically, yet it underpins practical technologies ranging from scanning tunnelling microscopy to flash memory.

There has been a long-running controversy about the length of time taken to cross the barrier – a process that cannot be described as a classical trajectory. This problem arises because quantum mechanics itself provides no prescription for it, explains Karen Hatsagortsyan of the Max Planck Institute for Nuclear Physics in Heidelberg, Germany. “Many definitions have been invented, but they describe the tunnelling process from different points of view”, he says, “and the relationship between them is not simple and straightforward.”

Angular streaking

Hatsagortsyan was involved in one of several recent experiments that looked at electrons escaping from atoms by light-induced ionization in a strong electric field – a process that involves the electrons tunnelling through a barrier as “wave packets” with a range of velocities. These experiments use a phenomenon called angular streaking, which establishes a kind of “clock” that can measure tunnelling with a precision of attoseconds (10⁻¹⁸ s).

Because the peak of the wave packet is produced by interference effects, its behaviour does not follow our classical intuitions: it can seem to move from one side of the barrier to the other faster than light, in defiance of special relativity. This is because “there is ‘no law’ connecting an incoming and an outgoing peak”, says Steinberg. “Even if the peak appears at the output before the input even arrives, that doesn’t mean anything travelled faster than light.”

The method does not, however, really correspond to any previously defined picture of what the tunnelling time is, says Alexandra Landsman of the Max Planck Institute for the Physics of Complex Systems in Dresden, who led some of the other “attoclock” ionization studies of tunnelling. Rather, it was a way to “pick out a ‘correct’ physical definition of tunnelling time among a number of competing proposals”, she says.

More controversy than consensus

But these experiments seem to have created more controversy than consensus – partly because it is not clear how to define the time at which tunnelling “starts”. Recently a team based mostly at Griffith University in Nathan, Australia, concluded from the same approach that particles might tunnel more or less instantaneously.

It may all be a question of definitions. “Sometimes”, says Steinberg, “there’s a quantity you can measure a number of different ways, and since they all give the same answer classically, we think these different measurements are probing the same thing.” But they’re not necessarily – in which case “two different measurements both of which we expected to reveal ‘the tunnelling time’ can have different results.”

But “even if there isn’t one tunnelling time, neither is there an infinite number of options”, Steinberg adds. “There are maybe two or three timescales, and we need to work to understand what each one describes.”

Steinberg’s team approached the problem by measuring a version of the tunnelling time defined by a kind of internal clock carried by the particles themselves. As those particles, they used a cloud of about 5000–10,000 ultracold rubidium atoms, propelled gently towards a barrier created by a light beam.

Spinning clocks

The atoms each have a spin that, when placed in a magnetic field, will rotate (precess) at a known frequency, which is the ticking clock. The apparatus is arranged so that the particles will only experience an effective magnetic field inside the barrier itself. By measuring how much the orientation of their spin has changed when atoms exit the barrier, they obtain a measure of how long the particles spent “inside” the barrier.
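
As a rough illustration of the arithmetic (a minimal sketch: the field strength and rotation angle below are invented, chosen only to land near the reported timescale; the rubidium-87 gyromagnetic ratio is the one real constant), the dwell time follows from dividing the measured spin rotation by the Larmor precession rate:

```python
import numpy as np

# Larmor "spin clock" sketch: inside the barrier the spin precesses at
# omega = gamma * B, so dwell time = (rotation angle) / omega.
gamma = 2 * np.pi * 7.0e9   # gyromagnetic ratio of Rb-87 (F=2) [rad s^-1 T^-1]
B = 1.0e-7                  # assumed effective field inside the barrier [T]
omega = gamma * B           # precession rate [rad/s]

delta_theta = 2.73          # hypothetical measured spin rotation [rad]
print(f"dwell time ~ {delta_theta / omega * 1e3:.2f} ms")  # ~0.62 ms
```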

The approach was proposed more than 50 years ago by two Russian physicists; Steinberg’s team realized it using atoms that adopt a collective quantum state called a Bose–Einstein condensate (BEC). The atoms have relatively long quantum wavelengths – a micron or so – which means they can penetrate relatively wide barriers, with long passage times of about a millisecond that can be measured precisely. “We want particles with a well-controlled starting state and a very long wavelength”, says Steinberg. “A BEC is an ideal way to produce this.”

“Honestly, when we started these experiments”, he adds, “partly I just wanted to see with our own eyes that a composite particle with 87 nucleons and 37 electrons [that is, rubidium atoms] could really tunnel all the way across a barrier 10,000 times larger than an atom itself”.

Laser tweezer

To create the barrier and have the atoms impinge on it, the team used two laser beams. “We Bose-condense them inside an attractive ‘laser tweezer’ beam, which acts as an optical waveguide”, says Steinberg. Then they use a magnetic field to give the atoms a little push towards the barrier, moving at a few millimetres per second.

The barrier is created by a blue laser beam, focused to about a micron wide, with a frequency slightly greater than that of one of the resonances in the atoms. Inside the beam, the atoms interact with the laser’s intense electromagnetic field. But “the atoms can’t keep up, oscillate out of phase with the field, and end up in a higher-energy state”, says Steinberg. This means that the blue laser acts as a repulsive potential, about 1.3 microns wide, through which just a few percent of the atoms can tunnel. To reduce their random thermal motions, the researchers chill the system to a temperature of about 1 nK.

Deducing the tunnelling time is then a matter of measuring how the spin angles of the atoms in the trap have changed when they exit it. In this way, says Steinberg, “we are probing the dwell time of transmitted atoms in the barrier.” They find that this is about 0.62 ms.

“This is a remarkable experiment,” says Hatsagortsyan. It is “especially nice because the quantity [tunnelling time] it measures is well defined”, says Landsman. “The findings may have practical implications for tunnelling devices, since the measured time seems to correspond to the time the electron actually spends inside the barrier”.

Steinberg adds that the technique could reveal something about the trajectory within the barrier itself. “We hope in the future to restrict our effective magnetic field to regions even smaller than the barrier”, he says, “so that when we look at the final spin, we’re measuring not how much time the atom spent somewhere ill-defined in the barrier, but in one particular region.” According to one theoretical description, he says, it looks as though a particle “appears on the far side without ever crossing the middle. This is what we’d like to test.”

The research is described in Nature.

Hubble trouble: a rapidly expanding cosmic debate

Video transcript

The universe is expanding at an accelerating rate, but just how fast is that expansion?

We thought we knew the answer to that question. But recent findings have cast doubt on our understanding.

This story begins in the US in the 1920s. Back then, most astronomers believed that our galaxy was all that existed – it was the entire universe.

But observations by the astronomer Edwin Hubble would transform our understanding of our place in the cosmos.

During the previous decade, the American astronomer Henrietta Swan Leavitt had identified the relation between the luminosity and the period of a class of pulsating star known as Cepheid variables.

This key discovery enabled astronomers to accurately determine the distance to these stars, providing a sort of cosmic tape measure that reached far larger distances than had been possible before.

While studying Cepheid variables in the spiral nebulae, Hubble realised that these stars must be located far beyond our galaxy.

That led to the realisation that the Milky Way was just one galaxy among many.

Soon, Hubble discovered that almost all of these other galaxies are moving away from us. This led the Belgian cosmologist Georges Lemaître to conclude that the universe is expanding.

Hubble and Lemaître independently derived a mathematical relationship to describe this expansion. The recession velocity (v) of a galaxy is equal to its distance (D) multiplied by the Hubble constant – a value which describes the rate of expansion at the current time.
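
As a minimal numerical sketch of that law (the distance and round-number Hubble constant here are illustrative):

```python
# Hubble's law: v = H0 * D
H0 = 70.0    # Hubble constant [km/s per Mpc] (illustrative round number)
D = 100.0    # distance to a galaxy [Mpc]
print(f"recession velocity: {H0 * D:.0f} km/s")   # -> 7000 km/s
```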

Traditionally, the Hubble constant is determined by measuring the distance and recession velocity of galaxies using astronomical objects of known brightness. These so-called “standard candles” include type Ia supernovae – white-dwarf stars that explode on reaching a certain critical mass.

Now, let’s fast-forward to 2013.

By analysing maps of the Cosmic Microwave Background from the Planck mission, astronomers were able to calculate our most precise value yet of the Hubble constant: 67.4 kilometres per second per megaparsec.

The Hubble constant itself can be hard to get your head around. But it matters because it dictates the age of the universe, and shapes our understanding of how it evolved.

So it was a big shock in 2016 when a project led by Nobel Prize winner Adam Riess arrived at a significantly higher value for the Hubble constant: 73.2 kilometres per second per megaparsec. That value would suggest the universe is younger than we thought.
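
A back-of-the-envelope way to see why a higher Hubble constant implies a younger universe is to compute the “Hubble time” 1/H0, a rough proxy for the age (the true age also depends on the universe’s matter and dark-energy content):

```python
SEC_PER_GYR = 3.156e16       # seconds in a gigayear
KM_PER_MPC = 3.086e19        # kilometres in a megaparsec

for H0 in (67.4, 73.2):      # km/s/Mpc
    hubble_time_gyr = (KM_PER_MPC / H0) / SEC_PER_GYR
    print(f"H0 = {H0}: 1/H0 ~ {hubble_time_gyr:.1f} Gyr")
# -> about 14.5 Gyr for 67.4, and about 13.4 Gyr for 73.2
```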

Riess led the SH0ES project, which involved measuring our old friends the Cepheid variables – the same type of star that had enabled Leavitt’s and Hubble’s breakthroughs.

The SH0ES result appears to be backed up by another project with a quirky name, H0LiCOW.

That project, led by Sherry Suyu, calculates the Hubble constant using another ingenious method: the gravitational lensing of light emitted from quasars. These are luminous active galactic nuclei that exhibit brightness variations.

Looking ahead, if the discrepancy between the Planck measurement and the more local measurements of the Hubble constant becomes stronger, then we might be looking at new physics.

Another, independent measurement of the Hubble constant may come from the emerging field of gravitational-wave astronomy.

In theory, astronomers could get an accurate value of the Hubble Constant by observing the gravitational waves and light emitted by the merger of two neutron stars. But unfortunately, these incredibly energetic events, known as kilonovae, are proving to be few and far between.

Find out more about the Hubble constant mystery in the July 2020 issue of Physics World.

From entangled earbuds to Star Wars: more examples of ‘transmogrified physics’

In January I revealed the discovery of “transmogrified physics”, a field of science consisting of surprising and hitherto unnoticed connections between the microscopic and the macroscopic worlds.

As an example I cited screwdriver oscillation, the phenomenon whereby you find only flathead screwdrivers in your toolbox when you urgently need one with a Phillips head, but when you need flatheads there are only Phillips heads. Like neutrinos, it appears that screwdrivers can oscillate from one state to another.

After inviting readers to send me reports of similar phenomena linking the micro- and macro-worlds, I received enough responses to confirm my suspicion that this is an exciting and fertile research territory. Many readers reported, for instance, that their Allen wrenches obey the uncertainty principle; when you aren’t looking, your tool chest is full of them, but when you need one there aren’t any – and if you need a specific one you only find the others in the set. Other correspondents said their tools could dematerialize, though they didn’t have enough data points to discover the law involved.

Real examples

Fredy Zypman, head of physics at Yeshiva University in New York, came across several interesting phenomena including the following.

  • Bragg planes Special aircraft used only on special occasions to show off.
  • Centre of mass An attractor of churchgoers.
  • Raman (or Ramen) scattering The phenomenon of spreading of noodles out of a tilted bowl.
  • Change of phase What happens, and is quickly repressed, when one gets a good hand in poker.
  • Resolving power A desired ability of the dean to step in to avoid faculty conflagration.

Michael Hutchings of the Open University is another pioneer in the new field. His discoveries include:

  • Hidden fault On starting up a well-used piece of apparatus there appears to be a serious fault. On inspection one finds an obvious reason, such as one unit not having been switched on. But when you switch it on, you find that a fault still exists.
  • Entanglement When two or more lengths of connecting wire are near each other, particularly in a domestic environment and when roughly coiled, they will become hopelessly entangled. Careful separation of them will inevitably end in a knot in at least one length.

I have seen macro-world entanglement myself in earbuds and Christmas-tree lights. I put on earbuds, for instance, take them off, put them down – and then when I go to put them on again I have to unknot them. What’s going on? I don’t take them off, do gymnastics, and then put them down. Clearly, more than classical physics is involved here.

Another example of transmogrified physics is science reliteracy, which is what happens when people try to give a retrospective, less than credible explanation of their gaffe. An example occurs in the Star Wars movies. In Episode IV (1977), when we first meet Han Solo, captain of the starship Millennium Falcon, he brags, in what for the scriptwriters was clearly a throwaway line, that he “made the Kessel Run in less than 12 parsecs”. The Kessel Run, as any Star Wars enthusiast knows, is a treacherous hyperspace route that smugglers use to convey spices (essentially the Star Wars euphemism for drugs) from Kessel to wealthy customers. But, as any half-scientifically literate person knows, a parsec is not a unit of time but of distance; it’s an abbreviation of “parallax arcsecond”, or 3.26 light-years.

It was clearly an embarrassing slip-up in an otherwise meticulously detailed series of movies – no matter how much franchise spokespeople and diehard Star Wars fans try to dance around the issue or claim that they knew all along that the parsec was a unit of distance. So in the Star Wars spinoff Solo: A Star Wars Story (2018), the explanation is provided that of course Solo was talking about distance: the Kessel Run is normally 20 parsecs long, but by taking shortcuts he managed to cut that distance down to under 12 parsecs. Sure.

Whether or not you are gullible enough to really believe the later explanation, it amounts to science reliteracy, which some textbooks also call reverse explaneering.

I also received several examples from a US accelerator physicist who sees in political affairs the equivalent of “beam-beam” effects. In a particle-collider context, this term refers to one beam breaking up, amplifying its own instabilities while exposing instabilities in the other. Similarly, when the US and China direct criticisms at each other, each ends up illuminating problems in the other’s situation while also magnifying its own. My collider-physicist contact has also found “coherent instability” – when attempts to control a disruption don’t reduce but only reinforce it – throughout politics. Take your pick, from COVID-19 to America’s handling of international problems with almost any other nation.

Then there’s “Landau damping”, which in accelerator physics is what happens when you successfully counteract coherent instabilities by cleverly arranging elements internal to the beam to interact with it. The same phenomenon can occur in politics when many factions, parties and agencies create a nearly insoluble problem, but then some of them suddenly do something sensible and effective to stabilize the situation, thereby establishing a new peaceful norm.

The critical point

Some readers of my January column accused me of irony and sarcasm, and lack of seriousness. I disagree. Further study of transmogrified physics is bound to reveal much about our surrounding world. I’ll leave you with two of my favourites, both of which are also linked to politics. There’s the screw, a politician who can make followers perform political work by spinning in circles. Finally, I bring you the jerk, a politician who can move with a high rate of acceleration away from a troublesome issue.

Best in physics: clinical prompt gamma measurements and FLASH dosimetry

The “Best-in-Physics” poster session at the 2020 Joint AAPM|COMP Virtual Meeting highlights the top five abstracts in the imaging, therapy and multi-disciplinary science tracks. Our first pick of this year’s top-scoring studies examined multi-target tracking during radiotherapy and carotid plaque evaluation using quantitative ultrasound. In our second look at the 2020 winning studies, we report on the first-ever prompt gamma-ray spectroscopy in a patient, plus an investigation into the use of ionizing radiation acoustic imaging for in vivo FLASH dosimetry.

First prompt gamma ray spectra measured during proton therapy

Proton therapy offers dosimetric advantages over photon-based radiotherapy, such as reducing integral dose by a factor of two or three while delivering the same dose to the target. It should also be possible to use the sharp distal edge of the Bragg peak to create a highly conformal high-dose area around the target. “However, in practice, we can’t do that yet as we don’t know exactly where the protons stop,” explained Joost Verburg, from Massachusetts General Hospital and Harvard Medical School.

To address this problem, Verburg and colleagues are developing prompt gamma-ray spectroscopy for in vivo verification of proton range. Prompt gamma rays are generated when the therapeutic proton beam interacts with atomic nuclei within the patient. The idea is to measure the range of every proton pencil beam, during treatment delivery, by acquiring energy- and time-resolved prompt gamma-ray spectra. These spectra are compared with a nuclear reaction model to determine the range deviation of the pencil-beam compared with the calculated treatment plan.

Verburg described his team’s prototype system and presented the first measurements of prompt gamma-ray spectra obtained during a patient treatment.

The MGH/Harvard team has created a full-scale prototype prompt gamma-ray spectroscopy system, comprising a collimated array of fast scintillation detectors mounted on a 7-axis robotic positioning system for patient alignment. Initial tests in phantoms revealed a typical measured range error of less than 1 mm. “This shows that in a situation where we actually know the materials and know where the protons end up, we are measuring exactly what we are supposed to,” Verburg noted.

The first patient recruited in the team’s clinical study received proton therapy for a complex base-of-skull meningioma. The researchers measured prompt gamma spectra of one treatment field once per week for five weeks. Results were consistent from week to week, with a systematic range deviation between the delivered proton range and the treatment plan.

The mean range error was just 1–2 mm, but Verburg noted that not all pencil beams stopped exactly where planned: the team observed a spread of around 3 mm (one sigma) and a maximum range error of about 8 mm. These errors still fit within the traditional range margins used for proton therapy (3.5% of the range plus 1 mm, or 6.6 mm in this case).
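
For reference, the quoted 6.6 mm follows from that recipe if the beam range for this field was about 160 mm – our inference, since the range itself is not stated:

```python
beam_range_mm = 160.0                          # inferred beam range [mm]
margin_mm = 0.035 * beam_range_mm + 1.0        # "3.5% + 1 mm" recipe
print(f"range margin: {margin_mm:.1f} mm")     # -> 6.6 mm
```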

“We successfully performed first ever prompt gamma ray spectroscopy in a patient,” Verburg concluded. “The range errors that we measured were mostly consistent between fractions and within traditional range margins that are applied in proton therapy. The test in phantoms also showed that our prompt gamma ray spectroscopy system is capable of aiming protons very precisely with a range position of around 1 mm. This shows great potential – if we measure and fine tune the range of the proton beams in the patient, we can reduce the need for margins and design better proton treatments in future.”

Ionizing radiation acoustic imaging offers in vivo FLASH dosimetry

FLASH radiotherapy is an emerging treatment modality that uses ultrahigh dose rates (above 40 Gy/s) to spare normal tissues and improve the therapeutic ratio compared with conventional radiotherapy. The instantaneous delivery of such high dose rates, however, increases the need for reliable beam localization and dose monitoring tools, particularly when dealing with deep-seated tumours, where current dosimeters can’t provide an adequate reading for safe delivery.

Noora Ba Sunbul from the University of Michigan is investigating the use of ionizing radiation acoustic imaging (iRAI) for in vivo FLASH dosimetry. “The main aim of this work is to fully develop a comprehensive simulation workflow to test the feasibility of iRAI as a reliable real-time dosimetry tool for FLASH radiotherapy,” she explained.

When a pulsed beam of therapeutic radiation strikes tissue it causes localized dose deposition and thermal expansion, which generates acoustic waves. iRAI works by detecting these waves with an ultrasound transducer and using them to construct dose-related images in real time.

To test their approach, Ba Sunbul and colleagues used a modified linac to deliver a 6 MeV electron beam in FLASH mode to a gelatin phantom. The field was collimated to 1×1 cm and an ideal transducer placed 10 cm from beam centre. They also simulated the detection of induced acoustic waves following FLASH using Monte Carlo and k-Wave. This involved simulating the full 3D dose distribution in the phantom, using this to define the initial pressure source, modelling the acoustic wave propagation and then reconstructing an image.
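
As a flavour of the first step in such a workflow – turning a dose map into the initial acoustic pressure – here is a one-dimensional sketch (the depth-dose curve and Grüneisen parameter are invented; the team’s actual pipeline is full 3D Monte Carlo plus k-Wave):

```python
import numpy as np

# Thermoacoustic source term: p0 = Gamma * rho * D, where Gamma is the
# dimensionless Grueneisen parameter, rho the density [kg/m^3] and D
# the dose per pulse [Gy = J/kg], giving p0 in pascals.
depth = np.linspace(0, 0.05, 500)                     # 0-5 cm [m]
dose = 0.5 * np.exp(-((depth - 0.015) / 0.008)**2)    # invented depth dose [Gy]

GAMMA = 0.2        # assumed Grueneisen parameter for gelatin
RHO = 1000.0       # water-like density [kg/m^3]

p0 = GAMMA * RHO * dose
print(f"peak initial pressure: {p0.max():.0f} Pa")    # -> 100 Pa
```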

“Comparing dose profiles at different depths with our film-measured results showed acceptable agreement between measured and simulated results, with less than 6% error at depths of less than 2 cm,” said Ba Sunbul.

The team also used the instantaneous pressure signal to define the edges of the beam, determined by the change in this pressure signal between the phantom entrance and exit. iRAI could identify the central beam edges within about 4% of the film measurement.

The instantaneous pressure signal amplitude, like the dose, was seen to be inversely proportional to the linac pulse duration and repetition rate, which define the dose rate. Ba Sunbul explained that the pulse duration affects both the temporal and spatial resolution of the 2D reconstructed iRAI images, with longer pulses creating lower-resolution images. She noted that this negatively affects the beam localization ability of iRAI.

“We have developed a full simulation workflow for testing iRAI applicability in FLASH radiotherapy using ideal point-source ultrasound transducers,” Ba Sunbul concluded. The next step, she added, will be to simulate iRAI image reconstruction to mimic experimental set-ups, noting that iRAI’s ability to detect beam edges has already been verified experimentally using conventional radiotherapy in rabbit measurements and in a liver phantom.

Therapeutics firms share inaugural prize for health physics

Companies that use physics to unlock next-generation drugs for respiratory disorders and enable advanced cancer treatments are the first recipients of a new award for early-stage companies in healthcare and medicine. Nebu~Flow, a University of Glasgow spin-out that is commercializing an acoustic nebulizer, and Cellular Highways, a Cambridge-based start-up that develops microfluidic devices for use in cell therapies, each received the inaugural Lee Lucas Award and £5000 in prize money as part of the annual Business Awards presented by the Institute of Physics (IOP).

Speaking on 15 July at the first of two webinars dedicated to the 2020 Business Awards, IOP vice-president for business James McKenzie praised Cellular Highways, Nebu~Flow and 11 other award-winning companies for their work in bringing physics-based products to the market. “The awards recognize the significant contribution that physics makes to industry and business, and everyone here has done an incredibly good job of building a business off of physics,” he told the online audience.

Better drug delivery, faster cell sorting

In accepting his company’s award, Nebu~Flow chief executive Elijah Nazarzadeh noted that, thanks to the COVID-19 pandemic, “everyone is now aware of the importance of respiratory disorders”. Even before the coronavirus, though, conditions such as asthma and chronic obstructive pulmonary disease killed one person in the UK every five minutes on average, and Nazarzadeh pointed out that current drug treatments suffer from a significant flaw: the medication doesn’t always reach the right part of the patient’s lungs.

A fine mist of aerosolized spray rises from the Nebu~Flow device as it undergoes tests to measure the size of droplets it produces

People with respiratory disorders, he explained, typically receive medication via nebulizers – machines that turn liquid drugs into aerosolized, inhalable droplets. The problem is that many of these droplets are too big or too small to reach the tiny air sacs in the lungs where carbon dioxide gets exchanged with life-sustaining oxygen. Nebu~Flow’s award-winning device tackles this problem by using controlled acoustic waves to create a more uniform droplet size, and the strategy seems to be paying off. In pre-clinical trials, the Nebu~Flow device increased the inhalable fraction of a common asthma drug, salbutamol, from 60% to 90%.

Like Nazarzadeh, Cellular Highways’ chief executive Samson Rogers began his talk by focusing on patients. Although cell-sorting technology is already an important tool in basic research, clinical applications such as cell therapies – in which cells are removed from the patient, assessed for their disease-fighting capabilities, and then selectively reintroduced into the body – require sorting speeds and volumes much higher than existing machines can deliver.  Cellular Highways’ vortex-actuated cell sorting (VACS) technology works by generating a transient vortex within a microfluidic channel. When this vortex flows downstream, Rogers explained, a cell is carried with it and gently deflected across the streamlines to be sorted. Each deflection takes place within a time envelope of 23 microseconds, making VACS “the fastest chip-based cell sorter yet invented”, according to the award citation.

Entrepreneurial inspiration

The Lee Lucas Award was made possible by a donation from a retired entrepreneurial physicist, Mike Lee, and his wife Ann (née Lucas). The pair were inspired to endow the award after reading a Physics World column in which McKenzie discussed the financial and reputational benefits that awards bring to early-stage companies.

In addition to the two Lee Lucas Award winners, four other firms – Advanced Hall Sensors, Hirst Magnetic Instruments, Promethean Particles and Thornton Tomasetti Defence – received IOP Business Innovation Awards for “delivering significant economic and/or societal impact through the application of physics”. A further seven companies (FeTu, Geoptic, ORCA Computing, Oxford HighQ, OxMet Technologies, Photon Force and QLM Technology) received IOP Business Start-up Awards, which honour companies for physics-based inventions at an earlier stage of commercial development.

Additional information about all 13 award-winners is available on the IOP website.

Starspot study sheds light on why some red giants spin faster than others

Some red-giant stars are rotating much faster than previously thought, according to a study led by Patrick Gaulme at Germany’s Max Planck Institute for Solar System Research. Using NASA’s Kepler space telescope, the astronomers found that about 8% of the red giants they observed are rotating fast enough to display starspots. The team reckons that the elderly stars acquire their rapid rotation by following one of three distinct routes in their evolution.

In main sequence stars like the Sun, the complex interplay that occurs between stellar rotation and the motions of plasma creates incredibly lively magnetic fields. When this magnetic activity is particularly strong, upwelling plumes of plasma in a star’s convective outer layers can be blocked, producing dark patches on its surface. To an observer on Earth, these starspots cause a periodic variation in the star’s brightness as it rotates, bringing the spots in and out of our field of view.

Until recently, starspots were not thought to be present on red giant surfaces. Since these older stars expand greatly as they move off the main sequence, while conserving their angular momentum, previous theories predicted that they must rotate more slowly than main sequence stars. Slower rotation should reduce magnetic activity, preventing starspots from forming.

Periodic brightness variations

Gaulme’s team tested this idea by analysing a sample of around 4500 red giants observed by Kepler between 2009 and 2013. Contrary to those predictions, they found that 370 of the stars – around 8% – displayed periodic brightness variations that could only be explained by starspots passing across their surfaces.

Some recent studies have suggested that red giant starspots could appear in binary systems – in which a red giant can acquire angular momentum from its companion star until their rotations are synchronized. This was indeed the case for some of the observed red giants, but this still only accounted for 15% of the spotty stars analysed by the team.

Engulfing planets

By following the clues offered by their oscillations, Gaulme and colleagues concluded that the remainder of the red giants fell into two groups. The first included stars with similar masses to the Sun, and the team believes these red giants acquired angular momentum as they engulfed orbiting planets and binary companions during their expansion.

In contrast, the more massive stars in the second group displayed lower magnetic activity during their time as main sequence stars. Their quieter environments prevented material from escaping, allowing the stars to retain angular momentum. Therefore, despite slowing slightly during their evolution, these stars still generate enough magnetic activity to display spots after becoming red giants.

The team now hopes to improve their understanding further using ESA’s PLATO mission, which is scheduled to launch in 2026.

The study is described in Astronomy & Astrophysics.

Growing the gravitational-wave network

How has the COVID-19 pandemic affected work on the twin Laser Interferometer Gravitational-Wave Observatory (LIGO)?

We were near the end of our third observing run, which was planned to end on 30 April, when the pandemic hit. COVID-19 forced us to shut down all operations at the LIGO observatories – located at Hanford, Washington and Livingston, Louisiana – on 27 March, about a month shy of the planned end date. Still, the run was a great success, with 56 gravitational-wave candidates detected in 11 months – about one each week. We’re now initiating a programme to install a series of detector upgrades, which we call Advanced LIGO Plus (A+), over the next 20 months that will result in gravitational-wave detections every one or two days. Even working remotely, the LIGO team has made good progress on LIGO A+ in the past few months.

When do you expect to restart operations on LIGO?

We tentatively resumed observatory work last month, once it was safe to do so and we had established protocols to execute the upgrade programme in a safe and socially distant way.

LIGO consists of over 150 institutions and more than 1300 people from around the world. Would the observatory be possible without such an international scope?

I don’t think so, for many reasons. The California Institute of Technology and the Massachusetts Institute of Technology are the institutions that built and operate LIGO. However, to design and use LIGO effectively you need to look beyond the US. So we looked to where the best science is being done. And historically – 30 or 40 years ago – you find some of the best ideas for interferometry were emerging from Australia, Italy, Germany, the UK and other places. Without these ideas and the people who developed and brought those ideas to maturation I don’t think we could have ever built LIGO, made it work and carried out its scientific programme.

How does the collaboration work with the European laser-interferometer gravitational-wave detector, situated near Pisa, Italy?

Virgo is a different detector from LIGO, built by an independent set of people. The LIGO scientific collaboration joined forces with Virgo in 2007 because of the realization that by combining our data, we could get much more information, particularly in the ability to locate gravitational events in the sky. This international collaboration has greatly expanded our ability to detect gravitational waves and understand where they come from, and allows us to make strong astrophysical statements about what we see.

Japan recently opened the KAGRA gravitational-wave detector and there are plans for an observatory in India. How will this enhance gravitational-wave science?

Having three separate interferometers far away from each other provides a better ability to localize signals over a portion of the sky. By putting a fourth detector in Japan and a fifth detector in India, we’ll be able to rapidly determine good sky localization for gravitational-wave events anywhere in the sky. This is a wonderful thing, because one of our core goals is to tell astronomers that we saw an event in this portion of the sky and they should look at it quickly because they’ll see something very dramatic. This is really what multimessenger astronomy is all about.

What other benefits are there of having multiple detectors?

These detectors are exquisitely sensitive and easily perturbed by small things such as a malfunction in the detector or even somebody driving a truck nearby. By having more detectors, even if you have one or two of them offline, you still get reasonable sky coverage. Redundancy in the network makes a big difference.

Given the impact of COVID-19, are you still keeping in touch with KAGRA and Virgo?

Yes. Virgo concluded its third observation run at the same time as LIGO. It is now back in limited operations and resuming its Advanced Virgo Plus upgrade programme. The KAGRA collaboration continues to commission its detector and successfully held its inaugural two-week long observing run in April. KAGRA will join LIGO and Virgo in the fourth observation run that is scheduled to begin in early 2022. Equally important, the LIGO–Virgo collaboration is analysing the third observing run data set and continuing to find merging binary systems that are challenging our understanding of neutron star and black hole populations. Stay tuned for new results coming out very soon.

The detection of the first neutron-star merger in 2017 opened up the era of multimessenger astronomy. What did it tell us about the universe?

The binary neutron-star merger was spectacular in so many ways. First and foremost, it fundamentally answered a lot of scientific questions, such as that binary neutron-star mergers produce short gamma-ray bursts and that they also produce unique conditions in which heavy elements in the periodic table, such as gold and platinum, are forged.

It must have been incredibly exciting at the time?

It became evident very quickly that this was a first-of-its-kind event. Over the following days, weeks and months, the astronomical community kept looking at the data because, if you pardon the pun, it was such a scientific gold mine. You see gamma rays that go away very quickly, then you see ultraviolet, optical and infrared. Then there is a point when X-rays are emitted as the ejected material hits the interstellar medium. The radio emission went on for months and months.

SNMMI Annual Meeting highlights nuclear medicine innovation

The Society of Nuclear Medicine and Molecular Imaging (SNMMI) held its Annual Meeting last week. Billed as the “Virtual Edition” and run over an interactive virtual platform, the event included live education sessions, scientific presentations, a virtual exhibit hall and several networking events. Here are some highlights from the research presented at this year’s event.

Total-body dynamic PET successfully detects metastatic cancer

Researchers from the University of California, Davis shared results from the first study using uEXPLORER to conduct total-body dynamic PET scans of cancer patients. Developed and validated at UC Davis, uEXPLORER is the first total-body (194 cm axial field-of-view), high-sensitivity PET/CT scanner in commercial use. A total-body dynamic PET/CT scan of a patient with metastatic cancer generated high-quality diagnostic images that identified multiple metastases.

Dynamic 18F-FDG PET with tracer kinetic modelling has the potential to better detect lesions and assess cancer response to therapy, according to the researchers. Unlike conventional PET scanners with limited axial field-of-view (15–30 cm), the uEXPLORER is capable of simultaneous dynamic imaging of widely separated lesions.

“The focus of our study was to test the capability of uEXPLORER for kinetic modelling and parametric imaging of cancer,” explained UC Davis’ Guobao Wang. “Different kinetic parameters can be used in combination to understand the behaviour of both tumour metastases and organs-of-interest such as the spleen and bone marrow. Thus, both tumour response and therapy side-effects can be assessed using the same scan.”

For the study, a patient with metastatic renal cell carcinoma was injected with the radiotracer 18F-FDG and scanned on the total-body scanner. The researchers calculated static PET standardized uptake values (SUVs) and performed kinetic modelling for regional quantification in 16 regions-of-interest, including major organs and six identified metastases. They calculated the glucose influx rate and used additional kinetic modelling to generate parametric images of the kinetic parameters. The kinetic data were then used to explore tumour detection and characterization. They note that parametric images of the FDG influx rate improved tumour contrast over SUV in general and led to improved lesion detection in the renal cortex, which has been historically challenging.
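
For illustration, one widely used form of such kinetic modelling is Patlak analysis, in which the influx rate Ki is the late-time slope of a transformed tissue-to-plasma plot. Here is a toy version with synthetic curves (not the study’s data, and not necessarily its exact model):

```python
import numpy as np

# Patlak plot: C_t(t)/C_p(t) = Ki * [int_0^t C_p dt / C_p(t)] + V,
# so Ki is the slope of the late, linear part of the plot.
t = np.linspace(1.0, 60.0, 60)                 # minutes post-injection
dt = t[1] - t[0]
Cp = 10.0 * np.exp(-0.1 * t) + 1.0             # toy plasma input function
Ki_true, V = 0.05, 0.4
Ct = Ki_true * np.cumsum(Cp) * dt + V * Cp     # toy tissue curve

x = np.cumsum(Cp) * dt / Cp                    # "Patlak time"
y = Ct / Cp
late = t > 20                                  # fit the linear tail only
Ki, intercept = np.polyfit(x[late], y[late], 1)
print(f"recovered Ki = {Ki:.3f} per min (true value {Ki_true})")
```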

“Total-body dynamic imaging and kinetic modelling enabled by total-body PET have the potential to change nuclear medicine into a multi-parametric imaging method, where many different aspects of tissue behaviour can be assessed in the same clinical setting – much like the information gained from different sequences in an MRI scan,” said Ramsey Badawi, co-director of the EXPLORER Molecular Imaging Center.

Novel PET radiotracer images malignant brain tumours

A first-in-human study has demonstrated the safety and favourable pharmacokinetic and dosimetry profile of 64Cu-EB-RGD – a new, relatively long-lived PET tracer – in patients with glioblastomas. The radiotracer delivered high-contrast diagnostic imaging in patients, visualizing tumours that express low or moderate levels of αvβ3 integrin with high sensitivity.

PET and MRI of a glioblastoma patient

The study included three healthy volunteers and two patients with recurrent glioblastoma who underwent whole-body 64Cu-EB-RGD PET/CT. The new radiotracer has unique qualities, explained Jingjing Zhang, from Peking Union Medical College Hospital. The peptide sequence Arg-Gly-Asp (RGD) specifically targets the cell surface receptor αvβ3 integrin, which is overexpressed in glioblastomas. To slow clearance, Evans Blue dye, which reversibly binds to circulating albumin, is bound to RGD, significantly enhancing target accumulation and retention. The addition of the 64Cu label provides persistent, high-contrast diagnostic images in glioblastoma patients.

The researchers calculated tumour-to-background ratios for target tumour lesions and normal brain tissue in seven sets of brain PET and PET/CT scans obtained over two consecutive days. They also collected safety data (such as vital signs and physical exams, for example) for each participant, one and seven days after the scans.

64Cu-EB-RGD was well-tolerated, with no adverse effects seen immediately or up to one week after administration. The mean effective dose of 64Cu-EB-RGD was similar to that of an 18F-FDG scan. Injection of 64Cu-EB-RGD into patients with recurrent glioblastoma generated high accumulation at the tumour, with continuously increased tumour-to-background contrast over time.

The researchers noted that, as well as providing a future diagnostic tool, 64Cu-EB-RGD could also be a breakthrough targeted radiotherapy for glioblastoma patients.

“64Cu-labelled EB-RGD represents a viable model compound for therapeutic applications, since 177Lu, 90Y or 225Ac can be substituted for 64Cu,” explained Deling Li of Beijing Tiantan Hospital. “We are currently studying the 177Lu homologue to treat glioblastoma and other αvβ3 integrin expressing cancers, including non-small cell lung, melanoma, renal and bone, and hope to build on the current wave of radiotherapies like 177Lu-DOTATATE.”

PET/MRI approach identifies chronic pain locations

A molecular imaging approach that combines 18F-FDG PET and MRI can precisely identify the location of pain generators in chronic pain sufferers, according to a study from Stanford University School of Medicine. The imaging data led to new management plans for 62% of the study’s participants.

18F-FDG PET has the ability to accurately evaluate increased glucose metabolism that arises from acute or chronic pain generators, such as hypermetabolic inflamed tissues. The researchers sought to develop a clinical 18F-FDG PET/MRI method to accurately localize sites of increased inflammation related to sources of pain.

Imaging pain

The team imaged 65 patients with chronic pain, from the head to the feet, using the 18F-FDG PET/MRI protocol. Principal investigator Sandip Biswal and colleagues used image analysis software to measure maximum standardized uptake values and target-to-background ratios. They identified locations of increased 18F-FDG uptake and compared these to known locations of pain, finding increased uptake in muscle and nerves at the site of pain in 58 patients.

These imaging findings resulted in 36 significant modifications in management plans, such as the recommendation of new invasive procedures, and 16 mild modifications, such as ordering an additional diagnostic test.

The researchers recommend that a study of a large cohort of patients be conducted to validate their findings. “The results of this study show that better outcomes are possible for those suffering from chronic pain,” said Biswal. “This clinical molecular imaging approach is addressing a tremendous unmet clinical need. I am hopeful that this work will lay the groundwork for the birth of a new subspeciality in nuclear medicine and radiology.”

Targeted radionuclide therapy enhances immunotherapy response

Immunotherapy has had limited success in the treatment of prostate cancer, due to a significantly immunosuppressive tumour microenvironment. Now, a preclinical study conducted at the University of Wisconsin-Madison has shown that targeted radionuclide therapy creates a favourable tumour microenvironment in prostate cancer, which improves the effectiveness of immunotherapies.

“Understanding this treatment dynamic, our goal was to demonstrate that systemically delivered targeted radionuclide therapy provides beneficial immunomodulatory effects that may enhance the response of prostate cancer to immunotherapies,” said Reinier Hernandez.

Y-86-NM600 uptake

The researchers administered the imaging agent 86Y-NM600 to mice with prostate tumours and performed PET/CT scans 3, 24, 48 and 72 hr after injection. Analysing radiotracer uptake in tumours and in healthy tissues allowed them to estimate the dosimetry for targeted radionuclide therapy using 90Y-NM600. They then gave groups of mice either a high or low dose of 90Y-NM600, and monitored tumour growth and survival for 60 days.

PET/CT data revealed that 90Y-NM600 immunomodulates the prostate tumour microenvironment by modifying tumour-infiltrating lymphocyte populations, upregulating checkpoint molecules and promoting the release of pro-inflammatory cytokines. The team found that 90Y-NM600’s pro-inflammatory effects on the tumour microenvironment were elicited at relatively low radiation doses without incurring systemic toxicity.

“Our results provide a rationale for combining targeted radionuclide therapy with immunotherapies, which, so far, have proven ineffective in prostate cancer. Improving immunotherapy in prostate cancer could bring about a potentially curative treatment alternative for advanced-stage patients,” said Hernandez.

The researchers noted that the study also presents a paradigm change in targeted radionuclide therapy, where the maximum tolerable dose may not always be the most beneficial to patients. “A key finding of our work is that immunomodulation is best achieved with a relatively low radiation dose to the tumour which does not affect the normal immune system,” Hernandez explained.

Quantum squeezing is achieved at room temperature

An optomechanical device that adjusts – or “squeezes” – the uncertainties in the quantum properties of laser light has been developed by Nancy Aggarwal at the Massachusetts Institute of Technology and colleagues. The team created their source of squeezed light using a mirror that oscillates under radiation pressure, but displays little thermal fluctuation, even at room temperature. Their approach could soon be used to boost the performance of gravitational wave detectors.

When researchers make measurements on a laser signal, uncertainties arise from quantum fluctuations in the numbers of photons detected and the times at which the photons arrive at the detector. The relationship between this pair of uncertainties is described by the uncertainty principle, which dictates that a decrease in uncertainty in photon number must be accompanied by an increase in uncertainty in timing and vice versa. Reducing the uncertainty of one measurement at the expense of increasing the other can be advantageous in some experiments — and is called quantum squeezing.
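
In symbols, using the heuristic number-phase form of the uncertainty relation (an idealization adopted here for illustration; the experiment’s measured observables are strictly the quadratures of the light field):

```latex
\Delta N \,\Delta\phi \;\gtrsim\; \tfrac{1}{2},
\qquad \text{squeezing:}\quad
\Delta N \to e^{-r}\,\Delta N,\ \ \Delta\phi \to e^{+r}\,\Delta\phi
\quad (r > 0).
```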

Currently, squeezing of light must be carried out at cryogenic temperatures to minimize thermal fluctuations – and this requires bulky experimental equipment. Now Aggarwal’s team has created a room-temperature system involving two opposite-facing mirrors within a spherical cavity, housed in a vacuum chamber. One of the mirrors has a radius of 1 cm and is permanently fixed in place, while the other is just 70 microns across and is supported by a moveable cantilever of about the same size.

Radiation pressure

When a laser is fired into the cavity, radiation pressure imparted by its photons forces the smaller mirror to oscillate. This movement creates correlations between the number of photons hitting the mirror and the timings of those photons. By fine-tuning their setup, therefore, Aggarwal and colleagues could cause the cavity to squeeze light by adjusting the uncertainties in numbers and timing.

To avoid the need for cold temperatures, Aggarwal’s team constructed their oscillating mirror from alternating layers of gallium arsenide and aluminium gallium arsenide – both of which contain pure, highly ordered atomic structures. Within this composite material, any heat-driven collisions between electrons were suppressed. This meant that instead of jittering due to thermal fluctuations, the mirror’s motion was dominated by light-driven radiation pressure. For the first time, this allowed the team to produce squeezed light at room temperature, and at a broad range of frequencies, demonstrating a reduction in quantum noise of 15% compared with previous techniques.
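
If that 15% refers to the noise power relative to the unsqueezed level (our assumption), it corresponds, in the decibel units conventional in the squeezing literature, to

```latex
-10\,\log_{10}(0.85) \;\approx\; 0.7\ \text{dB of squeezing.}
```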

Perhaps the most exciting potential application for the team’s device will be improvements to the LIGO and Virgo gravitational wave detectors, which require compact and stable setups, as well as constant operation at room temperature. Through future research, Aggarwal and her colleagues now hope to adapt their setup to work with all possible wavelengths of incoming laser light.

The new system is described in Nature Physics.

Space-based spectrometer enters neutron lifetime debate

While protons remain stable for at least 10³⁴ years outside the nucleus, a free neutron survives for just under 15 minutes before it decays. The neutron’s precise lifetime is, however, a matter of some debate, as the two techniques commonly used to measure it have produced conflicting results of 880 and 888 seconds.

A team of researchers at the Johns Hopkins University Applied Physics Laboratory in the US led by Jack Wilson has now put forward a third, radically different technique that involves measuring the number of neutrons near a planet. Using data acquired by the neutron spectrometer on NASA’s MESSENGER spacecraft during flybys of Venus and Mercury in 2007 and 2008, they calculated the neutron lifetime to be 780 ± 90 seconds. While this measurement has a large uncertainty, the researchers note that the MESSENGER instrument was never designed to perform studies of this type – meaning that a dedicated instrument on a future mission could produce a measurement with much higher precision.

Bottle and beam methods

The established techniques for measuring neutron lifetime are both laboratory-based. In the first, known as the “bottle” method, researchers use magnetic fields or mechanical forces to confine low-energy neutrons in a trap. They then count the number of particles remaining after a fixed period of time. In the “beam” method, they count the number of decay products – protons, electrons and antineutrinos – from a beam of neutrons.

The eight-second discrepancy between the neutron lifetimes measured using the bottle and beam methods has important implications for fundamental physics. For example, the neutron lifetime is a key parameter in studies of nucleosynthesis (element formation) in the Big Bang. Resolving this discrepancy (which has been blamed on unknown experimental errors) is therefore key to understanding how our universe formed 13.8 billion years ago.

The MESSENGER spacecraft’s neutron spectrometer was designed to measure Mercury’s surface composition, and to determine whether the planet’s poles might contain water ice. The instrument consisted of a 10 cm cube of borated plastic scintillator sandwiched between two 4 mm-thick, 100 cm² lithium-glass plates that are sensitive to neutrons via the neutron-capture reaction ⁶Li + n → ³H + ⁴He.

This reaction occurs when cosmic rays strike Mercury’s atmosphere or surface, so in effect, MESSENGER’s spectrometer measured the rates at which neutrons “leak out” from the planet. The number of neutrons detected depends on how long it takes the particles to “fly up” and reach the spacecraft’s neutron spectrometer. Hence, the shorter the neutron’s lifetime, the fewer the neutrons that survive long enough to reach the detector.
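
A toy one-parameter version of that logic (illustrative only: real neutrons leave the planet with a range of speeds on ballistic arcs, and the team fitted detailed production-and-transport models) fits the lifetime to how counts fall off with altitude:

```python
import numpy as np

# A neutron of speed v reaching altitude h has flight time ~ h/v, so
# counts fall as exp(-h/(v*tau)) on top of 1/h^2 geometric dilution.
tau_true = 780.0                       # neutron lifetime [s]
v = 2.2e3                              # assumed neutron speed [m/s]
h = np.linspace(1e5, 3e6, 40)          # spacecraft altitudes [m]

model = lambda tau: (1.0 / h**2) * np.exp(-h / (v * tau))
rng = np.random.default_rng(1)
counts = model(tau_true) * rng.normal(1.0, 0.05, h.size)   # 5% noise

# Compare models with different lifetimes, as in the paper:
taus = np.linspace(500, 1100, 601)
chi2 = [np.sum(((counts - model(tau)) / model(tau_true))**2) for tau in taus]
print(f"best-fit lifetime ~ {taus[np.argmin(chi2)]:.0f} s")
```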

Venus flybys

Before MESSENGER entered Mercury’s orbit, it went through a series of flybys of Earth, Venus and Mercury. During the second Venus flyby, its neutron spectrometer was switched on to check that the instrument was working properly. The data used in this study, which is published in Physical Review Research, were taken during this Venus encounter and during MESSENGER’s first flyby of Mercury.

Venus’ atmosphere is both simple and relatively uniform, containing mainly carbon dioxide (96% by volume) and nitrogen (most of the remainder). Because MESSENGER made observations over a large range of heights above the planet’s surface, the researchers were able to measure how the neutron flux changes with distance.

“The basis of our measurements is a set of models of the neutron production, propagation and detection during these flybys that are modelled with different lifetimes,” Wilson explains. “The shorter the neutron lifetime, the more rapidly the neutron counts decrease with altitude and the model that best fits the data gives us the lifetime.”

A “giant bottle experiment”

The researchers describe their spacecraft-based technique as being conceptually like a giant bottle experiment, one that uses Venus’ gravity to confine neutrons for periods comparable to their lifetime. However, they emphasize that its systematic uncertainties are completely different from those of previous measurements. This, they argue, is potentially a big advantage, as the most likely cause of the disagreement between the beam and (conventional) bottle techniques is that one or both of them underestimated or missed a systematic error.

The work is a proof-of-principle demonstration that such an approach is at least possible, Wilson tells Physics World. More progress might be made using other planetary neutron spectrometer data sets, but he suggests that the best path forward would be to design a dedicated instrument on a future mission optimized to measure the neutron lifetime from orbit. Venus, he adds, is a good candidate in this respect thanks to its thick atmosphere and large mass, which effectively trap neutrons.

“The main thing that held our measurement back was the short time MESSENGER spent at Venus (roughly 45 minutes),” he says. “If we could spend a longer time there, we could improve the statistics of the measurement and avoid introducing the systematics associated with using the data from this spacecraft.”
