
Alien astronomers on hundreds of nearby exoplanets could have spotted life on Earth

Over the past 25 years astronomers have observed thousands of exoplanets – planets that orbit stars other than the Sun. So, it stands to reason that alien astronomers on exoplanets may have observed Earth. Now, Lisa Kaltenegger, director of Cornell University’s Carl Sagan Institute, and astrophysicist Jackie Faherty, a senior scientist at the American Museum of Natural History, have created a catalogue of more than 2000 nearby stars from which an observer on an exoplanet could spot Earth using the transit method.

A transit can be observed when an exoplanet’s orbit takes it in front of its star as viewed from Earth. This causes a periodic dip in light from the star and such observations play an important role in the ongoing discovery of exoplanets.
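Just how small that dip is for an Earth-like planet is easy to estimate. As a rough illustration (our back-of-envelope numbers, not the study’s), the fractional drop in brightness is simply the ratio of the planet’s disc area to the star’s:

```python
# Rough illustration: the transit depth is approximately (R_planet / R_star)^2,
# ignoring limb darkening and grazing geometries.
R_EARTH = 6.371e6  # Earth radius in metres
R_SUN = 6.957e8    # solar radius in metres

depth = (R_EARTH / R_SUN) ** 2
print(f"Earth transiting the Sun dims it by {depth:.1e} ({depth * 100:.4f}%)")
# ~8.4e-05, i.e. less than 0.01% -- which is why an alien astronomer would
# need very precise photometry to notice us.
```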

“We identified 1715 stars within about 300 light-years from the Sun that are in the right position to have spotted a transiting Earth from around 5000 years ago, a period that roughly corresponds with the rise of humanity,” Kaltenegger tells Physics World. “Because the universe is dynamic, and stars move, this cosmic front row seat is both gained and lost. An additional 319 stars will enter this special vantage point in the next 5000 years.”

Tuning in to Earth

The duo imposed a further limiting distance of 100 light-years to highlight worlds that could have received human-transmitted radio waves since the dawn of radio stations around a century ago. Estimating that around 25% of stars are orbited by potentially habitable rocky exoplanets, the scientists calculated the number of Earth-like worlds that fall within these distances.

“Within the 300 light-years there should be about 500 potential habitable worlds, within 100 light-years you find 29 planets that radio waves will have already washed over,” explains Kaltenegger. Although astronomers have not probed this region for Earth-like exoplanets, it is crowded with stars and we already know of seven exoplanets in the habitable zones of their stars – orbits that favour the emergence of life.

“Who knows if life evolved there too, but if it did, and it had a similar technology level that we have, then such nominal alien observers could have spotted or will spot life on our own world.”


The duo’s work marks the first time that researchers have considered the Earth-transit vantage point as a changing system. This has been made possible by a recent data release from the European Space Agency’s Gaia space telescope, which is creating a 3D image of the Milky Way.

“There have been other phenomenal catalogues that preceded Gaia but no other observatory reached the same quantity and depth,” explains Faherty. “Gaia is capable of mapping the lowest mass stars – the M-class dwarfs – in tremendous detail and they are the most numerous stars in the galaxy.”

Faherty explains that instead of asking the question “What can see us right now?” Gaia allowed the duo to “wind the clock backwards and forwards” to see where the stars’ motions have taken them and how long they have been able to occupy the perfect seat to see Earth as a transiting planet.

“The Gaia Catalogue has enabled a fresh and detailed dynamic look at the galaxy,” she adds.

Amongst the systems that have enjoyed prime Earth-viewing time in the past is Ross 128, which is 11 light-years from Earth. The system consists of a red-dwarf star orbited by Ross 128 b, a super-Earth with a diameter around twice that of our planet.

“Intelligent life on Earth”

“Any civilization with our level of technology could have seen us already on Ross 128 b but lost that vantage point about 900 years ago. Would anyone have concluded that there was intelligent life on Earth 900 years ago?” asks Kaltenegger.

The Trappist-1 system with its seven exoplanets will enter the Earth Transit Zone in around 1640 years. It is about 45 light-years away and at least four of its exoplanets occupy that system’s habitable zone and will remain in front row seats for around 2300 years.

Whilst Earth moves into view for these exoplanets, astronomers continue to perfect the tools they use to discover and investigate planets outside the solar system. Playing a key role in this work will be the James Webb Space Telescope (JWST), which should launch later this year.

“If there are worlds around any of these stars, then we can use JWST to try and glean information about their atmospheres,” says Faherty. “Astronomers are hot on the trail of tracking down what biosignatures might reveal themselves using sophisticated light gathering techniques.”

Whilst we are searching for those markers with increasingly sophisticated equipment, it is enthralling to entertain the idea that other life forms could be simultaneously searching for us.

“To me this research embeds us in not only space but also in time, telling us that we are lucky to find the exoplanets we do because our cosmic vantage-point also changes with time and will be lost and gained for different worlds,” Kaltenegger concludes. “There are 2034 objects in the night sky that could have already spotted us as a transiting world. If there were life on any planets around them, I wonder what they would think of us?”

The research is described in Nature.

Fuel flow, pressure and heat fluctuations drive combustion oscillations in rocket engines


Researchers in Japan have identified a feedback loop that drives damaging combustion oscillations in rocket engines. They found that thermoacoustic power sources created as the oxidizer and fuel flow into the engine’s combustion chamber lead to highly synchronized fluctuations in fuel flow, pressure and heat.

Combustion engines power much of our transport technology and are also used in turbines for power generation. But these engines can develop high-frequency oscillations, which cause structural damage, shortening the life span of combustors and potentially making them unsafe.

“Thermoacoustic combustion oscillations, which are a self-sustaining instability, arise from the strong mutual coupling among hydrodynamics, acoustic waves and heat release rate fluctuations inside a combustor,” Hiroshi Gotoda, a mechanical engineer at Tokyo University of Science, tells Physics World. He notes that the oscillations are classified by the dominant frequency of the pressure fluctuations in the combustion chamber as low-frequency (below 50 Hz), intermediate-frequency (50–1000 Hz) and high-frequency (above 1000 Hz).

Gotoda adds that these combustion oscillations hinder the development of combustors for rocket and aircraft engines, and land-based gas-turbine power plants, because of the unacceptable structural damage they can cause. An in-depth understanding of the causes of these combustion oscillations is needed.

Computational modelling

In their latest research, published in Physics of Fluids, Gotoda and his colleagues used a computational model of a rocket combustor to study combustion events and combustion oscillations, applying sophisticated time-series analytical methods based on information theory, symbolic dynamics and complex networks. The aim of the work was to examine the physical mechanisms underlying the formation and sustainment of high-frequency combustion oscillations. In particular, they were interested in the feedback processes between flow velocity fluctuations in the fuel and oxidizer injectors, and pressure and heat release rate fluctuations in the combustor.

Rocket engines use fuel injectors to deliver a fuel, usually hydrogen, and oxygen, the “oxidizer”, to a combustion chamber where ignition and subsequent combustion of the fuel occurs. The researchers discovered a feedback relationship between fluctuations in the flow velocity of the fuel injector and pressure fluctuations in the combustor.

Specifically, they found that pressure fluctuations in the combustor cause flow velocity fluctuations in the fuel injector. This drives periodic ignition of the unburnt fuel–oxidizer mixture, significantly changing the ignition location and the heat release rate in the combustor. These heat release rate fluctuations in turn drive pressure fluctuations in the combustor, which feed back on the heat release rate by causing further flow velocity fluctuations. As this feedback continues, the researchers found that the heat release rate fluctuations and pressure fluctuations become highly synchronized.
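To give a flavour of what “highly synchronized” means in practice, here is a minimal sketch of one standard diagnostic – the phase-locking value between two signals – run on synthetic data. It is purely illustrative: the team’s actual analysis rests on information theory, symbolic dynamics and complex networks, and every parameter below is invented.

```python
# Illustrative sketch: phase-locking value (PLV) between two synthetic
# signals standing in for pressure and heat-release-rate fluctuations.
# PLV ~ 1 means the two oscillate in lockstep; PLV ~ 0 means no phase relation.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
fs = 10_000                       # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
f0 = 1500                         # a "high-frequency" oscillation, above 1 kHz

pressure = np.sin(2 * np.pi * f0 * t) + 0.3 * rng.standard_normal(t.size)
heat_release = np.sin(2 * np.pi * f0 * t + 0.4) + 0.3 * rng.standard_normal(t.size)

phase_p = np.angle(hilbert(pressure))        # instantaneous phases
phase_q = np.angle(hilbert(heat_release))
plv = np.abs(np.mean(np.exp(1j * (phase_p - phase_q))))
print(f"phase-locking value: {plv:.2f}")     # close to 1: strongly synchronized
```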

Gotoda’s study examined a rocket engine with a cylindrical combustor with an off-centre coaxial fuel injector. Fuel flows into the combustor from the outer part of the coaxial injector, while the oxidizer flows from the inner part. The researchers found that thermoacoustic power source clusters developed in the hydrodynamic shear layer region between the oxidizer and fuel. These would form, suddenly collapse and then re-emerge upstream, leading to oscillations in combustion. This repeated formation and collapse was found to play an important role in driving the combustion oscillations.

The researchers believe that their analysis method will lead to a better understanding of the mechanisms behind the formation of combustion oscillations. “The availability of the presented time-series analysis should be shown for various types of combustors,” Gotoda tells Physics World. “The findings obtained in this study shed light on a better understanding of physical mechanism on high-frequency combustion oscillations, and contribute to the academic systemization of nonlinear problems in the field of aerospace engineering and related nonlinear science.”

Torque-driven phase separation of self-propelled particles has been discovered

A new kind of phase separation that occurs when self-propelled particles are subject to torque has been discovered by researchers in South Korea and the US. Steve Granick and his team at the Ulsan National Institute of Science and Technology have pioneered the use of synthetic “Janus” nanoparticles to study the phase behaviour of nonequilibrium systems. Teaming up with theorists at Princeton University, they show, in a paper published in Nature Physics, that the phase separation of these particles is driven by orientational ordering, making it distinct from other nonequilibrium systems.

Systems of self-propelled – or “active” – particles have a unique phase behaviour because they exist out of equilibrium. Perhaps the most intriguing of these behaviours is that active particles can phase separate into a liquid and gas phase even if the particles repel each other, in a phenomenon called motility-induced phase separation (MIPS). As Granick describes, “It’s like if cars steered toward crowded areas and made the crowd even bigger without attracting each other”. Active matter research was transformed by the discovery of MIPS, but Granick’s team have found that it is not the only way for active particles to phase separate.

Janus particles: a model nonequilibrium system

The team in Korea has spent years studying active matter using Janus particles in a liquid solvent. These are silica spheres a few microns in size with a metal coating on one hemisphere. In an alternating electric field, the two hemispheres polarize, inducing ionic flows in the solvent which, because they are unbalanced, propel the particle forwards.

They found that systems of Janus particles do indeed phase separate, but the underlying mechanism is not MIPS. Particles undergoing MIPS become jammed into position in the liquid phase, whereas the system they observed was dynamic, with particles moving quickly between the gas and liquid.

“Active chains” hold key to phase separation


A clue to the origin of this novel behaviour is the presence of transient chains of Janus particles. This indicates that phase separation is driven by torque, and the researchers believe that the same mechanism that propels the particles is also behind this orientational ordering.

When the electric field is turned on, the two hemispheres of the Janus particles polarize, which induces two off-centre dipoles in the particles. The dipoles are repulsive, but if one particle is surrounded by others, the overall torque interaction favours dipole alignment along the direction of motion.

The researchers developed a model of the system that incorporates this torque and they found that it causes the particles to orient in the direction of the density gradient, which leads to phase separation.
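The flavour of such a model can be captured in a few lines. The sketch below is our own toy, not the researchers’ code, and all of its parameters are invented: self-propelled particles in two dimensions whose headings feel a torque turning them towards the local density gradient, the ingredient identified as the driver of phase separation.

```python
# Toy model: 2D self-propelled particles with a torque that turns each
# heading towards the local density gradient (all parameters illustrative).
import numpy as np

rng = np.random.default_rng(0)
N, L = 400, 20.0                       # particles, periodic box size
v0, K, Dr, dt = 0.5, 1.0, 0.1, 0.05    # speed, torque strength, rotational noise, timestep

pos = rng.uniform(0, L, (N, 2))
theta = rng.uniform(0, 2 * np.pi, N)

def density_gradient(pos):
    """Crude local density gradient: Gaussian-weighted sum of displacements
    to neighbours, which points towards crowded regions."""
    d = pos[None, :, :] - pos[:, None, :]
    d -= L * np.round(d / L)           # minimum-image convention
    w = np.exp(-(d ** 2).sum(-1) / 2.0)
    np.fill_diagonal(w, 0.0)
    return (w[:, :, None] * d).sum(axis=1)

for _ in range(2000):
    grad = density_gradient(pos)
    target = np.arctan2(grad[:, 1], grad[:, 0])
    # the torque steers each heading towards the density gradient, plus noise
    theta += K * np.sin(target - theta) * dt \
             + np.sqrt(2 * Dr * dt) * rng.standard_normal(N)
    pos = (pos + v0 * np.column_stack((np.cos(theta), np.sin(theta))) * dt) % L

# Scattering `pos` now typically shows dense liquid-like clusters coexisting
# with a dilute gas -- and the particles keep moving rather than jamming as
# they would in MIPS.
```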

This is distinct from MIPS, which arises because a moving particle is more likely to encounter another particle in front of it than behind it. This asymmetry causes the particles to slow down and cluster. As Ricard Alert, who developed the theory at Princeton University, says, “This finding brings a new idea into the field, showing that not only forces but also torques can produce condensation and liquid-gas phase separation”.

The research also shows that it is possible to have phases of matter without orientational order even if the inter-particle interactions favour alignment. Because the phase separation happens without the particles becoming jammed together as in MIPS, the particles can move freely, and despite the formation of short-lived chains, the liquid clusters show no internal order over long timescales.

Janus particles in action

This microscope video shows a cluster of self-propelled particles, colour-coded according to their distance from the centre. The particles move rapidly through the cluster as some particles leave it and new particles join it. Video courtesy of Jie Zhang, Ricard Alert, Jing Yan, Ned S Wingreen and Steve Granick.

Optical–ultrasound technology boosts thyroid cancer screening

Thyroid nodules – small lumps that form within the thyroid gland – are relatively common, particularly among women. Most are harmless, but a small percentage of such nodules are cancerous. Currently, preliminary screening of thyroid nodules is performed by physical examination aided by ultrasound imaging and a biopsy. However, existing ultrasound procedures for assessing thyroid nodules suffer from low sensitivity and specificity. This lack of effectiveness could impact the screening results, leading to inaccurate diagnosis, missed cancers or false positives that may result in unnecessary surgeries.

LUCA – a beam of hope

To address this problem, a team of multidisciplinary scientists has created LUCA: a laser and ultrasound co-analyser for thyroid nodules. This innovative technology, developed by scientists in the LUCA consortium and coordinated by the Institute of Photonic Sciences (ICFO) in Barcelona, aims to provide enhanced information during thyroid screening, resulting in better diagnosis and improved patient care. The group is developing a simple, low-cost, multimodal device that combines the use of near-infrared light and medical ultrasound to improve the screening of thyroid nodules for cancer.


The LUCA device, described in Biomedical Optics Express, combines two photonic technologies – near-infrared time-resolved spectroscopy (TRS) and diffuse correlation spectroscopy (DCS) – with multifunctional ultrasound imaging, eliminating the need for biopsy. What distinguishes the LUCA device from conventional ultrasound systems is that in addition to the ultrasound examination, which provides anatomical information, LUCA also provides physiological information.

Firstly, the device measures the optical properties of the underlying thyroid tissue using the TRS module, which utilizes short laser pulses (roughly 100 ps) of varying wavelength. In tests on a healthy volunteer, the team was able to acquire quantitative information regarding physiological and cellular structures in the thyroid. Secondly, the DCS module, which uses a continuous-wave laser source to illuminate the tissues, measures blood flow. By combining these two modules, the researchers can obtain complementary data on tissue haemodynamics, oxygen metabolism and structure – information that could reduce the uncertainty in the diagnosis of thyroid nodules.
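At the heart of DCS is the normalized intensity autocorrelation of the detected laser speckle: moving red blood cells scramble the speckle pattern, so the faster the blood flow, the faster the autocorrelation decays. The sketch below is our illustration of that quantity on synthetic data – it is not the LUCA software, and every number in it is invented.

```python
# Illustrative sketch of the DCS observable: the normalized intensity
# autocorrelation g2(tau) = <I(t) I(t+tau)> / <I>^2 of a detected speckle.
import numpy as np

def g2(intensity, max_lag):
    i = np.asarray(intensity, dtype=float)
    mean_sq = i.mean() ** 2
    return np.array([np.mean(i[:-lag] * i[lag:]) / mean_sq
                     for lag in range(1, max_lag + 1)])

# Synthetic intensity trace whose fluctuations decorrelate over ~50 samples,
# standing in for speckle scrambled by moving scatterers.
rng = np.random.default_rng(2)
kernel = np.exp(-np.arange(300) / 50.0)
trace = 1.0 + 0.05 * np.convolve(rng.standard_normal(100_000), kernel, mode="same")

curve = g2(trace, max_lag=200)
decay = 1 + np.argmax(curve - 1 < (curve[0] - 1) / np.e)
print(f"g2 decays over ~{decay} lags; faster blood flow means a faster decay")
```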

LUCA’s precision test

To ensure the alignment of the different technologies within the LUCA prototype, the team created a unique multimodal probe. The probe includes both a standard ultrasound transducer and optical fibres, to allow simultaneous acquisition of the optical and ultrasound signals. The system also includes an interactive display system incorporating several functionalities.

To verify the precision of LUCA in the reproducibility of data, the team monitored the thyroid lobe of a healthy volunteer, performing five independent measurements of the same lobe on four days over the course of two weeks. The average scanning time was approximately one minute for each measurement. The researchers observed slight variations in DCS and TRS values between repeated measurements. However, they believe that the variations are inconsequential, given the high quality of ultrasound images acquired by the device.

Specifically, the device could determine haemodynamic properties with a precision of better than 3% in a single measurement, and with a reproducibility of better than 10% for in vivo measurements repeated over several days.

Towards clinical acceptance

Senior author Turgut Durduran, head of the medical optics group at ICFO, explains that for LUCA to be accepted for clinical use, it must first be standardized through calibration and quality assurance procedures. In establishing the clinical usability of the LUCA device, the team also validated the TRS and DCS through independent phantom tests. They used a 32-cm head-and-neck phantom that mimics the human anatomical structure to test the linearity, accuracy and reproducibility of the LUCA device. Tests of the TRS module using this phantom validated its suitability for in vivo studies.

Meanwhile, the team tested the DCS module, which measures haemodynamic parameters, by using it to measure the Brownian diffusion coefficient of a liquid phantom. The capacity of the LUCA device to accurately distinguish the concentration properties in the phantom further demonstrated its clinical usability.

Durduran says that the team is now using the device in a clinical environment and has tested it on 18 healthy volunteers and diagnosed thyroid nodules in 47 patients. The study revealed the potential of the LUCA device for identifying nodules as benign or malignant. The researchers note that these nodules had originally been classified as indeterminate by conventional ultrasound screening.

Currently, the team is striving to improve the precision and accuracy of the optical data analysis to enhance disease detection and prognosis. The researchers believe that the LUCA device provides an innovative tool with the potential to reduce costs related to thyroid cancer screening, efficiently diagnose thyroid cancer nodules, and improve the thyroid cancer screening process in comparison with conventional ultrasound.

Not everything that can happen does happen – reformulating physics as laws about the impossible

If a 1 kg mass is dropped from a height of 100 m, what is its velocity when it hits the ground? My eldest daughter is currently grappling with such thorny questions in her physics lessons, but one answer she is not expected to give is: when, exactly, did this happen? It’s a purely hypothetical scenario, in which we freely change the details (what if that mass were 2 kg?).
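For the record, energy conservation settles the question (neglecting air resistance), and the mass cancels, so her hypothetical 2 kg variant hits the ground at exactly the same speed:

```latex
mgh = \tfrac{1}{2}mv^{2}
\quad\Longrightarrow\quad
v = \sqrt{2gh}
  = \sqrt{2 \times 9.8\,\mathrm{m\,s^{-2}} \times 100\,\mathrm{m}}
  \approx 44\,\mathrm{m\,s^{-1}}.
```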

But in her provocative new book The Science of Can and Can’t, Chiara Marletto, a physicist at the University of Oxford, looks again at how physics treats these hypothetical scenarios. The “traditional conception of fundamental physics”, she says, has no room for such hypotheticals, nor for counterfactual, alternative versions of “what actually happens”. While abstract laws of physics (such as Newton’s) are all very well, a widespread view is that they operate in conjunction with unique and specific initial conditions to create a universe in which only one inevitable thing ever happens at each moment. (This was exemplified in some responses to my recent Physics World article on free will (January 2021).)

However odd it might seem at first encounter, this traditional, reductionistic view seems to insist that anything that doesn’t actually happen must be impossible. Yet “there are questions that this approach cannot answer”, Marletto argues – “questions that are deep and important for understanding the full reality of a physical phenomenon”. She offers a delightful metaphor. Suppose we ask what the purpose is of the little rowing boats attached to a ship. Why, they are lifeboats to be used in the event of sinking, of course. But if the ship never sinks over all its working life, so they are never used, were they ever truly lifeboats? Their function can be defined only in terms of a counterfactual scenario – an alternative to the observed reality. Yet surely that’s still the right answer.

Marletto has developed this science of counterfactuals – of what “can and can’t” be – in collaboration with fellow Oxford physicist and author David Deutsch, for whom it supplies a central plank of his “constructor theory”. That theory is an attempt to reformulate laws of physics in terms of fundamental statements about what is and is not possible. As Deutsch has put it: “This central role for the impossible is not only a formal implementation of the Popperian idea that the content of a scientific theory is in what it forbids. It is also an important difference between the constructor-theoretic conception of the physical world and the prevailing one: what actually happens is seen as an emergent consequence of what could happen, rather than vice-versa.”

In other words, we currently deduce general laws on the basis of specific observations of things that do happen, but perhaps it would be more fruitful to understand observations as consequences of fundamental laws about what can and can’t occur in principle.

In fact, some quantum theorists are attempting to do precisely that. They try to formulate quantum phenomena such as entanglement in terms of so-called “no-go theorems”: statements about what is impossible. One such principle is quantum no-cloning: it is impossible to make an identical copy of an unknown quantum state. Remarkably, from such statements one can recover all the familiar tenets of quantum mechanics, such as superposition and Heisenberg’s uncertainty principle.
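The no-cloning theorem itself needs little more than the linearity of quantum mechanics. In a standard textbook sketch (our addition, not spelled out in the book), suppose some unitary U could copy an arbitrary unknown state, U|ψ⟩|0⟩ = |ψ⟩|ψ⟩; applying U to a superposition exposes the contradiction:

```latex
U\big(\alpha|0\rangle + \beta|1\rangle\big)|0\rangle
  \;\stackrel{\text{linearity}}{=}\;
  \alpha|0\rangle|0\rangle + \beta|1\rangle|1\rangle
  \;\neq\;
  \big(\alpha|0\rangle + \beta|1\rangle\big)\otimes\big(\alpha|0\rangle + \beta|1\rangle\big),
```

since a genuine copy would contain cross terms such as αβ|0⟩|1⟩. The two sides agree only if α or β vanishes, so no universal copier exists.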

At root, these theorems are statements about the allowed transformations of quantum information. This is potentially one of the most fertile applications of the counterfactual approach, although it is a little disappointing that Marletto does not explicitly make the connection between the route she and Deutsch are taking here, and what others are doing in this field of “quantum reconstructions”.

More generally, counterfactuals might help to clarify the emerging links between physical theory and information. This puzzle goes back at least to Maxwell’s demon (oddly absent here); Erwin Schrödinger’s musings on life and thermodynamics; and the developing nexus of information and computation theory, cognition and biology. The information-carrying potential of a system like a microchip or DNA, Marletto explains, comes from a counterfactual property: they could be in other states than they are, and we can only interpret their state as informational with reference to those counterfactuals.

The “science of can and can’t” is tremendously ambitious, seeking to reformulate quantum and classical physics

The programme of the “science of can and can’t”, or constructor theory, is thus tremendously ambitious, seeking a reformulation of quantum and classical physics, the theory of computation, emergence and complexity, and more. It points to the enticing notion of a “universal constructor”. By analogy with Turing’s universal computer, which “can be programmed to perform any calculation that is physically allowed”, a universal constructor could enact all transformations that are physically possible. I think these ideas are onto something – but only because they seem to be steering towards much the same goal as a lot of other work coming along different routes. So it’s frustrating that Marletto overlooks the connections.

Her argument here is also somewhat undermined by a littering of sloppy claims. It’s really no longer good enough to see organisms described as readouts of genes, each “coding for a different trait”, nor to see genes portrayed as “replicators”. The assertions that art “advances” via a Popperian correction of errors, or that natural selection “cannot perform jumps” and can “stagnate” to produce mass extinctions, don’t inspire confidence. We get the old, flawed story about Copernicanism resulting from an “irremediable clash” of geocentrism with observation, and the misleading suggestion that memory entails the mere copying of signals from the environment into the brain for later retrieval, as if in a filing cabinet.

Meanwhile, some readers may be as puzzled as I was by long passages stating the obvious in a laboured fashion without any indication of what question is supposedly being addressed. There are also too few examples of concrete advantages the counterfactual perspective brings; without more, Marletto’s claim to use “can and can’t” to produce a “theory of knowledge” seems little more than a redefinition of knowledge to fit the theory. The book exemplifies a common flaw of works that expound the author’s pet theory: a failure to capitalize on, or even to recognize, what other viewpoints could contribute to it.

Do counterfactuals, though, clarify free will? “We do not yet know how to accommodate exactly free will in physics,” Marletto writes, “but that only means we have to think harder.” I look forward to the thinking.

  • 2021 Allen Lane £20hb 272pp

Electrons ‘surf’ on Alfvén waves in plasma-chamber experiments

For the first time, experiments have clearly shown how powerful Alfvén waves in the Earth’s magnetosphere transfer their energy to electrons that then cause intense episodes of the Northern and Southern Lights. The work was done in the US by James Schroeder at Wheaton College in Illinois and colleagues at the University of Iowa, University of California, Los Angeles and the Space Science Institute. They did scaled-down experiments in a special plasma chamber to study electrons as they “surfed” along Alfvén waves.

As it reaches far into space, Earth’s magnetic field becomes highly distorted by the solar wind – a stream of charged particles emitted by the Sun. On the side facing away from the Sun, its field lines extend out to enormous distances, forming a vast magnetic tail. During violent solar flares and coronal mass ejections, disruptions to the solar wind can cause field lines in this magnetotail to stretch out, and eventually break and pinch together through magnetic reconnection.

Like firing a slingshot or catapult, these events release vast amounts of magnetic energy – launching powerful waves of oscillating ions called Alfvén waves, which travel back towards Earth along magnetic field lines. As the waves encounter electrons travelling at about the same speed and in the same direction, a mechanism called Landau damping causes these electrons to be driven along by the Alfvén waves.

Violent collisions

Through this mechanism for energy transfer, electrons can be accelerated to speeds of up to 20,000 km/s as they approach Earth. At this point, Earth’s magnetic field passes through the thin upper atmosphere overlying the polar regions. Here, surfing electrons collide violently with atoms and molecules, causing them to emit light. As a result, these electrons are responsible for creating particularly vibrant auroras.

To study this acceleration process, spacecraft and sounding rockets have been sent close to the point of reconnection in the magnetotail. So far, however, these efforts have been unable to directly measure the energy transfer between Alfvén waves and auroral electrons.

Now, Schroeder’s team has recreated this process using the Large Plasma Device (LAPD) at UCLA’s Basic Plasma Science Facility. Within LAPD, the team generated a scaled-down version of the newly reconnected magnetotail by firing Alfvén waves down the length of the LAPD’s 20 m-long plasma chamber. Subsequently, a small fraction of plasma electrons was accelerated by the waves.

Velocity distribution

By measuring the velocity distribution of the accelerated electrons, Schroeder and colleagues could study the acceleration process. To further understand their experiments, the team compared their results with both numerical simulations and mathematical models of Landau damping within the LAPD chamber. As they hoped, the simulations closely agreed with their experimental observations.

For the first time, their results clearly demonstrated a direct transfer of energy between Alfvén waves and the high-speed electrons partially responsible for generating the aurora.

The research is described in Nature Communications.

Nanoscale clock hints at universal limits to measuring time

Imagine the sound of a ticking clock. How much time passes between each tick? For a good clock, the answer should be one second, to some precision. If we want to make the clock more precise, the laws of thermodynamics dictate that we must put in more work – and the amount of waste heat dissipated to the surroundings must increase to compensate for the more highly ordered ticks. Ultimately this leads us to a surprising, but inescapable result: the better we make our clock, the more we increase the disorder, or entropy, of the universe.

In 2017, physicists showed that the accuracy of a quantum clock is directly proportional to the entropy created. Now, researchers in the UK and Austria have discovered that a similarly proportional relationship holds for a nanoscale classical clock. The work, published in Physical Review X, hints that not only does measuring time necessitate some entropy increase, but also that the exact relation between the accuracy of clock ticks and the entropy produced may be a universal aspect of timekeeping. The result could have implications for nanoscale heat engines, which operate in a similar way to clocks, and technologies that depend on accurate timekeeping.
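Schematically, the relation takes a simple form. In this line of work a clock’s accuracy is usually quantified as the average number of ticks the clock delivers before it is out by a single tick, and the finding – first in the quantum case and now, apparently, in the classical one – is that this number grows linearly with the entropy dissipated per tick (our paraphrase, not the paper’s notation):

```latex
N = \frac{\mu^{2}}{\sigma^{2}}, \qquad N \;\propto\; \frac{\Delta S_{\mathrm{tick}}}{k_{B}},
```

where μ and σ are the mean and standard deviation of the interval between ticks, ΔS_tick is the entropy produced per tick and k_B is the Boltzmann constant.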

Synchronizing the quantum and the classical

Testing the link between accuracy and entropy is hard in a classical system because it is so difficult to keep track of the transfer of heat and work. To overcome this barrier, Natalia Ares and her team at the University of Oxford worked with colleagues at Lancaster University, the Institute for Quantum Optics and Quantum Information, and the Vienna Center for Quantum Science and Technology to create a simple optomechanical “nano-clock” consisting of a membrane driven by an electric field. The displacement of the membrane as it wiggles up and down is recorded, and each wiggle is counted as a clock tick. The clock has a useful output – a train of ticks – at the expense of increasing the disorder of its environment by heating the circuit connected to the membrane.

This nanoscale system was too large to be analysed quantum mechanically, and it was also physically entirely different to the quantum clocks studied previously. Nevertheless, the researchers found the same type of relation between accuracy and entropy as in the quantum clocks. The relation between clock accuracy and waste entropy in the experiment is also consistent with the researchers’ theoretical model, confirming that the pattern can hold for classical models as well as quantum ones.

Redefining optimal

Although the researchers only tested the relation between entropy and accuracy for one specific implementation of a classical clock, they claim that the similarity to the quantum result suggests that it may hold true for any clock. The researchers also suggest redefining an “optimal” clock as being one that has the highest possible accuracy with the least entropy dissipation (according to the discovered relation between accuracy and entropy), independent of the clock’s physical details.

Writing on the Oxford Science Blog, Ares suggests that the relation between accuracy and entropy “might be used to further our understanding of the nature of time, and related limitations in nanoscale engine efficiency”. One major open question concerns how the so-called “arrow of time” manifests itself on small scales and quantum scales. The arrow of time is often defined as being the direction in which entropy increases; conversely, all clocks use entropy increase in some form to quantify the passage of time. A better understanding of the measurement of time could thus provide new insights into how heat, work and time’s arrow are connected.

Caution needed when testing Einstein’s general relativity using gravitational waves

Physicists should be wary of data from gravitational-wave observatories that appear to contradict Einstein’s general theory of relativity. That is the message from researchers in the UK, who have analysed how errors accumulate when combining the results from multiple black-hole mergers. They say that current gravitational-wave catalogues contain nearly enough events to potentially generate errors large enough to be confused with signals for alternative theories of gravity.

The discovery of gravitational waves by the LIGO collaboration in the US in 2015 was one of the most important vindications of Einstein’s general theory of relativity. That theory, formulated a century earlier, predicts that massive, accelerating objects will generate wave-like distortions in space-time that radiate away from them. The waves are minuscule, but LIGO and other laser interferometers are now sensitive enough to pick up the distinctive waveforms from certain pairs of massive celestial objects. In the case of the first detection and most others since, the objects in question were two merging black holes.

Ironically, however, physicists also hope that gravitational waves might reveal flaws in general relativity. They strongly suspect that the theory does not provide a complete description of gravitational interactions, given its incompatibility with quantum mechanics. To this end, researchers make detailed comparisons between the waveforms of gravitational radiation picked up by interferometers and those predicted by general relativity – with any inconsistencies between the two signalling a possible hole in the theory.

As Christopher Moore and colleagues at the University of Birmingham point out, all detections to date have been consistent with general relativity. But the scrutiny will intensify as LIGO and its European counterpart – the Virgo detector in Italy – become more sensitive, and other observatories start up elsewhere. Indeed, it might become possible to identify features in the observed waveforms that discriminate between general relativity and alternative theories, such as ones motivated by quantum gravity.

Combining data

Doing so with individual events is limited by the strength of the signal in each case. But as the number of events increases – to date there have been about 50 binary systems observed – researchers are looking to combine the data from them and thereby perform more stringent tests.

In the latest work, Moore and colleagues sought to establish the possible extent of systematic errors when such multi-event analyses are carried out. Their results, they say, surprised them: small model errors can accumulate faster than expected when events are combined together in catalogues.

As the researchers explain in a paper published in the journal iScience, modelling the waveforms from specific celestial phenomena is a complex business. As such, they say, several simplifications have to be imposed to make the calculations manageable. These include the removal of higher-order mathematical terms and the need to ignore certain physical effects – such as those deriving from black holes’ spin and orbital eccentricity. Even then, they say, finite computing power limits the calculations’ accuracy.

Additional parameters

Using a simple method of data analysis that assumes the signal-to-noise ratio is very large, Moore and colleagues found that the extent of error accumulation depends on how individual gravitational-wave events are combined – in other words, on how additional parameters are added to the equations of general relativity. On one hand are parameters that would be common to all events, such as the mass of the hypothetical force particle known as the graviton. On the other are parameters whose values can change from one event to the next – such as “hairs” on black holes.

In addition, say the researchers, error accumulation depends on how modelling errors are distributed across catalogue events and how they align with different deviations from relativity – whether they always tend to push the deviation in the same direction or whether they instead cause it to average out.
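A toy calculation shows why a shared bias is so dangerous. In the sketch below – our illustration with invented numbers, not the Birmingham group’s analysis – every event carries the same small waveform-model bias. Averaging shrinks the statistical error as 1/√N while the bias stays fixed, so the apparent deviation from general relativity grows ever more significant:

```python
# Toy illustration: a common systematic bias does not average away, so the
# combined "deviation from GR" gains significance roughly as sqrt(N).
import numpy as np

rng = np.random.default_rng(1)
bias = 0.05    # shared waveform-model bias in the deviation parameter (invented)
sigma = 0.2    # per-event statistical uncertainty (invented)

for n_events in (1, 10, 30, 100):
    # each event measures deviation = 0 (GR is true) + bias + statistical noise
    measurements = bias + sigma * rng.standard_normal(n_events)
    combined = measurements.mean()
    combined_err = sigma / np.sqrt(n_events)
    print(f"N = {n_events:3d}: deviation = {combined:+.3f} ± {combined_err:.3f} "
          f"({abs(combined) / combined_err:.1f} sigma)")
# By N = 100 the catalogue reports a confident "deviation" even though
# general relativity is exactly true in this toy universe.
```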

“Dangerously close”

Moore and colleagues conclude that even if a waveform model is good enough to analyse individual events, it may still create erroneous evidence for physics beyond general relativity “with arbitrarily high confidence” when applied to a large catalogue. In particular, they calculate that such a false signal could emerge from as few as 10–30 events with a signal-to-noise ratio of at least 20. That, they write, “is dangerously close to the size of current catalogues”.

The researchers acknowledge that more work needs to be done to gauge the reliability of such multiple-event analyses. In particular, they say it will be necessary to test the statistical procedures involved with simulated as opposed to real data.

“Excellent starting point”

Other scientists welcome the new research. Nicolás Yunes of the University of Illinois Urbana-Champaign in the US says the problem of mistaking errors for new physics has been known about for some time but reckons the study represents “an excellent starting point to continue investigating this potential problem and determine how to overcome it”.

Katerina Chatziioannou of the California Institute of Technology in the US argues that although waveform models are “good enough” for existing gravitational-wave data it remains to be seen whether they are up to the job in the future. Nevertheless, she says, researchers are “actively working to improve the models”.

Indeed, Emanuele Berti of the Johns Hopkins University, also in the US, is optimistic that a “self-correcting process” will take place. “As we learn waveforms and astrophysical properties of the events,” he says, “we should be able to correct for the effects pointed out in the paper.”

Solving the proton puzzle

Randolf Pohl was not in a good mood, late one evening in July 2009. Sitting in a control room at the Paul Scherrer Institute (PSI) in Switzerland, he was cursing the data from a project he’d embarked on over a decade earlier. Known as the Charge Radius Experiment with Muonic Atoms (CREMA), it was designed to measure the radius of the proton more precisely than ever before. Technically very demanding, CREMA involved firing a laser beam at hydrogen atoms in which the electron had been replaced by its heavier cousin, the muon.

Pohl, who was then a postdoc at the Max Planck Institute of Quantum Optics in Garching, Germany, was trying to tweak the laser until its energy was just enough to excite these muonic hydrogen atoms from the 2S1/2 to the 2P1/2 levels. Quantum theory says that the energy difference between the levels, known as the Lamb shift, should depend ever so slightly on the size of the proton. The idea was to detect the X-rays emitted when the atoms were excited by the laser and decayed to a much lower energy level. The precise laser frequency at that point, combined with atomic theory, would then reveal the proton radius.
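The sensitivity comes from a standard result of bound-state QED (our gloss, not a formula from the experiment): the proton’s finite size shifts an S energy level in proportion to the probability of finding the orbiting lepton at the nucleus,

```latex
\Delta E_{\mathrm{fs}} = \frac{2\pi}{3}\, Z\alpha\, |\psi(0)|^{2}\, r_{p}^{2},
```

and because the muon is about 200 times heavier than the electron, its orbit is drawn in and |ψ(0)|² grows by roughly 200³ – a factor of several million – which is what makes muonic hydrogen so exquisitely sensitive to the proton radius r_p.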

Unfortunately, having scanned what they thought was the entire frequency range corresponding to all possible radii, Pohl and his colleagues had come up empty-handed. There was no X-ray signal in sight. But then team member Aldo Antognini, who turned up for the night shift, had an inspired idea. He suggested looking at a range of frequencies, seemingly ruled out by previous experiments, that CREMA had yet to explore. With potentially just a week of observation time left, the researchers quickly re-adjusted their equipment.


Amazingly, a signal emerged. Clearly visible above the background, it indicated that the proton’s radius is 0.84184 ± 0.00067 fm (where 1 fm = 10⁻¹⁵ m). That made it nearly 4% smaller than the then official value set in 2006 by the Committee on Data of the International Science Council (CODATA) and, given the tiny error bars, completely at odds with it.

Puzzling times

The strange discrepancy between CREMA and previous measurements became known as the “proton radius puzzle” because it involved what appeared to be two contrasting but very well founded sets of results. On the one hand was the CODATA value, calculated on the basis of data from around two dozen experiments using two techniques: electron scattering and hydrogen spectroscopy. On the other was the CREMA result – a single, very precise spectroscopy measurement for which no obvious flaws had been found, either before or after the result’s publication in 2010 (Nature 466 213).

The disparity created great excitement among theorists, who speculated that it could be due to some previously unforeseen difference in the fundamental behaviour of electrons and muons. After all, the Standard Model of particle physics says that (apart from their masses) electrons and muons are completely alike. So if CREMA’s result was right, it raised the thrilling prospect that the Standard Model might need overhauling.

But excitement began to wane when theorists failed to find a new force that could explain the discrepancy. What’s more, by 2017 new results from the two kinds of traditional proton-radius experiments started to confirm the muon data. CREMA’s famous result now appeared not as a harbinger of revolution in physics but as a wake-up call that the earlier scattering and spectroscopy measurements had gone badly wrong. More remarkable than the discrepancy, however, was the way that new measurements from those traditional techniques seemed to fall in line – as if the field underwent a collective U-turn.


Wim Ubachs of Vrije Universiteit in Amsterdam, who is not part of CREMA, admits to being baffled by this “peculiar matter”, as he puts it, but stresses he has the “highest esteem for the people involved in the field and would not want to point to any wrongdoing or manipulating of data”. Pohl himself says that systematic errors in the earlier work must be to blame, though he is unable to identify the culprits. “It’s very strange that so many experiments could be wrong in the same way,” he says.

Others, however, think the mystery may not be all it’s cracked up to be. Some nuclear physicists even dispute the idea that researchers had no inkling of a smaller radius until CREMA came along. Among those is Ulf Meißner at the University of Bonn in Germany, who says he had first started arguing for a lower value for the proton radius in the mid-1990s. “There was a clear discrepancy,” he recalls. “But for whatever reason CODATA was always sitting on the high value.”

Indeed, the saga raises questions about how the values of the fundamental constants ought to be decided and what role CODATA should play. For Meißner, the decision-making is not transparent and depends too much on certain individuals’ tastes. “It is more psychology than physics,” he claims.

Narrowing the gap

CODATA was set up by the International Council for Science in 1966 to organize and preserve the increasing volumes of data produced by scientists around the world. It entrusts the delicate task of setting the values of nature’s most basic parameters, such as the Planck constant, electron mass or gravitational constant, to the Task Group on Fundamental Constants. Consisting of 15 or so scientists from around the world, the group usually adjusts those values every four years using the least-squares method to fit them as closely as possible to available experimental and theoretical results.
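For a single constant, the core of such a least-squares adjustment is just an inverse-variance weighted average, as in this minimal sketch (the numbers are illustrative, not actual CODATA inputs):

```python
# Minimal sketch: the one-constant core of a least-squares adjustment is the
# inverse-variance weighted mean of the available measurements.
import numpy as np

def weighted_mean(values, errors):
    w = 1.0 / np.asarray(errors, dtype=float) ** 2
    mean = np.sum(w * np.asarray(values)) / np.sum(w)
    return mean, 1.0 / np.sqrt(np.sum(w))

# Invented inputs: one precise "small radius" and one less precise "large radius"
values = [0.842, 0.875]   # fm
errors = [0.001, 0.007]   # fm
mean, err = weighted_mean(values, errors)
print(f"adjusted radius = {mean:.4f} ± {err:.4f} fm")   # ~0.8427 ± 0.0010 fm
# The precise measurement dominates -- which is why admitting the muonic
# result would have dragged the official average towards itself.
```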

For most constants, this process is not ambiguous as different results agree among themselves within the limits of their error bars. That much seemed true when the task group first published an official value of the proton radius – or, more specifically, the average radius of the proton’s electric charge – in 2002. At that time, the radius was derived from the two more conventional types of experiment. Electron scattering measures the extent to which high-energy electrons are deflected by hydrogen nuclei (protons). Ordinary spectroscopy, meanwhile, compares the measured frequencies of one or more electronic transitions in hydrogen with the values predicted by quantum electrodynamics (QED) to obtain a value for the Lamb shift and with it the proton radius. The figure CODATA settled on in 2002 was 0.8750 ± 0.0068 fm, which changed only slightly four years later, remaining a little under 0.88 fm.


But in 2010 along came CREMA with its muonic hydrogen spectroscopy measurements. The advantage of using muons is that they are about 200 times heavier than electrons and so get closer to the proton than their lighter counterparts, making the Lamb shift more pronounced. The resulting value of around 0.84 fm was far more precise than the official number, but the CODATA task group decided to stay put. It omitted the muon data from its 2010 adjustment, partly because new, improved scattering data from the MAMI accelerator at the University of Mainz in Germany agreed with the bigger radius. And four years later the group did the same thing when its members met up in Paris. Even though a number of invited speakers had argued it was getting ever harder to identify any experimental or theoretical loopholes that could explain away the CREMA results, the group voted – by a count of eight to two – to again exclude the muon data.

It was only in 2018 that the panel changed its approach. By then, several experimental teams had either published or communicated new data from conventional hydrogen spectroscopy agreeing with CREMA – including one at the Max Planck Institute (whose team included Pohl) and another at York University in Toronto, Canada. With CREMA itself having published an even more precise value of the proton radius, the task group finally incorporated the muon data. The value that emerged from its best fit that year was very similar to the muonic one alone, but with bigger error bars: 0.8414 ± 0.0019 fm.

Decisions, decisions

In deciding what to do with conflicting data, the CODATA task group aims to be neutral. Peter Mohr, a physicist from the US National Institute of Standards and Technology who chaired the group from 1999 to 2007 and is still a member, explains that it incorporates all “individually credible” results and then takes an average. “[It] does not decide whether particular data are right or wrong,” he says. “This would require superhuman powers.”

However, the panel’s handling of the proton radius raises questions. Mohr says it “made less sense to average” the large and small radii in 2014 than it did four years later, given the absence of independent confirmation at that time. But why it changed tack in 2018 is not clear. According to the minutes of that year’s meeting, again held in Paris, the panel considered recent results from conventional spectroscopy to be “inconclusive” given that, alongside the support for a smaller radius, researchers at the Kastler Brossel Laboratory in Paris again obtained the higher value when measuring hydrogen’s 1S–3S transition.

Mohr defends the group’s decision-making process, maintaining that the situation with the proton radius was “not all cut and dried”. He and his colleagues decided ultimately to change the value, he explains, given a “preponderance of evidence” in favour of the smaller number. But he admits that the statement about the hydrogen spectroscopy results in the 2018 minutes was “poorly worded”.

Others, however, offer a different interpretation. Despite being “close to 100% convinced” in 2014 that the small radius was correct, Pohl, who gave a talk at the meeting that year, says he nonetheless supported retaining the high value. That approach, he felt, would keep the spotlight on the proton-radius puzzle and motivate further work to try and resolve it. Indeed, Simon Thomas from the Kastler–Brossel Laboratory in Paris also thinks that CODATA wanted to keep the question alive rather than obtain the most precise possible value of the radius.

As he notes, the CREMA result was not really at odds with individual spectroscopy experiments – all but one differed by no more than 1.5 standard deviations, or σ. The only significant disparity – of at least 5 σ – arose when the conventional data were averaged and the error bars shrank. But that disparity could only be maintained if the muon result itself was kept out of the fitting process – given how much it would otherwise shift the CODATA average towards itself.

Thomas sees nothing wrong with the task group drawing attention to the puzzle rather than simply opting for the most precise values of the constants. (Indeed, Pohl reckons “no-one cares” about the exact value of the proton radius apart from spectroscopists.) He also regards the consequent boost in research funding as a necessary and healthy part of the scientific process. “It is only when both scientific and strategic motivations coincide that they [scientists] take a stance on a result,” Thomas says.

A black box

It’s debatable whether the task group has always been so enthusiastic about bringing inconsistencies to light. Indeed, some nuclear physicists reckon it did just the opposite before the muon data emerged – having ignored scattering data pointing to a smaller radius (see box below). The result, they say, was a high value of the radius that appeared more solid than it really was.

The task group’s approach did not change even after CREMA published its initial muon data. In 2014 it called on “two pairs of knowledgeable researchers” to extract a value from the scattering data. These were Ingo Sick from the University of Basel in Switzerland teaming up with John Arrington of the Argonne National Laboratory in Illinois, and Michael Distler and Jan Bernauer from the University of Mainz, Germany. With both pairs calculating a high value, the scattering radius continued to be large – about 0.88 fm.

But, as before, others had arrived at a different conclusion. Douglas Higinbotham of the Thomas Jefferson National Accelerator Facility in Virginia and colleagues showed they could use linear and other simple extrapolations of low-momentum data to work out the radius, rather than the higher-order polynomials favoured by Sick and others. Their work yielded a value consistent with CREMA – indeed, they argued that “the outliers” were not the muon or scattering results but those from ordinary (rather than muonic) hydrogen.

The task group mentioned this study and two others favouring the smaller radius in its 2014 report, but dismissed them on the basis of a critical analysis by Bernauer and Distler. That pair identified what it claimed were “common pitfalls and misconceptions” in the other groups’ statistical analyses, arguing in the case of Higinbotham and co-workers that they had misunderstood tests used to justify lower-order extrapolations.

Higinbotham sees things differently. He maintains that the use of extrapolations based on higher-order polynomials “makes no mathematical sense”, adding that nuclear theorists have in fact been using dispersion relations to obtain a low value of the radius since the 1970s. But he complains that no-one from the small-radius camp was interviewed by CODATA in 2014. “It can be tough when you feel you are not getting the opportunity to discuss your point of view,” he says.

Meißner compares the task group’s approach unfavourably with that adopted in particle physics, where the international Particle Data Group, he says, lists all competing values of constants and lets users make up their own minds. CODATA’s decision-making process is, he argues, more “like a black box”, with the panel including or excluding data on the basis of “personal decision” rather than objective criteria. Indeed, Mohr has “to confess to not remembering” why the group relied so heavily on Sick’s analysis (with no explanation appearing in its various reports).

As to the use of Bernauer and Distler’s critical analysis in 2014, he says the task group was presumably “persuaded by their arguments”. But why no explanation of that choice? “That becomes a political question I guess,” he replies. “Evidently we trusted those people but to go into details about why we trusted them I can’t give concrete arguments.”

Scattered doubts


As a technique for determining the proton radius, electron scattering predates spectroscopy by several decades. First carried out in the 1950s, it initially gave atomic physicists a value for the radius so they could test QED by comparing theory with measured transition frequencies. Unfortunately, scattering is tricky to carry out and interpret. Experimentalists have to measure the number of deflected electrons as a function of the scattering angle, but the most accurate value for the proton radius comes when there is no deflection (i.e. when electrons transfer zero momentum to the protons). To obtain this physically impossible data point, physicists plot a mathematical function known as the “electric form factor” and extrapolate it back to zero momentum transfer. But that’s a contentious process, yielding a range of values for the radius even from a single experiment.
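The radius enters through the slope of the form factor at zero momentum transfer, r_p² = −6 dG_E/dQ² at Q² = 0, so everything hinges on what function is fitted before extrapolating. The sketch below – our illustration on synthetic data, not any group’s actual analysis – shows how the choice of polynomial order alone shifts the extracted radius:

```python
# Sketch: extract r_p from the linear term of polynomial fits to synthetic
# form-factor data, using r_p^2 = -6 dG_E/dQ^2 at Q^2 = 0 (numbers invented).
import numpy as np

r_true = 0.84     # fm, used to generate the fake data
hbar_c = 0.1973   # GeV fm, converts between fm and GeV^-1

rng = np.random.default_rng(3)
q2 = np.linspace(0.005, 0.06, 40)                    # GeV^2, low-momentum points
g_e = (1 - (r_true / hbar_c) ** 2 * q2 / 6           # linear (radius) term
       + 6.0 * q2 ** 2                               # mock higher-order curvature
       + 0.001 * rng.standard_normal(q2.size))       # measurement noise

for order in (1, 2, 3):
    coeffs = np.polynomial.polynomial.polyfit(q2, g_e, order)
    r_fit = np.sqrt(-6 * coeffs[1]) * hbar_c         # radius from the linear term
    print(f"order-{order} fit: r_p = {r_fit:.3f} fm")
# A linear fit misses the curvature and biases the radius low, while higher
# orders let noise leak into the linear term -- the sensitivity at the heart
# of the dispute over scattering extractions.
```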

Back in its 1998 adjustment, CODATA’s Task Group on Fundamental Constants considered two scattering-derived radii. One (0.862 fm) came from low-momentum data at the University of Mainz’s MAMI accelerator in 1980, while the other (0.847 fm) was published in 1995 by Ulf Meißner and two theorists at Mainz using “dispersion relations” to analyse all existing scattering data. Giving equal weight to both numbers, the group ended up with a simple unweighted mean of 0.855 fm for the proton radius. But as this figure conflicted with the value from spectroscopy, CODATA based the radius that year solely on the results from atomic physics – and as a result didn’t formally list the proton radius among its fundamental constants.

In 2002 the task group instead turned to Ingo Sick of the University of Basel, described by Mainz group member Michael Distler as “the pope of scattering”. His global scattering analysis yielded a much higher value – 0.895 fm – which was no longer in conflict with spectroscopic data. The panel that year therefore published the combined radius, and went on to do so in both 2006 and 2010 – again relying on Sick’s number, citing both his original study and several follow-up papers. But in none of those three adjustments did the task group reference Meißner’s work. Meißner, who says he has never been told why his work was excluded, maintains that the group in effect took sides by abandoning its usual practice of averaging different results. “In this they certainly have not been neutral whatsoever,” he says. “If they are neutral they should quote whatever is around and not select.”

End game

Whether a different treatment of the scattering data could have ushered in a smaller proton radius before CREMA found its signal, or at least raised the uncertainty of the large value, is a moot point. Distler argues that the data simply weren’t precise enough to clearly distinguish between two different sizes, while Meißner thinks the discrepancy would have been less pronounced come 2010 but cautions that his view is “pure speculation”.

The task group itself noted in its 2010 report that, with the scattering data removed, the discrepancy between conventional and muonic results dropped from a whopping 7 σ to just 4.4 σ. Under those circumstances, Thomas reckons that the disparity would still have raised the same questions about a possible crack in the Standard Model, but would have generated much less media coverage. “I think there would still have been a puzzle, only a little less famous,” he says.

Scientists naturally continue searching for systematic errors when a result disagrees with what they expect but otherwise might not try quite so hard

As to what caused the presumed but still unidentified errors in conventional spectroscopy, Thomas suggests that scientists naturally continue searching for systematic errors when a result disagrees with what they expect, but otherwise might not try quite so hard. Or, in the words of Jean-Philippe Karr, a fellow atomic physicist at the Kastler–Brossel lab, “People are serious and continue looking, but maybe they are a bit less motivated when they are at the right value, so to speak.”

To try to work out what might have pushed up the result from their 1S–3S measurement, Thomas and colleagues are studying the same transition in deuterium, which could reveal any missed systematic errors due to atoms’ finite velocities. Similar work is also being carried out by the group at the Max Planck Institute, which last year reported very precise measurements of the transition in hydrogen using a frequency comb – again obtaining the small proton radius.

Nuclear physicists too are busy. Researchers at the Jefferson lab are currently preparing an upgrade to the Proton Radius Experiment, having in 2019 reported a small radius in 3 σ disagreement with results from other scattering efforts. The MUSE experiment at the PSI, meanwhile, is preparing to scatter both electrons and muons off protons to establish whether the two types of lepton might behave differently after all.

In fact, there are some who continue to argue for a large proton radius. Distler, whose colleagues at Mainz are also planning fresh experiments with lower systematic errors, insists that the group’s existing results are valid. He maintains that the mismatch with the muonic data might only be apparent, and that it could be due to incomplete hydrogen energy-level corrections from QED – stemming perhaps from the production of multiple particle–antiparticle pairs. “It would mean we have two values of the radius and both would be right,” he says.

Meißner, in contrast, has no doubt that the small radius is correct. He says he doesn’t want to take anything away from the CREMA researchers, describing their experiment as “really high accuracy and great stuff”. But he resents what he regards as the lack of credit given for his work beforehand, while admitting that he can’t discuss the proton radius anymore with his counterparts at Mainz. “They can take the large value of the radius to the grave,” he says. “I don’t care.”

Why sticky baseballs follow a greater curve, and beer mats make poor Frisbees

This edition of the Red Folder focuses on spinning objects flying through the air – and first up is the baseball.

Just like in particle physics, spin plays a crucial role in how a baseball is pitched. If the ball has little or no spin when it is released by the pitcher, its flight to the batter will be erratic. Called a knuckleball, this type of pitch can be very difficult for a batter to hit. If the ball has lots of spin, it will travel in a smoother trajectory, but the interaction between the air and the stitched seams of the ball will cause its motion to curve. These curveballs can also be difficult to hit.

Now, a kerfuffle has broken out in Major League Baseball about pitchers using sticky substances on their hands to increase the spin of the ball. According to the sports physicist John Eric Goff of the University of Lynchburg in the US, major league pitchers appear to have used this trick to increase the spins of their pitches by about 400 rpm. This is a significant boost of about 17% to a typical curveball that spins at about 2400 rpm.

Goff calculates that this can result in an extra displacement of about 5 cm when the ball reaches the batter – which also happens to be the thickness of a baseball bat. As a result, experienced batters are misjudging curveballs. This, Goff speculates, could be the reason why major league batters are striking out 25% of the time today, compared to just 17% of the time in 2005.
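As a back-of-envelope check (our arithmetic, on the rough assumption that the sideways break scales linearly with spin rate, and taking a typical curveball break of around 30 cm):

```latex
\frac{\Delta\omega}{\omega} = \frac{400\ \mathrm{rpm}}{2400\ \mathrm{rpm}} \approx 17\%,
\qquad
\Delta d \approx 0.17 \times 30\,\mathrm{cm} \approx 5\,\mathrm{cm},
```

about one bat-width of extra movement, consistent with Goff’s estimate.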

You can read more in an article by Goff in The Conversation.

Cardboard coasters

If you’d rather watch baseball than play it, you could find yourself sitting in a pub full of beer mats – those cardboard coasters that often stick to the bottom of your glass. During a lull in the game, you might even be tempted to flick your beer mat across the room. But unlike a Frisbee flying disc, beer mats will flip over as they spin through the air – and now three physicists at the University of Bonn in Germany have worked out why.

Using a combination of computer simulations and experiments, Johann Ostmeyer, Christoph Schürmann and Carsten Urbach found that a lifting force acts on the forward edge of a spinning disc, causing it to flip over. Frisbees don’t suffer from this instability because of their thick edges.

The trio describe their study in a preprint on arXiv.
