
RSNA 2020: AI highlights from an all-virtual annual meeting

RSNA 2020, the annual meeting of the Radiological Society of North America, showcased the latest research advances and product developments in all areas of radiology. Here’s a selection of studies presented at this year’s all-virtual event, all of which demonstrate the increasingly prevalent role played by artificial intelligence (AI) techniques in diagnostic imaging applications.

Deep-learning model helps detect TB

Early diagnosis of tuberculosis (TB) is crucial to enable effective treatment, but this can prove challenging for resource-poor countries with a shortage of radiologists. To address this obstacle, Po-Chih Kuo of the Massachusetts Institute of Technology and colleagues have developed a deep-learning-based TB detection model. The model, called TBShoNet, analyses photographs of chest X-rays taken by a phone camera.

Deep-learning diagnosis

The researchers used three public datasets for model pre-training, transfer and evaluation. They pretrained the neural network on a database containing 250,044 chest X-rays with 14 pulmonary labels, none of which was TB. The model was then recalibrated for chest X-ray photographs by using simulation methods to augment the dataset. Finally, the team built TBShoNet by connecting the pretrained model to an additional two-layer neural network trained on augmented chest X-ray images (50 TB; 80 normal).
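A minimal sketch of this kind of transfer-learning set-up is shown below. The backbone here is a generic torchvision DenseNet-121 standing in for the pretrained 14-label chest X-ray network, and the layer sizes of the two-layer head are illustrative assumptions rather than the published architecture.

import torch
import torch.nn as nn
from torchvision import models

# Stand-in for the pretrained 14-label chest X-ray network
backbone = models.densenet121(weights="IMAGENET1K_V1")
backbone.classifier = nn.Identity()      # expose the 1024-dimensional feature vector
for p in backbone.parameters():
    p.requires_grad = False              # keep the pretrained weights frozen

# The additional two-layer network trained on the augmented photographs
head = nn.Sequential(
    nn.Linear(1024, 64),
    nn.ReLU(),
    nn.Linear(64, 1),                    # single logit: TB vs normal
)

model = nn.Sequential(backbone, head)

x = torch.randn(4, 3, 224, 224)          # a batch of resized phone-photo crops
logits = model(x)                        # shape (4, 1); apply a sigmoid for probabilities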

To test the model’s performance, the researchers used 662 chest X-ray photographs (336 TB; 326 normal) taken by five different phones. TBShoNet demonstrated an area under the ROC curve (AUC) of 0.89 for TB detection. At the optimal cut-off, its sensitivity and specificity for TB classification were 81% and 84%, respectively.
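For context, the quoted sensitivity/specificity pair corresponds to choosing a single probability threshold on the ROC curve; one common choice is Youden’s J statistic, though the study’s exact criterion is not stated here. A small illustrative calculation, using placeholder data rather than the real 662 test photographs:

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Placeholder labels (1 = TB, 0 = normal) and model scores, not the study data
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3])

auc = roc_auc_score(y_true, y_score)          # the study reports 0.89
fpr, tpr, thresholds = roc_curve(y_true, y_score)
best = np.argmax(tpr - fpr)                   # Youden's J = sensitivity + specificity - 1
sensitivity = tpr[best]                       # reported as 81% at the optimal cut-off
specificity = 1 - fpr[best]                   # reported as 84%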

The team concludes that TBShoNet demonstrates how an algorithm could be developed for deployment on phones to assist healthcare providers in areas where radiologists and high-resolution digital images are unavailable. “We need to extend the opportunities around medical artificial intelligence to resource-limited settings,” says Kuo.

Algorithm predicts breast cancer risk

Researchers at Massachusetts General Hospital (MGH) have developed a deep-learning algorithm that predicts a patient’s risk of developing breast cancer using mammographic image biomarkers alone. The new model can predict risk with greater accuracy than traditional risk-assessment tools.

Analysing mammograms

Existing risk-assessment models analyse patient data (such as family history, prior breast biopsies, and hormonal and reproductive history) plus a single feature from the screening mammogram: breast density. But every mammogram contains unique imaging biomarkers that are highly predictive of future cancer risk. The new algorithm is able to use all of these subtle imaging biomarkers to predict a woman’s future risk for breast cancer.

“Traditional risk-assessment models do not leverage the level of detail that is contained within a mammogram,” says Leslie Lamb, breast radiologist at MGH. “Even the best existing traditional risk models may separate sub-groups of patients but are not as precise on the individual level.”

The team developed the algorithm using breast cancer screening data from a population including women with a history of breast cancer, implants or prior biopsies. The dataset included 245,753 consecutive 2D digital bilateral screening mammograms performed in 80,818 patients. From these, 210,819 exams were used for training, 25,644 for testing and 9,290 for validation.

The researchers compared the accuracy of their deep-learning image-only model with that of a commercial risk-assessment model (based on clinical history and breast density) in predicting future breast cancer within five years of the mammogram. The deep-learning model achieved a predictive rate of 0.71, significantly outperforming the traditional risk model’s rate of 0.61.

Eye exam could provide early diagnosis of Parkinson’s disease

A simple non-invasive eye exam combined with machine-learning networks could provide early diagnosis of Parkinson’s disease, according to research from a team at the University of Florida.

Parkinson’s disease, a progressive disorder of the central nervous system, is difficult to diagnose at an early stage. Patients usually only develop symptoms – such as tremors, muscle stiffness and impaired balance – after the disease has progressed and significant injury to dopamine brain neurons has occurred.

Fundus eye image

The degradation of these nerve cells leads to thinning of the retina walls and retinal microvasculature. With this in mind, the researchers are using machine learning to analyse images of the fundus (the back surface of the eye opposite the lens) to detect early indicators of Parkinson’s disease. They note that these fundus images can be taken using basic equipment commonly available in eye clinics, or even captured by a smartphone with a special lens.

Using datasets of fundus images recorded from patients with Parkinson’s disease and age- and gender-matched controls, the researchers trained support vector machine (SVM) classifiers to detect signs of disease in the images. They employed a machine-learning network called U-Net to segment blood vessels in the fundus images, and used the resulting vessel maps as inputs to the SVM classifier. The team showed that these machine-learning networks could classify Parkinson’s disease based on retinal vasculature, with the key features being smaller blood vessels.
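A minimal sketch of the second stage of such a pipeline is given below, assuming the U-Net’s binary vessel maps are already in hand; the feature representation and SVM settings are illustrative assumptions, not the Florida team’s exact implementation.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
vessel_maps = rng.integers(0, 2, size=(40, 64, 64))   # stand-in for U-Net vessel segmentations
labels = rng.integers(0, 2, size=40)                  # 1 = Parkinson's, 0 = matched control

# Flatten each vessel map into a feature vector and train an RBF-kernel SVM
X = vessel_maps.reshape(len(vessel_maps), -1).astype(float)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, labels)
print(clf.score(X, labels))   # in practice, evaluated on held-out patients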

“The single most important finding of this study was that a brain disease was diagnosed with a basic picture of the eye. You can have it done in less than a minute, and the cost of the equipment is much less than a CT or MRI machine,” says study lead author Maximillian Diaz. “If we can make this a yearly screening, then the hope is that we can catch more cases sooner, which can help us better understand the disease and find a cure or a way to slow the progression.”

Science must listen to opposing views

This year’s Nobel Prize for Physics celebrates a huge achievement in astronomy. Andrea Ghez from the University of California, Los Angeles, and Reinhard Genzel from the Max Planck Institute for Extraterrestrial Physics in Garching, Germany, shared half the prize for providing the first conclusive evidence of a supermassive black hole at the centre of the Milky Way. The other half went to the University of Oxford mathematical physicist Roger Penrose for his theoretical work on the origin of black holes.

Though the recognition of Ghez and Genzel’s achievements is clearly deserved, one cannot – and should not – ignore the controversies surrounding the observatories that were so critical to the discovery. Every article I read about this year’s Nobel prize failed to even hint at the debate surrounding the observatories that were key to the work. Genzel used telescopes in Chile, while Ghez worked on the W M Keck observatory on Maunakea – the most sacred place in the Hawaiian Islands.

Maunakea is short for Mauna a Wakea or “Mountain of Wakea” – Wakea being one of the progenitors of the Hawaiian people. It is the home of divine deities of the Hawaiian people and is the burial ground and embodiment of ancestors, including high-ranking chiefs and priests. To many Hawaiians and Hawaiian cultural organizations, the observatories on Maunakea are sacrilegious, destroying Hawaiian family shrines.

Currently Maunakea is home to 13 observatories, the existence of which is due to, and benefits from, the colonization of Hawaii. In 1898 the US annexed the Hawaiian Islands and made them into its territory. The Republic of Hawaii had been created by Western settlers who led a rebellion against the reigning queen of the Hawaiian Kingdom. The annexation of these lands was widely opposed in Hawaii and many US scholars still debate whether these lands were taken in proper accordance with the US constitution.

Due to these legal complexities, the lands are referred to as “ceded lands”, unlawfully and violently taken, and it is this ceded land that Maunakea is part of. “Hawaii was never acquired lawfully,” said Kealoha Pisciotta, a former employee of the Mauna Kea Observatories, in an article in the Atlantic in 2015. As leader of the Mauna Kea Anaina Hou – a group dedicated to protecting Maunakea from further development – Pisciotta added “No money can buy sacredness.”

Looking back at the history of the Maunakea observatories, however, apparently money can and has bought sacredness. In the 1970s John Jeffries, a physicist at the University of Hawaii, discussed at town hall meetings the economic advantage of further development on Maunakea to convince those who were opposed. Such developments were eventually allowed, even though the land is supposedly protected by the US Historical Preservation Act due to its significance to Hawaiian culture.

Taking a moral position

We are in a similar position today regarding the proposed Thirty Meter Telescope (TMT), which, if built, would be the largest visible-light telescope on Maunakea. For years it has faced continuing opposition from activists, including Pisciotta. Yet the TMT’s website provides little information about the valid concerns and outrage that Native Hawaiian activists feel about building the TMT on Maunakea, and fails to mention the mountain’s sacredness. Instead, the legitimate peaceful protests are called “unforeseen challenges” and the site includes vague statements about “wanting to better understand the island’s issues as well as the cultural and natural significance of Maunakea”.

There is also no mention of the videos that show astronomers failing to stop state violence at peaceful protests, or show Hawaiian elders being arrested. There is no denouncement or admonishment of Western astronomers who have made racist characterizations of Hawaiians who oppose the development. Retired astronomer Sandra Faber at the University of California, Santa Cruz, for example, wrote in an e-mail to colleagues in 2015 that the TMT has been “attacked by a horde of native Hawaiians who are lying about the impact of the project on the mountain”.

Astrophysicists – and indeed the wider physics community – must ask themselves if Western science should continue to trample over the ideas and beliefs of native people at the insistence that it is for the “greater good”. Why, after centuries of struggle, can Indigenous people still not have a say over what happens to their own home? Why is the complete refusal to respect the millennia-old sacred beliefs of a nation in favour of an experiment somehow an accepted moral position to take?

Telescopes have already been built on the graves of Hawaii’s beloved ancestors. These people have been hurt enough. As Steve Lekson, curator of archaeology at the University of Colorado Museum of Natural History, commented to the New York Times in 2014: “Given that the US was founded on two great sins – genocide of Native Americans and slavery of Africans – I think science can afford this act of contrition and reparation.” I agree. If the TMT isn’t built on Maunakea, the physics community will be just fine.

I watched Ghez’s public lecture following the announcement of the Nobel prize unhappy, but not surprised, that the controversy over the TMT was not addressed. Instead, the site’s virtues were extolled: the insistence that Maunakea is the perfect location thanks to its elevation, its lack of light pollution and the absence of turbulent air around the peak that can spoil observations.

Enough has been done to harm the beliefs of the people to whom the land belongs. There is already a deep, poisonous vein of systemic racism that runs through many scientific fields – let’s not try to cement it.

MR-Linac commissioning – improving accuracy and efficiency with BEAMSCAN MR


The commissioning and quality assurance of an MR-Linac presents medical physicists with many technical and dosimetric challenges. Having a 3D water phantom system is beneficial for saving time and for enhanced data acquisition.

In this webinar, Joshua Kim of Henry Ford Hospital will share his experience using the BEAMSCAN MR 3D water phantom for commissioning and quality assurance measurements of the ViewRay MRIdian MR-Linac. He will provide an overview of the ViewRay MRIdian system, address the technical challenges in beam data acquisition during commissioning, and explain how these can be overcome using the BEAMSCAN MR 3D water phantom.

Key topics covered in the webinar include:

  • Technical overview of the ViewRay MRIdian system.
  • Dosimetric challenges of MR-Linacs and how to overcome them using BEAMSCAN MR.
  • Current workflow for commissioning and acceptance testing, including areas for increasing efficiencies.
  • Quality assurance workflow using BEAMSCAN MR:
    – Phantom set-up for measurements.
    – Beam data acquisition – profile and PDD measurements with different detectors.
    – Small field measurements using PTW’s microDiamond detector.
    – Transmission reference detector.
    – Comparison and validation of data.
  • Routine machine QA measurements.

Don’t miss this opportunity to gain valuable tips and advice on commissioning and QA measurements on the ViewRay MRIdian MR-Linac.


Joshua Kim holds a PhD in biomedical sciences/medical physics from Oakland University. He is board-certified in therapeutic medical physics, and currently serves as a medical physicist at Henry Ford Health System, Department of Radiation Oncology, Detroit (MI), USA, where he is also responsible for commissioning and quality assurance of the ViewRay MRIdian MR-Linac. His clinical work and research interests include new modalities for simulation imaging and image-guided radiotherapy, with a focus on online adaptive radiotherapy.

Neutron-rich tantalum offers a view of how heavy elements are forged

A beam of neutron-rich tantalum ions has been created for the first time by an international team of physicists working at the KEK Isotope Separation System (KISS) at RIKEN in Japan. The feat was achieved by Philip Walker at the University of Surrey and colleagues, who used state-of-the-art isotope separation techniques to isolate and study the ions. Their research could soon shed new light on how nuclear processes in dying stars create the heavy elements we observe in the universe today.

Rapid neutron capture, also known as the “r-process”, is a series of nuclear reactions that astrophysicists believe is responsible for about half of the elements heavier than iron in the universe. The process is thought to occur in core-collapse supernovae and neutron-star mergers. It involves the successive capture of neutrons by nuclei to create neutron-rich isotopes that eventually become stable heavy nuclei.
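In schematic form, each r-process step is a radiative neutron capture, with beta decay converting the neutron-rich nucleus to the next element once captures stall – a minimal sketch in standard nuclear notation (general background, not taken from the paper):

{}^{A}_{Z}\mathrm{X} + n \rightarrow {}^{A+1}_{Z}\mathrm{X} + \gamma, \qquad {}^{A}_{Z}\mathrm{X} \rightarrow {}^{A}_{Z+1}\mathrm{Y} + e^{-} + \bar{\nu}_e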

To gain a better understanding of the r-process, physicists study short-lived, neutron-rich nuclei that are made in accelerators. Neutron-rich tantalum nuclei are of particular interest because they could offer a way of studying what happens when a nucleus acquires 126 neutrons, where a neutron shell should close and the r-process should come to a temporary halt.

Array of detectors

In this latest study, the researchers were able to isolate and study tantalum-187 nuclei, which have 114 neutrons and 73 protons. The isotope was created by firing xenon ions into a tungsten target. The collision products were stopped in high-pressure argon gas and a tuneable laser was used to ionize the tantalum. The tantalum ions were then extracted and formed into a beam that transported them to an array of detectors, which recorded the radiation emitted as the nuclei underwent radioactive decay.

The collision process can create tantalum nuclei in high angular momentum states (called isomers) and Walker and colleagues focussed on one such isomer. By looking at the radiation given off when the isomer decayed to the ground state of tantalum-187 they found that the rapidly rotating nucleus adopted both prolate (American football shaped) and oblate (squashed sphere) structures.

The team says that its results clearly show that the KISS instrument can measure the properties of heavy, neutron-rich nuclei. In future studies, the researchers will aim to investigate how the addition of more neutrons could tip the shapes of tantalum nuclei into fully oblate configurations.

They also hope to study tantalum-199, which is expected to have a closed neutron shell that would temporarily halt the r-process. If this could be achieved, the reward would be new insights into how heavy elements are forged in supernovae and other violent astrophysical events. “It now seems to be a real possibility to go further and reach uncharted tantalum-199, with 126 neutrons, to test the exploding-star mechanism,” says Walker.

The research is described in Physical Review Letters.

New family of quasiparticles appears in graphene

Researchers at the University of Manchester in the UK have identified a new family of quasiparticles in superlattices made from graphene sandwiched between two slabs of boron nitride. The work is important for fundamental studies of condensed-matter physics and could also lead to the development of improved transistors capable of operating at higher frequencies.

In recent years, physicists and materials scientists have been studying ways to use the weak (van der Waals) coupling between atomically thin layers of different crystals to create new materials in which electronic properties can be manipulated without chemical doping. The most famous example is graphene (a sheet of carbon just one atom thick) encapsulated between layers of another 2D material, hexagonal boron nitride (hBN), which has a similar lattice constant. Since both materials also have similar hexagonal structures, regular moiré patterns (or “superlattices”) form when the two lattices are overlaid.

If the stacked graphene-hBN layers are then twisted so that the angle between the two materials’ lattices decreases, the size of the superlattice increases. This causes electronic band gaps to develop through the formation of additional Bloch bands in the superlattice’s Brillouin zone (the repeating unit of reciprocal space within which the electronic energy bands are defined). In these Bloch bands, electrons move in a periodic electric potential that matches the lattice and do not interact with one another.
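The geometry behind this is captured by the standard moiré-period expression for two slightly mismatched hexagonal lattices, quoted here as general background rather than from the paper itself:

\lambda = \frac{(1+\delta)\,a}{\sqrt{2(1+\delta)(1-\cos\theta) + \delta^{2}}}

where a ≈ 0.246 nm is graphene’s lattice constant, δ ≈ 1.8% is the graphene–hBN lattice mismatch and θ is the twist angle; λ grows as θ shrinks, reaching roughly 14 nm at perfect alignment.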

Hofstadter’s butterfly

In 2013, the Manchester team led by Andrei Geim and Alexey Berdyugin, along with two independent groups at the Massachusetts Institute of Technology and Columbia University in the US, observed a stunning fractal pattern in plots of electron density versus magnetic field strength in these graphene-hBN superlattices. This pattern, known as “Hofstadter’s butterfly”, emerged when the teams determined the energy spectrum of the superlattices by measuring their electrical conductivity in strong magnetic fields of up to 17 tesla.

The Manchester researchers now report another surprising behaviour of electrons in such structures, again under strong magnetic fields. “It is well known that in a zero magnetic field, electrons move in straight trajectories and if you apply a magnetic field they start to bend and move in circles, which decreases the conductivity,” explain team members Julien Barrier and Piranavan Kumaravadivel, who carried out the experimental work. “In a graphene layer aligned with hBN, electrons also start to bend, but if you set the magnetic field at specific values, the conductivity increases sharply. It is as if the electrons moved in straight line trajectories again, like in a metal with no magnetic field anymore.”

Novel Brown-Zak quasiparticles

Such behaviour is “radically different from textbook physics”, Kumaravadivel says, and he and his colleagues attribute it to the formation of novel quasiparticles that represent a new class of metallic state. These quasiparticles are known as Brown-Zak fermions, and according to Berdyugin, they move at exceptionally fast ballistic speeds throughout the graphene-hBN structure despite the extremely high magnetic field. This is because, unlike electrons, which rotate with quantized orbits in the presence of a magnetic field, the Brown-Zak fermions follow a straight trajectory tens of microns long in magnetic fields of up to 16 T.

“Under specific conditions (that is, whenever the ‘cyclotron radius’ of the fermions is a multiple of the moiré lattice constant), we found that the fast-moving quasiparticles feel no effective magnetic field,” Barrier tells Physics World.
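In the literature, the condition for these quasiparticles to appear is usually written as a rational fraction of magnetic flux threading each moiré unit cell – given here as general background, complementing the cyclotron-radius picture in the quote above:

\phi = B\,A = \frac{p}{q}\,\phi_0, \qquad \phi_0 = \frac{h}{e}

where A is the area of the superlattice unit cell and p and q are integers; at these field values the electrons behave as if the effective magnetic field were zero.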

Implications for device engineering

The graphene used to prepare the Manchester team’s device is very pure, which makes it possible for the charge carriers within it to achieve mobilities of several million cm²/Vs. Such high mobilities imply that the charge carriers could travel straight across the entire device without scattering, and they are much sought after when fabricating 2D materials because they could make it possible to develop ultrahigh-frequency transistors. Computer processors containing devices of this type would be able to perform a greater number of calculations in the same amount of time, resulting in a faster machine.

The researchers say that the Brown-Zak fermions they observe are new metallic states that should be generic to any superlattice system, not just graphene. This makes their findings important for fundamental electron transport studies, as well as for characterizing and understanding novel superlattice devices based on 2D materials other than graphene.

Spurred on by this result, Barrier and his colleagues say they now plan to explore anomalous features of Brown-Zak fermions that do not match the Hofstadter theoretical framework. Full details of the research are reported in Nature Communications.

Could 4D MRI be a major leap forward for foetal imaging?

© AuntMinnieEurope.com

Clinicians could have better information about foetal heart health thanks to a new imaging method developed by researchers from King’s College London. In a recent paper, the team described how they used 4D MRI to measure volumetric blood flow.

To achieve 4D imaging, the researchers reconstructed multiple 3D images into cine loops that simulate foetal heartbeats. The loops allowed two cardiologists to visualize foetal blood flow, a component of heart health that can’t be easily observed with current prenatal imaging technologies.

While the MRI technique is similar to ones used for adult cardiac imaging, the team had to overcome numerous technical limitations for prenatal imaging, including correcting for foetal motion and a much faster heart rate.

The result is a “massive leap forward” for foetal cardiac MRI, according to Kuberan Pushparajah, the study’s second author and a senior lecturer in paediatric cardiology at King’s College London. “We will now be able to simultaneously study the heart structures and track blood flow through it as it beats using MRI for the first time,” Pushparajah said in a press release.

4D MRI

The research team tested the new 4D imaging method with seven foetal cases, including three healthy subjects, two subjects with right-sided aortic arches and two with other cardiac abnormalities. The subjects ranged from 24 to 32 weeks in gestational age.

When two foetal cardiologists assessed the 4D MR images, they observed pulsatile blood flow through the entire cardiac cycle. The blood flow patterns appeared as expected on both 2D and 3D visualizations, the authors noted in a paper published in Nature Communications.

The readers used the 4D images to successfully delineate 96% of 140 possible vessel segmentations. They also used the segmented regions of interest to create flow curves, which had a 97% success rate and showed pulsatile, phasic blood flow in major arteries for most subjects.

One drawback of the imaging method was that it performed relatively poorly for visualizing flow through the ductus arteriosus (DA). When the readers graded their confidence in their delineation of all structures, they gave DA the lowest score by far, indicating the vessel boundary was poorly visualized. All other structures received scores showing that at least part of the vessel boundary was clearly defined.

The trouble with the DA could be attributed to the MRI method’s limited spatial and temporal resolution, which can result in blurring of the smallest vessel boundaries. Other limitations of the technique included some inter-reader variability – for example, one reader measured slightly faster blood flow than the other.

Despite these drawbacks, the readers still observed flow fastest in the outflow tracts and slower in the inferior vena cava and superior vena cava. This demonstrates the method’s efficacy for identifying flow abnormalities, according to co-lead study author Tom Roberts.

“The results in this paper are exciting because no one has been able to look at the foetal heart using MRI in four dimensions like this,” stated Roberts, a perinatal imaging research associate at King’s College London. “Doctors can start to measure how much blood is pumped out in each heartbeat, which can be used to tell how effectively the heart is performing.”

The method still needs to be optimized and tested at different field strengths, the authors noted. They also called for studies that compare their 4D imaging method to Doppler ultrasound.

The team hopes to one day use the method to perform better diagnosis of congenital heart disease (CHD). This is particularly important because some forms of CHD are difficult to diagnose on ultrasound.

“If CHD is detected prior to birth, then doctors can prepare appropriate care immediately after birth, which can sometimes be life-saving,” stated Roberts. “It also gives parents advance time to prepare, when otherwise the CHD might have been discovered at birth, which can be very stressful.”

  • This article was originally published on AuntMinnieEurope.com ©2020 by AuntMinnieEurope.com. Any copying, republication or redistribution of AuntMinnieEurope.com content is expressly prohibited without the prior written consent of AuntMinnieEurope.com.

Ask me anything: Michelle Simmons


What skills do you use every day in your job?

There are skills and there are traits, and the most important trait is optimism. Research is hard, so every time you come into the lab you have to maintain an optimistic persona. Also, I’ve found it helps to be generally hard-working and deep-thinking. In terms of practical skills, it is important to be able to sift through lots of information and get to the heart of an issue quickly, while maintaining focus. I’ve also found it’s vital to be able to design and build things – whether building a piece of apparatus, going into a clean room to make a device, or having the ability to design something from scratch.

What do you like best and least about your job?

The thing I like doing best is understanding things and doing that in a forum where I have actually read a lot first myself and have open questions that I can brainstorm with people in my team.

Another experience you can’t beat is getting data from a device and then trying to understand it for the first time. That process of not understanding something and then finally nailing it, which can take up to two, three or four years, beats pretty much most experiences in life. I also like building teams – bringing together lots of different skill sets and techniques and then training people to learn those skill sets so we can be in control of our destiny. Executing an experiment in a really good way and then brainstorming it in the end, that’s unbeatable. The worst thing is the admin and bureaucracy. Everything’s online these days and filling in the endless forms gets in the way of the good stuff.

What do you know today that you wish you’d known when you were starting out in your career?

I wish I had known to trust my instincts more. Everyone has a gut feeling and it’s there for a reason, but you need to push that to the end to find out if it’s real or not. I also used to think that when you’re designing, building or testing things you need to have clear space – that you would need to have large blocks of time to do that. But now I’ve learned that every minute of the day matters and you can fit things in if you try – I’ve become more and more efficient the older I get. I also wish I had known how important programming was going to be. For my generation, programming was still something new, but being able to program well and do statistical analysis are two skills that are vital for the future, which will be a data-driven world.

Has the proton radius puzzle been solved at last?

A new and extremely precise measurement of the radius of the proton using hydrogen spectroscopy has been made by Thomas Udem, Randolf Pohl and colleagues at the Max Planck Institute for Quantum Optics (MPQ) in Germany. Their result is similar to a measurement made ten years ago using muonic hydrogen. The muonic result came as a big surprise because it was a significantly smaller value than much of the data published previously. But there is still disagreement amongst physicists about whether this latest measurement settles the “proton radius puzzle” in favour of the smaller value.

Spectroscopy can be used to calculate the proton’s charge radius because the energy levels of the hydrogen atom can each be expressed in terms of just two unknown parameters – the proton radius and the Rydberg constant, with the latter providing an energy scale for all of atomic physics. So just two spectroscopic measurements are required to calculate the radius. Usually these are of the most precisely known transition – between the 1S and 2S levels – and one other, such as the Lamb shift (2S-2P).
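In simplified form (lumping the known QED and finite-nuclear-size coefficients into a single constant, so this is a sketch of the idea rather than the full theory), the level energies can be written as

E_{n\ell} \simeq -\frac{hcR_\infty}{n^{2}} + \delta_{\ell 0}\,\frac{C_{\mathrm{NS}}\,r_p^{2}}{n^{3}}

where the second term is the finite-size shift that affects only S states (ℓ = 0) and falls off as 1/n³. Two measured transition frequencies, such as 1S-2S and 1S-3S, therefore give two equations for the two unknowns R∞ and r_p.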

The proton radius can also be determined by scattering electrons off gaseous or liquid hydrogen, and until a decade ago the results of all experiments agreed with each other. That agreement was expressed in the precise official figure released by the Committee on Data for Science and Technology in 2014, which is 0.8768 ± 0.0069 × 10⁻¹⁵ m, where 10⁻¹⁵ m is one femtometre (fm).

Muonic hydrogen

Four years earlier, however, a team including Pohl had obtained a new value using spectroscopy of muonic hydrogen – a hydrogen atom with a muon instead of an electron. This measurement was more precise than previous attempts and well outside the error bars, at 0.84184 ± 0.00067 fm. That difference prompted excited speculation that electrons and muons might perhaps experience different interactions with protons. However, the lower number has since been confirmed in ordinary hydrogen – via spectroscopy as well as a scattering experiment carried out at the Thomas Jefferson National Accelerator Facility in the US.

Just to complicate the picture further, researchers from the Sorbonne University and Paris Observatory in France in 2018 reported having measured the 1S-3S transition and obtaining a value in line with the official one. The latest research by Udem’s MPQ team also looks at this transition but comes to the opposite conclusion.

Paris group member Simon Thomas points out that the 1S-3S measurement, in common with those of other transitions from hydrogen’s ground state, is complicated by the need for ultraviolet radiation. The idea is to make hydrogen atoms simultaneously absorb two photons with wavelengths of 205 nm. But generating such short wavelengths involves the use of non-linear crystals or gas targets, which are very inefficient.

Frequency comb

Thomas and colleagues used a continuous-wave laser for their experiment, which makes it easier to single out the excitation frequency but results in low output powers. In contrast, Udem’s team exploited a device known as a frequency comb. This generates a broad spectrum of radiation consisting of a series of very narrow and equally spaced “teeth” – allowing frequencies to be distinguished from one another. At the same time, this spectral breadth corresponds to very short pulses in the time domain, which boosts ultraviolet intensities and improves statistics compared with continuous-wave lasers.
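For background (this is the textbook relation for any frequency comb, not a detail specific to the MPQ experiment), the optical frequency of each comb tooth is fixed by just two radio-frequency parameters:

f_n = f_{\mathrm{CEO}} + n\,f_{\mathrm{rep}}

where f_rep is the pulse repetition rate, f_CEO is the carrier-envelope-offset frequency and n is a large integer indexing the tooth.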

As they report in Science, Udem and colleagues at MPQ lowered their statistical uncertainty to just above the minimum level imposed by Heisenberg’s uncertainty principle. They also reduced several systematic effects, including noise introduced by the Doppler shift – which they did by using liquid helium to cool the hydrogen atoms down to a few degrees above absolute zero. Totting up all the possible sources of error, they achieved an accuracy nearly four times better than that of the Paris group.

Taking their new measured value of the 1S-3S frequency – a number consisting of 14 significant figures – Udem and colleagues combined it with a previous measurement they made of the 1S-2S transition. The result is a proton radius of 0.8482 ±0.0038 fm.

“Finally resolved”

Writing a commentary piece to accompany the paper, Wim Ubachs of Vrije Universiteit in Amsterdam, the Netherlands, argues that the latest result has “finally resolved” the proton radius puzzle, which, he adds, “should provide an intriguing topic for historians and sociologists of science”.

The leader of the Proton Radius Experiment (PRad) at Jefferson Lab in the US, Ashot Gasparian of North Carolina A&T State University, agrees that the proton radius puzzle is close to being solved as far as spectroscopic measurements are concerned. But he maintains that the situation is more complex regarding electron-proton scattering, pointing out that his collaboration’s results are only three standard deviations smaller than those from other modern scattering experiments. More accurate experiments are needed to solve the puzzle definitively, he argues, adding that the Jefferson Lab recently approved such an experiment – which, he says, could produce results within three years.

For Thomas, in contrast, the 1S-3S data still need to be clarified. While he thinks that the small value of the proton radius is the “more reliable” of the two, he says it is still not completely clear “what kind of systematic effects could have shifted the result” obtained by his group. To that end, he and his colleagues, in common with the MPQ group, are carrying out measurements of the 1S-3S transition in a very slightly different atom – deuterium, which contains a neutron as well as a proton in its nucleus.

Whatever the explanation, he thinks that the chances of there being new physics that dictates different interactions for muons and electrons are slim. That, he points out, is “contrary to what could have been thought when the proton radius puzzle began”.

Black hole diaries: new book unveils secrets of stellar voids

Black hole. Two rather mild and trivial words in themselves, but put them together and you have the stuff that physics, astronomy and sci-fi dreams are made of. Indeed, black holes seem to evoke similarly enthusiastic reactions from astronomers, cosmologists and laypeople alike. The mere thought of these behemoth regions of space–time – shrouded in fire, consuming everything within reach, including light – inspires interest like few other phenomena in the natural world, especially for something so deeply complex to comprehend.

It was only last month that the Nobel committee awarded the 2020 physics prize to Roger Penrose, Reinhard Genzel and Andrea Ghez for their work on black holes: Penrose “for the discovery that black-hole formation is a robust prediction of the general theory of relativity”, and Genzel and Ghez “for the discovery of a supermassive compact object at the centre of our galaxy”. So for all of you with black holes on the brain, physicist John Moffat’s latest book, The Shadow of the Black Hole, may be just the ticket. Moffat, a veteran cosmologist, is now professor emeritus of physics at the University of Toronto and a member of the Perimeter Institute for Theoretical Physics. This is his fourth popular-science book (though I am unsure if it does indeed fall into that category) and follows his notable books on particle physics, relativity and Einstein.

Moffat is perhaps best known for his work on gravity, and so it is unsurprising that this latest book is a comprehensive take on its fundamental concepts, as experienced in the most extreme of testing grounds – a black hole. In some ways, it is difficult to describe the exact focus of this book, as it nearly becomes too all-encompassing for what is essentially a 250-page pop-sci primer. You could say that The Shadow of the Black Hole mainly features and celebrates the theories, technologies and results of two of the most significant scientific experiments of this decade.

In 2016 the Laser Interferometer Gravitational-wave Observatory (LIGO) announced its seminal, first ever detection of gravitational waves, produced via the collision of two black holes. More recently, in 2019, the first direct visual evidence (a photograph, to put it simply) of a black hole and its “shadow” was taken by astronomers working on the Event Horizon Telescope (EHT). Those familiar with the work on both these experiments will already be aware of the many different concepts and theories that LIGO and the EHT are based on. These include special and general relativity as well as stellar evolution. Then there’s the interferometric techniques used in LIGO’s Michelson–Fabry–Pérot interferometers and the EHT’s very-long-baseline interferometry. There are also event horizons, singularities, Schwarzschild solutions and Hawking radiation. (I could go on, and on, and on…)

Not content with covering the vast array of these subjects, Moffat also covers thermodynamics, quantum mechanics, time travel and wormholes, particle physics and various cosmological models, and even has a chapter on alternative gravitational theories, which includes the author’s own modified theory of gravity. These extra topics, combined with the heady mix of historical background on the who and how of black-hole physics as it developed through the 20th and into the 21st century, make this anything but an easy read. Indeed, the author’s narrative style of switching swiftly from historical details to in-depth scientific explanation is often dizzying. Also, as with many science books, I didn’t appreciate chapters jumping back and forth in time, though I acknowledge it is difficult to write a purely chronological account when talking about multiple projects.

Too often I simply needed a break between chapters (if not during them), to absorb and work my way through the large amount of information I’d consumed. But, despite not being a breezy read, by the end of The Shadow of the Black Hole I was left satisfied (albeit exhausted) by the pace and detail of the book, not to mention totally caught up on all things black-hole related, right up to the most recent research.

Despite not being a breezy read, I was left satisfied and totally caught up on all things black-hole related

Apart from Moffat’s obvious and wide-ranging knowledge of black-hole and gravity research, I did enjoy the more personal moments in the book, where he talks about meetings, conferences and chats with significant people from the field, spanning his career over the last 60 years. The book’s long prologue, simply titled “LIGO”, details him and his wife Patricia visiting LIGO’s Hanford site in Washington state. The first-person narrative that runs through this chapter is sweet, if a bit over-earnest at times.

In a later chapter on the history of gravitational-wave detectors, he describes meeting Joe Weber (who in 1969 made the first, controversial claim of detecting gravitational waves using a “vibrational bar”) at an international conference in China in the late 1980s. Moffat talks about how the pair decided to go jogging together each day, taking to the streets of Shanghai at the break of dawn. “During periods of catching our breath, overlooking Shanghai’s busy harbour, we would snatch bits of physics conversation, and I talked to him about his gravitational wave experiments. He was bitter about his treatment by the physics community, and still insisted that he was right in his claim of having detected gravitational waves,” writes Moffat.

I would think twice before recommending this as light reading for someone with a general interest in science; it is better geared towards a reader already well-versed in basic black-hole physics. This book is probably the perfect primer for an undergraduate considering a future in cosmology, or for a physicist looking to get a whistle-stop update on gravity, LIGO and the EHT. For those who do persevere through The Shadow of the Black Hole, you will find yourself once more amazed by these stellar graveyards and the secrets they hold.

  • 2020 Oxford University Press 226pp £19.99 hb

Arches of chaos in the solar system, luxury watch has bits of Stephen Hawking’s desk

If we had a “Physics paper title of the year award”, the 2020 winner would surely have to be “The arches of chaos in the solar system”, which was published this week in Science Advances by Nataša Todorović, Di Wu and Aaron Rosengren. In their paper, the trio “reveal a notable and hitherto undetected ornamental structure of manifolds, connected in a series of arches that spread from the asteroid belt to Uranus and beyond”. These manifolds are structures that arise from the gravitational interactions between the Sun and planets. They play an important role in spacecraft navigation and also explain the erratic nature of comets.

The paper is beautifully written, describing the manifolds as “a true celestial autobahn,” and going on to say that they “enable ‘Le Petit Prince’ grand tour of the solar system”. And if that has not piqued your curiosity, the figures are wonderful as well – with the above image being “Jovian-minimum-distance maps for the Greek and Trojan orbital configurations”.

The luxury watchmaker Bremont has released the Hawking Limited Edition watch, which contains bits of a wooden desk once used by the late Stephen Hawking. The “exquisite chronometer” also contains pieces of a meteorite and is etched with a view of the night sky as seen from Oxford on 8 January 1942, Hawking’s place and date of birth. What is more, the serial number of the watch is printed on paper from a 1979 paper by Hawking that was co-written by Gary Gibbons.

One of these watches can be yours for as little as £7995, if you settle for the stainless-steel or “quantum” models, but the white gold model will set you back £18,995.

Interestingly, a preprint of a 1979 paper by Hawking and Gibbons fetched £3000 at auction in 2018. So, check your filing cabinets, there might be something there that you can sell to a luxury watchmaker.
