
Deep learning enables rapid detection of stroke-causing blockages

LVO detection

Strokes are life-threatening medical emergencies that require urgent treatment. They occur when part of the brain is cut off from its normal blood supply. The most common type of stroke (accounting for almost 85% of all cases) is an ischemic stroke, which is caused by a clot interrupting the supply of blood to the brain. Large vessel occlusion (LVO) strokes occur when such a blockage sits in one of the major arteries of the brain. Because LVO strokes are more severe, they demand immediate diagnosis so that the blocked artery can be reopened as quickly as possible.

In clinical practice, the most common method used to detect LVOs is an imaging modality called CT angiography, which provides clinicians with a detailed 3D image of the blood vessels in the patient’s brain. A newer technique, multiphase CT angiography, provides more information than its single-phase counterpart by acquiring cerebral angiograms in three distinct phases: peak arterial (phase 1), peak venous (phase 2) and late venous (phase 3). The main advantage of this approach is its potential to detect any lag in vessel filling, allowing clinicians to perform a time-resolved assessment.

A group of researchers led by Ryan McTaggart of Brown University has developed a tool with the potential to quickly identify and prioritize LVO patients in an emergency setting. To achieve this, they built and trained a convolutional neural network capable of classifying the presence of LVOs on CT angiograms. This is the first study to use deep learning to identify LVOs in both anterior and posterior arteries using multiphase CT angiography images. The results are published in Radiology.

A deep-learning model that can classify LVOs…

To train, validate and test their model, the researchers used a dataset of 540 subjects with multiphase CT angiography exams. Of these, 270 patients had confirmed presence of an LVO, while the other 270 were LVO-negative. Each CT scan underwent a series of pre-processing steps.

First, the researchers standardized their scans through isotropic resampling (to a voxel resolution of 1 mm3), image resizing (to 500 × 500 pixels) and intensity normalization (to values between 0 and 1). Then, they employed a vessel segmentation algorithm to increase the images’ signal-to-noise ratio. Finally, to further enhance the blood vessels, they selected the 40 most-cranial axial slices from each subject and collapsed them into a single 2D image using a technique called maximum intensity projection.
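
The authors’ code is not reproduced here, but a minimal Python sketch of the preprocessing steps described above might look like the following, assuming each phase is already loaded as a NumPy array (the vessel-segmentation step is only indicated by a comment):

import numpy as np
from scipy.ndimage import zoom

def preprocess_phase(volume, spacing_mm, n_slices=40, out_size=500):
    """volume: 3D array ordered (z, y, x); spacing_mm: voxel size along each axis in mm."""
    # Resample to 1 mm isotropic voxels
    iso = zoom(volume, zoom=spacing_mm, order=1)

    # Resize each axial slice to out_size x out_size pixels
    factors = (1, out_size / iso.shape[1], out_size / iso.shape[2])
    resized = zoom(iso, zoom=factors, order=1)

    # Normalize intensities to the range [0, 1]
    norm = (resized - resized.min()) / (resized.max() - resized.min() + 1e-8)

    # ... a vessel-segmentation step would be applied here in the published pipeline ...

    # Maximum intensity projection over the most-cranial slices
    # (assuming the cranial end of the scan sits at the end of the z axis)
    return norm[-n_slices:].max(axis=0)   # one 2D image per phase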

To evaluate the diagnostic performance of the proposed deep-learning model, the group decided to experiment with seven training strategies. In each strategy, the team used a different subset of the multiphase CT angiography data: each phase alone, or various combinations (phases 1 and 2, phases 2 and 3, phases 1 and 3, and all three phases together).
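
For illustration only, these seven strategies are simply the seven non-empty subsets of the three phases:

from itertools import combinations

phases = ["phase 1 (peak arterial)", "phase 2 (peak venous)", "phase 3 (late venous)"]
strategies = [subset for r in (1, 2, 3) for subset in combinations(phases, r)]
print(len(strategies))  # 7: three single-phase, three two-phase and one three-phase input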

… achieves high diagnostic performance

The group used a dataset of 62 patients (31 LVO-positive and 31 LVO-negative) to test their seven strategies. The model performed best when all three phases were used as the input, achieving a sensitivity of 100% (31 out of 31) and a specificity of 77% (24 out of 31). Moreover, combining the peak arterial (phase 1) and late venous (phase 3) phases, or the peak venous (phase 2) and late venous (phase 3) phases, resulted in significantly better models than using single-phase CT angiography alone.
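
Those headline figures follow directly from the quoted counts; a quick check in Python:

# Test set: 31 LVO-positive and 31 LVO-negative patients
true_pos, false_neg = 31, 0   # every positive case was detected
true_neg, false_pos = 24, 7   # 24 of the 31 negative cases were correctly classified

sensitivity = true_pos / (true_pos + false_neg)
specificity = true_neg / (true_neg + false_pos)
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")  # 100%, 77%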

False-positive prediction

The model gave good predictions across patient demographics, multiple institutions and scanners. Also, it detected both anterior and posterior circulation occlusions. “[Our model] could function as a tool to prioritize the review of patients with potential LVO by radiologists and clinicians in the emergency setting,” the researchers conclude. For future work, the group aims to evaluate the model’s clinical utility by testing it in a real-time emergency setting.

Japanese Nobel-prize-winning neutrino pioneer Masatoshi Koshiba dies aged 94

The Japanese physicist Masatoshi Koshiba, who shared the 2002 Nobel Prize for Physics for the detection of cosmic neutrinos, died on 12 November aged 94. One of the founders of neutrino astronomy, Koshiba was best known for detecting neutrinos from a distant supernova explosion using a vast detector based in a mine in central Japan. He shared half the 2002 prize with Raymond Davis Jr “for pioneering contributions to astrophysics, in particular for the detection of cosmic neutrinos”. The other half went to Riccardo Giacconi for his work on the discovery of cosmic X-ray sources.

Koshiba was born on 19 September 1926 in Toyohashi, Japan and graduated from the University of Tokyo in 1951. He then moved to the US to complete a PhD at the University of Rochester, graduating in 1955. After three years at the University of Chicago he moved back to Tokyo, where he remained for the rest of his career.

In the 1980s, Koshiba was instrumental in the construction of a neutrino detector located 1000 metres underground in a lead and zinc mine in Japan. Called Kamiokande, it was an enormous water tank surrounded by photomultiplier tubes to detect the flashes of light produced when neutrinos interacted with atomic nuclei in water molecules.

Koshiba created a legacy that will continue to drive the field forward making scientific advances for many years

Dave Wark

Although vast numbers of neutrinos are produced by the Sun, they are difficult to detect because they interact very weakly with matter. In 1967 Davis, who was then at the Brookhaven National Laboratory, built the first experiment to detect solar neutrinos. Consisting of 600 tonnes of dry-cleaning fluid in the Homestake gold mine in South Dakota, it detected less than half the flux of neutrinos predicted by widely accepted models of the Sun.

The results – known as the “solar-neutrino problem” – could be explained only if these models were wrong or if the neutrino had mass. For some two decades Davis’s detector was the world’s only solar-neutrino experiment. Then, in the 1980s, Koshiba and colleagues began taking data with their Kamiokande detector, confirming the lower-than-expected solar-neutrino flux reported by Davis and, in 1987, detecting neutrinos from a distant supernova explosion. For this work, Davis and Koshiba were awarded the 2002 Nobel Prize for Physics.

Creating a legacy

In 1998, Kamiokande’s successor – Super Kamiokande – found convincing evidence for neutrino mass in the form of oscillations between muon and tau neutrinos, which required new physics beyond the Standard Model of particle physics. This result led to Takaaki Kajita, a former student of Koshiba, sharing the 2015 Nobel Prize for Physics with the Canadian physicist Arthur McDonald for the discovery of neutrino oscillation.

In comments to the Mainichi newspaper, Kajita said that Koshiba had taught him everything about physics. “I was very lucky to be able to meet Koshiba, and it was very important for me in becoming a scientist,” he said.

Neutrino physicist Dave Wark from Oxford University in the UK, who has been involved with the Super Kamiokande experiment for the past two decades, says that it was Koshiba’s “force of will” that got Kamiokande approved in Japan and the Nobel laureate’s initiative that led to the creation of the huge photomultiplier tubes that have “been so important in the advance” of neutrino physics. “The sequence of Kamiokande leading to Super Kamiokande and now to Hyper Kamiokande has so far earned two Nobel Prizes, vindicating his original vision,” Wark told Physics World.

Wark adds that Koshiba was “a great guy to be around, filled with insight and energy and humour” and that Koshiba has “created a legacy that will continue to drive the field forward making scientific advances for many years”.

Hundreds of copies of Newton’s Principia found, how to cross stitch a black hole

I think few would dispute that Isaac Newton’s Philosophiae Naturalis Principia Mathematica is the most famous book ever written about physics. First published in 1687, the tome sets out Newton’s laws of motion and universal gravitation – which underpin much of modern physics.

Now, Caltech’s Moti Feingold and Andrej Svorenčík of the University of Mannheim have scoured the planet for first-edition copies of Principia and discovered nearly 200 more than were previously listed in a census done in 1953. This brings the total of known first editions to 386, out of 600–750 copies that are believed to have been printed.

The duo spent more than a decade tracing and studying copies of the book. By looking at ownership marks and notes scribbled in the margins of some of the books – as well as related letters and other documents – they have concluded that Newton’s masterpiece has been more widely read than previously thought.

“One of the realizations we’ve had,” says Feingold, “is that the transmission of the book and its ideas was far quicker and more open than we assumed, and this will have implications on the future work that we and others will be doing on this subject”.

Stolen copy

Some of the newly-identified copies were found “behind the Iron Curtain” in eastern European countries that were not accessible to those doing the 1953 census. The team even found a stolen copy of the book, which can fetch as much as $3m at auction, but unfortunately the owner did not act quickly enough to recover it.

You can read more in Caltech News.

As the nights draw in here in the northern hemisphere and COVID-19 restrictions become tighter, cross stitching could be the ideal hobby to keep the gloom at bay.

Physics has a lovely article called “Pixels to stitches: embroidering astronomy images”, which looks back on five decades of cross-stitching astronomical images. Author Erika Carlson explains why cross stitching is a perfect medium for capturing the beauty of space. The article is illustrated with the work of embroiderers including Adi Foord, an astrophysicist at Stanford University, who has reproduced that iconic image of a black hole captured by the Event Horizon Telescope.

Ultrafast camera breaks 3D speed record

A new camera that takes videos at record-breaking speeds of up to 100 billion frames per second in 3D has been demonstrated by researchers at the California Institute of Technology in the US. The feat was made possible by a technique known as single-shot stereo-polarimetric compressed ultrafast photography (SP-CUP), and it builds on the group’s earlier work – including a camera that takes images at 70 trillion frames per second, which is fast enough to see light travel. The latest device could prove important for biomedicine, agriculture, electronics and other fields that rely on fast, high-dimensional optical imaging.

In order to image processes that take place at the speed of light, researchers need frame rates on the order of a billion frames per second (Gfps). This is way beyond the readout speed of even the most advanced charge-coupled devices (CCDs) and complementary metal oxide semiconductor (CMOS) sensors used in most modern ultrafast photography. While devices with sub-nanosecond frame intervals have been developed, their low sequence depth (that is, the low number of frames captured per acquisition) means that they cannot be used to image luminescent and colour-selective objects such as distant stars and bioluminescent molecules.

Imaging high-speed processes in multiple dimensions poses additional challenges. Most high-dimensional optical imaging systems acquire data by scanning – that is, they capture either a one-dimensional (1D) column or a two-dimensional (2D) slice of an object in separate measurements. By combining a series of such measurements, these systems can build up higher-dimensional images of the object. This approach has its drawbacks, however, with the main one being that successive measurements in the series need to be very precise for the multidimensional picture to “stitch together” properly.

Combining previous techniques

To overcome this (and other inherent problems), imaging specialists have turned instead to single-shot high-dimensional CUP optical imaging techniques. This form of image acquisition operates in parallel, and it has improved the efficiency of CUP to such an extent that an offshoot known as single-shot temporal imaging is creating a flurry of interest in the optical imaging community. Using this technique, it is possible to capture a photon’s time of arrival without repeated measurements – a hitherto hard-to-achieve feat that could help scientists better understand fleeting phenomena in physics, chemistry and biology that are difficult or impossible to reproduce.

The single-shot stereo-polarimetric compressed ultrafast photography (SP-CUP) technique developed by Lihong Wang and colleagues combines many of these previous techniques into a single device. The combination of compressed sensing, streak imaging (a traditional 1D ultrafast imaging approach), spectroscopy and polarimetry produces a single-shot passive ultrafast imaging device that can capture non-repeatable phenomena evolving in 5D (three spatial dimensions, plus time and ψ, the angle of the linear polarization of reflected light) with picosecond temporal resolution.
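
As a rough illustration of the compressed ultrafast photography (CUP) principle that SP-CUP builds on – a sketch of the forward model only, not the authors’ reconstruction code – a dynamic scene is spatially encoded by a pseudorandom mask, sheared in time by a streak camera and integrated into a single 2D snapshot, from which the scene is later recovered as a compressed-sensing inverse problem:

import numpy as np

rng = np.random.default_rng(0)
nx, ny, nt = 64, 64, 32
scene = rng.random((nt, ny, nx))          # stand-in for the evolving scene I(x, y, t)
mask = rng.integers(0, 2, size=(ny, nx))  # pseudorandom binary encoding mask

# Temporal shearing: each encoded frame is shifted by one pixel row per time step,
# then all frames are summed into one exposure
snapshot = np.zeros((ny + nt, nx))
for t in range(nt):
    snapshot[t:t + ny, :] += mask * scene[t]

# `snapshot` is the single 2D measurement; recovering `scene` from it is done by
# sparsity-regularized optimization in the real system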

Like human vision

In Wang’s view, his team’s new camera “sees” more like humans do. “When we look at the world around us, we perceive that some objects are closer to us and some further away,” he explains. “This depth perception is possible because our two eyes each observe objects and their surroundings from a slightly different angle. The information from these two images is combined by the brain into a single 3D image.”

The SP-CUP device works in essentially the same way, and like humans, it sees in stereo. “While the camera does only have one lens, it functions as two halves that provide two views with an offset,” Wang says. “Two channels in the device mimic our eyes.” In addition, he notes that his group’s camera has an ability that no human possesses: it can sense the polarization of light waves.

Powerful tool

Wang believes that the SP-CUP’s combination of high-speed three-dimensional imagery and the use of polarization information makes it a powerful tool that may be applicable to a wide variety of scientific problems. One area of particular interest is the physics of sonoluminescence, a phenomenon in which sound waves create tiny bubbles in water or other liquids. As the bubbles rapidly collapse after forming, they emit a burst of light.

“Some people consider this one of the greatest mysteries in physics,” Wang says. “When a bubble collapses, its interior reaches such a high temperature that it generates light. The process that makes this happen is very mysterious because it all happens so fast, and we’re wondering if our camera can help us figure it out.”

Full details of the research are reported in Nature Communications.

Quantum technology: why the future is already on its way

Why do you need a quantum computer? Well, you don’t – unless someone else has one. And if they have a quantum computer, you’ll want one too. Driven by the promise of new technologies that will deliver benefits for all of society, nations around the world are investing heavily in the field. When it comes to quantum computing, no-one wants to miss out – and that desire is triggering a kind of global arms race.

While a standard computer handles digital bits of 0s and 1s, quantum computers use quantum bits, or qubits, which can exist in superpositions of 0 and 1. And if you entangle the qubits, you can solve problems that classical computers cannot. A future quantum computer could, for example, break the public-key encryption that underpins today’s common security systems in a matter of hours – a job that would take even the best supercomputer today millions of years.
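
As a toy illustration of what superposition and entanglement mean here – a plain linear-algebra sketch, independent of any particular hardware or vendor – two qubits can be driven into a Bell state whose measurement outcomes are perfectly correlated:

import numpy as np

zero = np.array([1, 0], dtype=complex)

# A Hadamard gate puts a single qubit into an equal superposition of 0 and 1
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
plus = H @ zero

# A CNOT gate acting on |+>|0> produces the entangled Bell state (|00> + |11>)/sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
bell = CNOT @ np.kron(plus, zero)

# Only the outcomes 00 and 11 ever occur, each with probability 1/2
print(np.round(np.abs(bell) ** 2, 3))   # [0.5 0. 0. 0.5]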

I’m delighted that here in the UK, construction of the new National Quantum Computing Centre will start this year

The US National Institute of Standards and Technology has already said that quantum computers could be able to crack existing public-key encryption by 2029. That prospect means businesses and governments are scrambling to improve the security of conventional networks, for example by using quantum key distribution. That’s a new market for quantum technology that’s expected to be worth anything from $214m to $1.3bn by 2024 (depending on which market survey you read).

Important investments

With all that in mind, I’m delighted that here in the UK, construction of the new National Quantum Computing Centre (NQCC) will start this year. Thanks entirely to a £93m investment from UK Research and Innovation (UKRI), the centre is being built at the Harwell lab of the Science and Technology Facilities Council in Oxfordshire. When it opens in late 2022, the NQCC will bring together academia, business and government with the aim of delivering 100+ qubit user platforms by 2025, thereby allowing UK firms to tap fully into this technology’s potential.

The NQCC is part of a £1bn, 10-year investment by the UK’s National Quantum Technologies Programme, which was launched by the UK government in 2013. It has already created a national network of quantum technology hubs in quantum sensors and metrology (Birmingham), quantum communications (York), quantum enhanced imaging (Glasgow), and quantum IT (Oxford). Seeking to develop and commercialize new technology, the hubs are part of a growing quantum industry in the UK that saw more than 30 quantum start-ups founded by the end of 2019.

The visionaries behind the programme were none other than Peter Knight – a former president of the Institute of Physics (IOP) – and David Delpy (the IOP’s current honorary treasurer). The potential benefits of quantum technology to the UK economy were discussed at an online seminar run by the IOP’s Business Innovation and Growth group, which featured the IOP’s current president elect, Sheila Rowan, as well as Knight and the UKRI’s “challenge director”, Roger McKinley.

To do something really useful with quantum computers will require significantly more than 50 qubits

Last year, researchers at Google claimed that their Sycamore processor, which has 53 superconducting qubits, was able to verify in just 200 seconds that a set of numbers was randomly distributed. The same calculation, the firm said, would take 10,000 years on IBM’s Summit machine, which was the world’s most powerful supercomputer at the time. IBM hit back, insisting that, with some clever classical programming, its machine could solve the problem in 2.5 days. Either way, Google had reached a significant milestone towards realizing the immense promise of quantum computers – “a wonderful achievement” as Knight put it. “It shows that quantum computing is really hard but not impossible,” he added.

However, to do something really useful with quantum computers will require significantly more than 50 qubits. And given that a single qubit will set you back $10,000 or more, quantum computers will become commercially viable only when the cost per qubit has dropped dramatically. What’s more, we’ll have to get round the fact that current qubit devices are super-sensitive to external disturbances, so they have to be enclosed in sealed, cryogenically cooled boxes to maintain their quantum behaviour.

It’s worth recalling that when ENIAC – the first general purpose digital computer – was released in 1945, it could do in 30 seconds what a human could do in 20 hours. But with its vacuum tubes and vast size, ENIAC was as far removed from today’s super-advanced classical devices as today’s quantum computers will be from those in 50 years’ time. That’s why the race is on to scale and deliver a practical quantum computer, with many competing platforms, technologies and companies in the running.

Towards quantum 2.0

Superconducting qubits might ultimately be replaced by something cheaper, more practical and scalable. Proper quantum computers will also require operating systems, programming languages, algorithms, input and output hardware, as well as the all-important storage and memory. That’s why I was particularly pleased to see ORCA – a UK firm – win one of the IOP’s business awards this year for developing a promising new approach to quantum-computing memory, which allows single and entangled photons to be stored and synchronized.

Another key commercial milestone took place in September, when UK firm Cambridge Quantum Computing (CQC) launched the world’s first cloud-based Quantum Random Number Generation (QRNG) service using an IBM quantum computer. CQC offers true maximal randomness or entropy, which is impossible with a classical device and vital for accurate modelling and security applications.

It’s taken us 75 years to get from ENIAC to today’s integrated microprocessors, data centres and cloud computing – all derived from “quantum 1.0” devices (semiconductor junctions, lasers and so on). Imagine a world with advanced “quantum 2.0” devices and computers 75 years from now. It’s great to see such a co-ordinated and visionary programme here in the UK right now.

  • An industry panel featuring members of the UK quantum-computing sector formed part of the IOP’s recent Quantum2020 conference.

Jupiter’s moon Europa could glow in the dark

Ice on the night-side surface of Jupiter’s moon Europa could emit a unique glow, according to US-based scientists Murthy Gudipati and Bryana Henderson at NASA’s Jet Propulsion Laboratory and Fred Bateman at the National Institute of Standards and Technology. The researchers performed lab-based experiments suggesting that the ice emits visible light when it is bombarded by high-energy particles. Their research provides important information for NASA’s upcoming Europa Clipper mission, which could offer an unprecedented glimpse of the composition of Europa’s subsurface ocean.

Europa is Jupiter’s fourth-largest moon and, as it passes through the strong magnetic field of its host planet, its surface is bombarded by high-energy protons, electrons and ions. As these particles interact with the moon’s salt- and ice-rich crust, they could trigger complex physical and chemical processes with important consequences for Europa’s chemical composition.

On geological timescales, the energy-rich products of these reactions could be transported through Europa’s crust, before entering the vast ocean of liquid water beneath. Warmed by tidal forces, this ocean is one of the most promising candidate locations for extraterrestrial life in the solar system. Determining the composition of salt on Europa’s surface should therefore provide important clues about whether life could exist in its ocean.

High-energy electrons

In their study, the trio fired high-energy electrons at water ice containing different types of salt and analysed the light emitted by their samples. They discovered that the samples emitted characteristic light spectra at visible wavelengths. They also found that emission was enhanced in ice containing the magnesium-sulphate mineral epsomite, while ice containing sodium chloride or sodium carbonate gave off much less light. On Europa’s surface, this light emission would create a unique nighttime glow, not likely to be found elsewhere in the solar system.

By mapping variations in this glow across Europa’s surface, the trio believe that scientists could determine which salts are present within Europa’s ice, and in what concentrations. Dark regions could indicate sodium- and chloride-rich surfaces, while brighter areas may give away magnesium- and sulphate-dominated surfaces. Comparing these variations with observations of Europa’s daytime side would then enable astronomers to identify specific geological features by their chemical compositions.

Gudipati and colleagues now hope that, in future missions, spacecraft flying low over Europa’s night-side surface could observe this light directly. That may soon be possible using visible-light instruments aboard NASA’s Europa Clipper mission, which is due for launch in 2025. These observations could be key to determining whether life could exist beneath Europa’s surface, and may also pave the way for analyses of other Jovian moons subjected to particle bombardment, including Io and Ganymede.

The research is described in Nature Astronomy.

Striving for diversity and inclusion in engineering, 100 years of ferroelectrics

This episode of the Physics World Weekly podcast features an interview with Carol Marsh, who was recently honoured by the UK’s Queen Elizabeth II for her work on diversity and inclusion. Edinburgh-based Marsh talks about her role as deputy head of electronics engineering at the aerospace and defence company Leonardo and about her efforts to get more women into science and engineering.

This week we also celebrate 100 years of ferroelectrics – materials that have found myriad applications including energy harvesting and night vision – and report from the UK’s annual Quantum Technology Showcase.

g-wave superconductor comes into view

strontium ruthenate crystal

Superconducting materials are traditionally classed into two types: s-wave and d-wave. A third type, p-wave, has long been predicted. Now, however, researchers in the US, Germany and Japan say they may have discovered a fourth, unexpected type of superconductor: g-wave. The result, obtained thanks to high-precision resonant ultrasound spectroscopy measurements on strontium ruthenate, could shed fresh light on the Cooper pairing mechanisms in so-called unconventional superconductors.

In conventional superconductors, electrons join up to form Cooper pairs that then move through a material without any resistance. All known superconducting materials need to be cooled to ultralow temperatures (or placed under extreme pressures) before their electrons start behaving in this way; if the process could be made to happen at higher temperatures, it would in principle allow for super-efficient power grids and circuit boards that don’t produce waste heat.

Superconducting order parameter

The Cooper pairing mechanism stems from interactions between electrons and phonons (vibrations of the material’s crystal lattice) and results in a “superconducting order parameter” that is said to have s-wave symmetry. In such s-wave superconductors, which include materials like lead, tin and mercury, the Cooper pairs comprise one electron with spin up and one electron with spin down. As these electrons move head-on towards each other, their net angular momentum is zero.

Unconventional superconductors, in contrast, exhibit d-wave superconductivity. Here, the paired electrons again have opposite spins, so the total spin angular momentum of each pair is zero; the pairs’ orbital angular momentum, however, is nonzero.

A third type of superconductor, known as p-wave, is predicted to exist between these s- and d-wave “singlet” states. p-wave superconductors carry one quantum of orbital angular momentum, and their electrons pair with parallel rather than antiparallel spins. Such “spin-triplet” materials are of interest because they could be used to create Majorana fermions – exotic particles that are their own antiparticles.

Measuring the speed of sound waves

For 25 years, the main candidate for p-wave superconductivity has been strontium ruthenate (Sr2RuO4). Several recent experiments have, however, cast doubt on this hypothesis. To investigate further, researchers led by Brad Ramshaw of Cornell University sent sound waves through a Sr2RuO4 crystal as they cooled it through its superconducting transition temperature of 1.4 K. By measuring the response of the crystal’s elastic constants to the sound waves, they were able to determine how the speed of sound changes in response to shifts in temperature.

While this high-resolution resonant ultrasound spectroscopy technique has been used before, it had never previously been tried at such low temperatures – meaning that the researchers had to build a completely new instrument. “This is by far the highest-precision resonant ultrasound spectroscopy data ever taken at these low temperatures,” Ramshaw says.

A “two-component” superconductor

The data these experiments produced indicate that Sr2RuO4 is a “two-component” superconductor, which means that the way the electrons pair up cannot be described by a single number. Instead, the description must also include a value representing the direction in which electrons pair up. This behaviour is not consistent with p-wave superconductivity, and indeed previous studies using nuclear magnetic resonance (NMR) spectroscopy had likewise suggested that Sr2RuO4 is not a p-wave superconductor.

The Cornell researchers have now backed up these findings, but they also went a step further, showing that Sr2RuO4 is in fact something else entirely: a g-wave superconductor. This means that it has a completely different type of angular momentum from either s- or d-wave materials.
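
Schematically – and only as an illustration, not a result taken from the paper – these letters label the orbital angular momentum l of the Cooper pairs, which appears as the in-plane angular dependence of the superconducting gap, for example

\Delta_s(\phi) \sim \Delta_0 \quad (l = 0), \qquad \Delta_d(\phi) \sim \Delta_0 \cos 2\phi \quad (l = 2), \qquad \Delta_g(\phi) \sim \Delta_0 \cos 4\phi \quad (l = 4),

so a g-wave gap changes sign eight times around the Fermi surface, compared with four sign changes for d-wave and none for s-wave.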

“Resonant ultrasound really lets you go in and even if you can’t identify all the microscopic details, you can make broad statements about which ones are ruled out,” Ramshaw explains. “So then the only things that the experiments are consistent with are these very, very weird things that nobody has ever seen before. One of which is g-wave, which means angular momentum 4.

“No-one has ever even thought that there would be a g-wave superconductor.”

Constructing a better theory

As a next step, the researchers say they plan to continue their search for p-wave superconductivity in other candidate materials. However, they will also continue studying Sr2RuO4. “This material is extremely well studied in a lot of different contexts, not just for its superconductivity,” Ramshaw says. “We understand what kind of metal it is, why it’s a metal, how it behaves when you change temperature, how it behaves when you change the magnetic field. So you should be able to construct a theory of why it becomes a superconductor better here than just about anywhere else.”

The team, which includes researchers from the Max Planck Institute for Chemical Physics of Solids in Germany, the National High Magnetic Field Laboratory at Florida State University and the National Institute for Materials Science in Tsukuba, Japan, report their work in Nature Physics.

Neutrons give an atomic-scale view of SARS-CoV-2 replication mechanism

Neutron scattering experiments have allowed researchers in the US to map out the precise positions of hydrogen atoms in a coronavirus protein for the first time. The approach, taken by Andrey Kovalevsky and colleagues at Oak Ridge National Laboratory, reveals key clues about how binding occurs between specific enzymes and the protein chains involved in the replication of the virus. The team’s discoveries could lead to advanced computational designs of targeted drugs.

Within the SARS-CoV-2 virus (which causes COVID-19) the information required for replication is encoded on two overlapping chains of protein molecules. To use this information, the virus must first break these chains down into individual functional proteins. This requires a specific enzyme called the main protease – which interacts with protein chains at a specific group of atoms called the “active site”. Currently, researchers are aiming to reduce the activity of these enzymes by developing inhibitor drugs that tightly bind to their active sites – preventing further reactions.

Developing these inhibitors requires detailed knowledge of the locations of hydrogen atoms within the main protease enzyme, because these atoms define the nature of the hydrogen bonds between the enzyme and the protein chains. Normally, the locations of atoms in a molecule can be determined using X-ray crystallography. That technique, however, is not sensitive to hydrogen, so the team used neutron scattering instead. Another benefit of neutron scattering is that, unlike X-ray techniques, it does not cause radiation damage to samples.

Key players

“Half of the atoms in proteins are hydrogen. Those atoms are key players in enzymatic function and are essential to how drugs bind,” explains Kovalevsky. “If we don’t know where those hydrogens are and how the electrical charges are distributed inside the protein, we can’t design effective inhibitors for the enzyme.”

Using instruments at ORNL’s Spallation Neutron Source, the researchers determined the precise structure of the virus’s main protease enzyme, and compared the result to structures obtained through X-ray crystallography. Whereas the locations of hydrogen atoms could only be inferred from the X-ray data, neutron scattering allowed Kovalevsky and colleagues to determine their positions to subatomic resolution. This enabled them to identify electrical charges, as well as intricate networks of hydrogen bonds across the enzyme’s active site.

The team is the first to determine the complete structure – including the positions of its hydrogen atoms – of a coronavirus protein. Their findings represent a crucial advance in our understanding of how the SARS-CoV-2 virus replicates, and will provide critical information for the computational design of inhibitor drugs specifically tailored to target the electrostatic environment of the main protease enzyme. If created, such drugs could become a key part of global efforts to contain the spread of the virus.

The research is described in The Journal of Biological Chemistry.

Tumour composition matters in radiopharmaceutical therapy

Accounting for the composition and mass of tumours could make dose calculations for radiopharmaceutical cancer therapy significantly more accurate, say researchers in the US. Edmond Olguin and colleagues, at the University of Florida and Radiopharmaceutical Imaging and Dosimetry, modelled how various radionuclides deposit energy in tumours of differing shape, size and tissue mineralization. They found that doses calculated for small tumours in bone are especially sensitive to such properties. The researchers say that their results should increase the effectiveness of radiopharmaceutical therapy, and are ready to be adopted by clinicians immediately.

While most forms of radiotherapy target individual tumours with precisely applied doses of radiation, radiopharmaceutical therapy is used to treat widely disseminated cancers by administering a radionuclide-labelled pharmaceutical to the patient as a whole. By choosing the right radionuclide, or by conjugating it to a certain drug, clinicians can make the radioactive agent accumulate selectively in tumours. This maximizes the dose to cancerous tissue while keeping the radiation burden elsewhere in the patient below harmful levels.

The most accurate way to calculate the received dose is to simulate the process using a radiation-transport model and a personalized digital phantom of the patient. But because this technique requires expertise and computational resources not available to most hospitals, it is most commonly used in research settings.

Instead, clinicians tend to use the Medical Internal Radiation Dose (MIRD) schema, a simpler method that classifies parts of the body as either radiation sources (where the radiopharmaceutical accumulates) or radiation targets (where there is a medical reason to measure exposure). In tumours that concentrate the radiopharmaceutical, these regions are one and the same.

When a single tumour is both source and target, the key to calculating the delivered dose is to determine the absorbed fraction – the proportion of the energy emitted within the tumour that is deposited there without escaping the region. This gives the dose per radionuclide decay, or “S-value”, which varies according to the type of particle emitted, its energy and the size of the tumour. Writing in Physics in Medicine & Biology, Olguin and colleagues show that the S-value is also sensitive to the tumour’s elemental composition, which until now has typically been modelled as soft tissue.
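
In the standard MIRD formalism (a textbook relation, not a result specific to this study), the S-value for a target region r_T irradiated by a source region r_S can be written as

S(r_T \leftarrow r_S) = \frac{1}{m_T} \sum_i E_i \, y_i \, \phi_i(r_T \leftarrow r_S),

where E_i is the energy of the i-th radiation emitted per decay, y_i its yield, \phi_i the absorbed fraction and m_T the mass of the target region; for a tumour that is its own source and target, \phi_i is the self-absorbed fraction described above.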

To investigate the effect of different tumour compositions, the researchers used a supercomputer to run the Monte Carlo N-Particle Transport (MCNP) model. They simulated photon, electron and alpha-particle irradiation of spherical and ellipsoidal tumours of various sizes and axial ratios, and with compositions ranging from 100% soft tissue to 100% mineral bone.

Considering a spherical tumour composed of soft tissue, the team found that their calculated absorbed fractions matched previous estimates to within a few percentage points for all radiation types and energies. For mineralized tumours smaller than 1.5 cm across, however, the absorbed fraction deposited by electrons was 25% larger than for soft-tissue tumours. For photons, the effect was even greater, with fully mineralized tissues absorbing as much as 71% more radiation than their unmineralized counterparts across all tumour sizes.

Olguin and colleagues also found that ellipsoidal tumours could, to a degree, be approximated as spheres. Electron absorbed fractions for spherical tumours agreed with those of their ellipsoidal counterparts to within 8%, but for photons the discrepancies were larger, exceeding 20% for tumours with ellipticities greater than about 0.98.

Because dosimetric errors are greatest in small, mineralized tumours, the researchers expect their study to be particularly useful in treating patients suffering from the metastatic spread of the bone cancer osteosarcoma. To help get their results into the clinic quickly, they compiled a look-up table presenting S-values for a range of tumour sizes and compositions and a comprehensive list of relevant radionuclides.

Edmond Olguin and Wesley Bolch

“As the S-values are simply the radiation dose per radionuclide decay, multiplying them by the number of decays gives us the tumour dose,” explains Wesley Bolch at the University of Florida, who led the study. “This database of S-values allows the complexities of tumour dosimetry to be simplified down to the product of two numbers!”
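
A hedged sketch of how such a look-up table might be used in practice – the nuclide, S-values and decay number below are placeholders for illustration only, not values from the paper:

# Hypothetical table: (radionuclide, tumour mass in g, bone mineral fraction) -> S-value in Gy per decay
s_value_table = {
    ("Lu-177", 1.0, 0.0): 2.0e-11,
    ("Lu-177", 1.0, 1.0): 2.5e-11,
}

def tumour_dose(radionuclide, mass_g, bone_fraction, n_decays):
    """Dose (Gy) = S-value (Gy per decay) multiplied by the total number of decays in the tumour."""
    s_value = s_value_table[(radionuclide, mass_g, bone_fraction)]
    return s_value * n_decays

print(tumour_dose("Lu-177", 1.0, 1.0, n_decays=1e12))  # ~25 Gy with these placeholder numbers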

The team also plan to incorporate the results into a community-developed project called MIRDcalc – a software program that will calculate organ doses in nuclear medicine and X-ray CT in a single package.
