This episode of the Physics World Weekly podcast features three physicists at McMaster University in Canada. They responded to COVID-19 restrictions on in-person learning by mailing out simple equipment so their students could do undergraduate lab experiments at home. Instead of just getting by with the new arrangements, Sara Cormier, Adam Fortais and Kari Dalnoki-Veress were delighted to find that their students learned new skills working at home and often did experiments with family members – giving physics a wider audience in the community.
Dalnoki-Veress also explains how he and his co-organizers of the Soft Matter Canada Symposium scrambled to put the event online after it was cancelled earlier this month. Again, much to the organizers’ delight, the Zoom-based event grew from a symposium into the much larger Soft Matter Canada Conference as more and more people signed up to participate. Indeed, the organizers now hope to hold the online event several times a year.
You can contact Dalnoki-Veress at kdalnokiveress@gmail.com to find out about the next Soft Matter event.
Optical fluorescence scans of excised cervical tissue, with the redox ratio of coenzymes shown in distinctive colours. Sections of healthy tissue, low-grade cancer and high-grade cancer show distinctive patterns that can be automatically evaluated for a rapid diagnostic result. (Courtesy: Dimitra Pouli, Tufts University)
A label-free fluorescence microscopy technique can detect the metabolic and structural signatures of cancer in epithelial tissue even before it develops. A team in the US and Spain used two-photon excitation fluorescence (TPEF) microscopy to map the presence of two metabolism-related coenzymes in cervical biopsy samples. They found that the distribution and ratio of the coenzymes vary with depth in a way that picks out changes in morphology associated with precancerous lesions. The researchers say that the method may ultimately be incorporated into routine screening to identify cancers early on, when treatment is most effective.
When a cancer has grown to the point at which it can be diagnosed from the symptoms that it causes, it might already be too late for successful treatment. Far better is to catch the cancer before it develops, but this means spotting the subtle biochemical changes that indicate that a cell’s metabolism has ramped up ready for proliferation.
While it is possible to measure these changes non-invasively, current techniques – nuclear medicine or MRI, for example – require dedicated imaging facilities and the injection of tracers. The alternative is to take biopsies for laboratory analysis, but tissue sampling sites are still typically chosen by visual inspection, which has low specificity and sub-optimal sensitivity, and the procedure can cause pain and other side effects.
In conventional fluorescence microscopy, specific molecules emit visible light when they are excited by higher-frequency photons. In TPEF microscopy, the target molecules emit after absorbing two relatively low-energy photons that, individually, could not trigger such fluorescence. This technique has an advantage for physiological applications because the two near-infrared (NIR) photons that excite the molecules are scattered less by tissue than higher-frequency light, allowing cells beneath the tissue surface to be imaged. The depth at which the NIR beam is focused can also be varied, giving a depth-resolved map of fluorophore distribution.
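As a rough back-of-the-envelope illustration of this energy bookkeeping (the wavelengths below are illustrative assumptions rather than values from the study), the combined energy of two NIR photons can match that of a single shorter-wavelength photon:

```python
# Illustrative two-photon energy bookkeeping (wavelengths are assumed, not from the study)
h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s

def photon_energy_eV(wavelength_nm):
    """Energy of a single photon of the given wavelength, in electronvolts."""
    return h * c / (wavelength_nm * 1e-9) / 1.602e-19

nir = 760.0          # example near-infrared excitation wavelength (nm)
one_photon = 380.0   # single-photon excitation at half the wavelength (nm)

print(f"One NIR photon: {photon_energy_eV(nir):.2f} eV")
print(f"Two NIR photons: {2 * photon_energy_eV(nir):.2f} eV")
print(f"One {one_photon:.0f} nm photon: {photon_energy_eV(one_photon):.2f} eV")
# Two 760 nm photons together supply the same ~3.3 eV as one 380 nm photon,
# which is why the fluorophore can be excited even though each NIR photon
# individually falls short.
```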
Irene Georgakoudi, of Tufts University in the US, and her colleagues exploited these advantages to quantify the presence of two molecules in samples of cervical epithelium: the reduced form of nicotinamide adenine dinucleotide (phosphate), NAD(P)H, and flavin adenine dinucleotide (FAD).
“These enzymes play an important role in several of the pathways involved in producing energy and synthesizing molecules that the cell needs to survive,” explains Georgakoudi. “The balance of the pathways that the cell utilizes to do this often changes as it becomes cancerous.”
The relative quantities of FAD and NAD(P)H therefore give a window onto cancer-related metabolic changes, but they also provide a picture of how cells are structured. Because these molecules are concentrated in mitochondria (subcellular structures found in the cytoplasm of cells, but not in their nuclei or at their borders), their presence can be used to infer the cytoplasmic-to-nuclear ratio – a comparison between the size of a cell’s cytoplasm and that of its nucleus – and the degree to which mitochondria cluster together. In healthy epithelia, both of these properties vary significantly with depth. “That is one of the markers that the cells are differentiating (maturing) as they are normally expected to do, as we move from the deeper cell layers of the epithelium to the surface,” says Georgakoudi.
In precancerous lesions, in contrast, this normal differentiation process is disrupted, and the epithelial cells display no such depth-dependent variation. The researchers found that this lack of differentiation was detectable by TPEF microscopy. Moreover, they found that the process of tissue classification could be automated by combining cell morphology and mitochondrial organization measurements with the FAD:(FAD+NAD(P)H) ratio, which also displayed less variability with depth in precancerous tissues.
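As a schematic sketch of the kind of depth-resolved metric described above (the arrays and numbers are invented stand-ins, not the team’s data or analysis pipeline), the redox ratio FAD/(FAD + NAD(P)H) can be computed slice by slice and its variation with depth summarized:

```python
import numpy as np

# Hypothetical depth-resolved fluorescence intensity stacks (depth, y, x);
# real TPEF data would come from the microscope, not random numbers.
rng = np.random.default_rng(0)
fad = rng.uniform(0.1, 1.0, size=(20, 64, 64))      # FAD channel
nadph = rng.uniform(0.1, 1.0, size=(20, 64, 64))    # NAD(P)H channel

# Optical redox ratio FAD / (FAD + NAD(P)H), averaged over each depth slice
redox = fad / (fad + nadph)
depth_profile = redox.mean(axis=(1, 2))

# Healthy epithelium is reported to vary with depth; precancerous tissue less so,
# so the spread of the profile is one simple summary statistic.
print("Mean redox ratio per depth:", np.round(depth_profile, 3))
print("Depth variability (std):", depth_profile.std())
```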
Although Georgakoudi and colleagues studied cervical epithelial tissues specifically, in which cancer is usually caused by a particular strain of human papillomavirus, they say that the same cell-morphological and biochemical patterns should be present in many epithelial cancers. To apply the technique in the clinic, however, will require advances in the delivery of high-energy pulses and improvements in image acquisition speed.
“We are starting this summer a project to develop an instrument that will enable us to test this technique in humans in the clinic within two years,” says Georgakoudi. “It will of course take a couple of years at least to go through initial testing and optimization, but there is no question that the ability to assess subtle metabolic changes in human tissues in vivo will enable new insights into the process of cancer development so that we can detect and treat it more effectively.”
There’s much ado about next to nothing, it seems, in the rarefied world of ultrahigh-vacuum (UHV) systems. Operating at pressures of 10⁻⁷ Pa and lower, UHV provides a core enabling technology for all manner of surface-science studies that rely on the interaction of photon, electron or ion beams with sample surfaces to probe their physical and chemical properties – among them X-ray photoelectron spectroscopy (XPS), low-energy electron diffraction (LEED) and secondary-ion mass spectrometry (SIMS). At the same time, UHV conditions ensure that researchers are able to study a chemically clean sample surface free from unwanted adsorbates – also a must-have requirement for thin-film growth and preparation techniques such as molecular beam epitaxy (MBE) and UHV physical vapour deposition (PVD).
Within the UHV environment, the mechanical manipulation, positioning and preparation of the sample represent a complex engineering challenge, typically requiring analytical stages that can deliver a combination of precise linear motion along three axes (xyz) as well as rotation around one or two of those axes (polar and azimuthal). A case in point is the MultiCentre family of analytical stages from UHV Design, a specialist UK developer of UHV motion and heating products. These configurable stages offer scientific users up to five axes of motion alongside options for additional control and testing of the sample, including motorization, temperature measurement, the ability to apply a voltage (sample biasing), heating to 1200 °C, and liquid-nitrogen or liquid-helium cooling.
MultiCentre applications span fundamental research in materials science, particularly at surfaces, as well as thin-film process development and quality control.
Nick Clark, UHV Design
As such, the MultiCentres are an essential building block of analytical experimental techniques for chemical and structural analysis in the fields of thin-film fabrication, semiconductor science, catalysis and nanotechnology, amongst others. “MultiCentre applications span fundamental research in materials science, particularly at surfaces, as well as thin-film process development and quality control,” explains Nick Clark, chief scientist at UHV Design. “The stages and associated accessories are a complete solution for sample manipulation and transfer as well as preparation ahead of analysis – including removal of surface contaminants, crystallization and thin-film deposition.”
Centre stage
The MultiCentre range comprises two main product lines: the general-purpose XL-T series, a compact, single-bellows stage that’s designed specifically for surface-science chambers where space is at a premium; and the XL-R series, a dual-bellows stage with a secondary shaft support and z-axis travel up to 1000 mm – a higher-end specification for surface analytical and synchrotron end-station applications that require longer travel and enhanced stability.
One of the key features of the MultiCentre stages is the motorization of any or all axes of motion, with the emphasis on user-friendly motor assembly and disassembly. “Our motors are neat, compact and easy to remove – a big advantage when it comes to bake-out and maintenance of the UHV chamber and subsystems,” Clark explains. The approach to motorization also plays out in terms of vacuum integrity and mechanical reliability.
Nick Clark: the focus on user-friendly product design underpins the MultiCentre family of analytical stages. (Courtesy: UHV Design)
Take the four- and five-axis stages, in which the polar and azimuthal axes of rotation exploit magnetically coupled drives. “The use of magnetically coupled drives is fundamentally more reliable because you’re not twisting a bellows around to give you the rotation,” Clark adds. “Ultimately, that means less chance of a vacuum leak, while the drive components are less prone to mechanical damage.”
That focus on user-friendly product design underpins the MultiCentre offering. Many traditional stage designs, for example, require the services to be coiled around the shaft. This increases the swept radius of the stage, creates potential snagging areas and, after multiple cycles, the cooling pipes can fatigue to the point of failure. In contrast, the XL-T series uses the 65 mm internal-diameter bellows bore to route all services – including liquid-nitrogen cooling coils – to yield an uncluttered, compact design at the sample stage.
“The XL-T configuration significantly reduces the swept radius and eliminates the cycling stress on the cooling system whilst freeing up space for sources and detectors to get in close to the sample on multitechnique chambers,” notes Lukasz Rybacki, senior mechanical design engineer at UHV Design. “What’s more, the option to extend functionality when required – such as the addition of extra sample parking stages – provides an economic route to future-proofing your purchase.”
Hot stuff
For the end-user, this “scalability by design” yields significant upside. Customers can choose a four-axis MultiCentre configuration for polar rotation only, or the five-axis system if azimuthal rotation is also required. The same goes for heating and cooling services. If sample heating is needed, options for resistive heating (to 900 °C) and e-beam heating (to 1200 °C) are available.
The former employs a self-supporting tantalum foil heater (for minimum outgassing and a large ratio of heated to open surface area to ensure heater longevity). The foil is also yttria-coated to provide additional robustness in oxidizing atmospheres and for protection in the event of an accidental vent. To upgrade to e-beam heating, which gives users another 300 °C of heating, simply requires a change of power supply unit. An innovative liquid-nitrogen cryomodule provides sample cooling down to –165 °C.
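Purely as an illustration of the configuration space described above (the class and field names are hypothetical and do not correspond to any UHV Design software or interface), the options can be summarized as a simple data structure:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical summary of the MultiCentre options described in the text;
# not a real UHV Design API or product configurator.
@dataclass
class StageConfig:
    series: str                 # "XL-T" (single bellows) or "XL-R" (dual bellows)
    axes: int                   # 4 = polar rotation only, 5 = polar + azimuthal
    heating: Optional[str]      # None, "resistive" (to 900 C) or "e-beam" (to 1200 C)
    ln2_cooling: bool           # liquid-nitrogen cryomodule, cooling to about -165 C

    def max_temperature_C(self) -> Optional[int]:
        return {"resistive": 900, "e-beam": 1200}.get(self.heating)

# Example: a five-axis XL-T stage with e-beam heating and LN2 cooling
config = StageConfig(series="XL-T", axes=5, heating="e-beam", ln2_cooling=True)
print(config, "max T:", config.max_temperature_C(), "C")
```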
MultiCentres provide continuous azimuthal rotation and temperature measurement even when cooling with liquid nitrogen and when heating to 1200 °C.
Lukasz Rybacki, UHV Design
“In addition,” says Rybacki, “the MultiCentres are unique in their ability to provide continuous azimuthal rotation and temperature measurement even when cooling with liquid nitrogen and when heating to 1200 °C. That functionality can help in the uniform growth and crystallization of thin films and the uniform removal of material during depth-profiling experiments.”
Finally, all XL-T and XL-R MultiCentre stages can be configured to accept the most common surface analysis sample holders, including pucks, flags and ESCA stubs. A lot of attention has also gone into the design of the sample holder to make sure it is virtually nonmagnetic and therefore compatible with low-energy analysis techniques – such as angle-resolved photoelectron spectroscopy (ARPES) – which are very sensitive to magnetism.
MultiCentre stages: versatile by design
UHV Design’s MultiCentre analysis stages are “a complete solution” for surface-science studies, combining sample manipulation, transfer and preparation – including removal of surface contaminants, crystallization and thin-film deposition. A selection of leading-edge applications is highlighted below.
A four-axis MultiCentre XL-T stage with heating, liquid-nitrogen cooling and deposition shielding supports an advanced UHV system capability at the Centre for Designer Quantum Materials, University of St Andrews, UK. Phil King and colleagues are investigating the electronic structure and many-body interactions of quantum materials using electron spectroscopy as well as creating novel designer quantum materials via atomic layer-by-layer growth. Using the XL-T, the researchers are able to prepare spin targets for investigation by their spin- and angle-resolved photoelectron spectroscopy (SARPES) system.
A Chinese research collaboration, headed up by Qing Huan and Kui Jin at the Institute of Physics, Chinese Academy of Sciences, Beijing, has developed a custom UHV facility to accelerate advanced materials discovery. Comprising a combinatorial laser MBE system and an in-situ scanning tunnelling microscope (STM), the six-chamber UHV system provides high-throughput film synthesis techniques and subsequent rapid characterization of surface morphology and electronic states of the resulting combinatorial thin films. The preparation chamber is installed with a customized MultiCentre analytical stage and an ion gun, allowing cycles of ion bombardment and annealing of the sample (up to 1200 °C).
Two highly customized MultiCentre stages are being put to use by the Nanoscale Processes and Measurements Group at the US National Institute of Standards and Technology (NIST) in Gaithersburg, Maryland. A high-temperature XL-T stage (operating at up to 1200 °C and tilting along a non-standard axis) supports crystalline thin-film growth analysis using reflection high-energy electron diffraction (RHEED), while a high-voltage (20 kV) XL-T stage is being applied in field-ion microscopy (FIM) imaging and preparation of scanning tips for UHV, cryogenic, high-magnetic-field STM studies.
A distant object that could be the smallest known black hole, or the largest known neutron star, has been spotted by the LIGO–Virgo gravitational-wave detectors. The 2.6 solar-mass object appears to have merged with a 23 solar-mass black hole, creating gravitational waves that were detected here on Earth in August 2019. Unlike a previously observed merger between two neutron stars, the event produced no electromagnetic signal. At nearly 9:1, the ratio of the masses of the two objects is the greatest ever detected by LIGO–Virgo.
LIGO and Virgo together comprise three huge interferometers – two in the US and one in Italy – that have been detecting gravitational waves from the mergers of black holes and neutron stars for nearly five years.
Neutron stars and stellar black holes are the final stages of evolution for large stars – with black holes being more massive than neutron stars. In theory, the maximum mass of a neutron star is about 2.1 solar masses. However, there is some indirect evidence that more massive neutron stars could exist. There is little evidence for the existence of black holes smaller than about 5 solar masses, leading to a mass gap in our observations of these compact objects.
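As a toy illustration of this mass-gap bookkeeping (the boundary values are simply the approximate figures quoted above, and the function is not part of any LIGO–Virgo analysis):

```python
# Toy classifier using the approximate mass boundaries quoted in the text:
# ~2.1 solar masses as the theoretical neutron-star maximum, ~5 solar masses
# as the lightest well-established black holes. Not a LIGO-Virgo analysis tool.
def classify_compact_object(mass_solar: float) -> str:
    if mass_solar <= 2.1:
        return "neutron star"
    elif mass_solar < 5.0:
        return "mass gap: heaviest neutron star or lightest black hole?"
    else:
        return "black hole"

for m in (1.4, 2.6, 23.0):
    print(f"{m} solar masses -> {classify_compact_object(m)}")

# The 2.6 solar-mass object in GW190814 falls in the gap; its 23 solar-mass
# companion is a black hole, giving a mass ratio of nearly 9:1.
print("mass ratio:", round(23.0 / 2.6, 1))
```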
Record breaking
What is intriguing about the August 2019 merger – dubbed GW190814 – is the mass of the smaller object, which appears to fall within this gap. “Whether any objects exist in the mass gap has been an ongoing mystery in astrophysics for decades,” says Charlie Hoy of the UK’s Cardiff University, who played a key role in analysing data from the detection and writing the paper that describes the observation, which has been published in The Astrophysical Journal Letters. “What we still don’t know is whether this object is the heaviest known neutron star or the lightest known black hole, but we do know that either way it breaks a record.”
LIGO Scientific Collaboration spokesperson Patrick Brady at the University of Wisconsin, Milwaukee adds, “This is going to change how scientists talk about neutron stars and black holes. The mass gap may in fact not exist at all but may have been due to limitations in observational capabilities. Time and more observations will tell.”
According to LIGO team member Vicky Kalogera of Northwestern University in the US, the large mass ratio will encourage astrophysicists to rethink models of how such binary compact objects form. “It’s a challenge for current theoretical models to form merging pairs of compact objects with such a large mass ratio in which the low-mass partner resides in the mass gap,” she says.
I think of Pac-Man eating a little dot
Vicky Kalogera
Unlike the merger of two neutron stars that was observed by LIGO–Virgo in 2017, GW190814 produced no electromagnetic radiation that could be detected alongside the gravitational waves. According to LIGO–Virgo scientists, there are three possible explanations for this. One is the great distance at which the merger occurred – 800 million light-years, about six times further away than the 2017 neutron-star merger. Another possibility is that both objects were black holes; no electromagnetic radiation has been detected from any of the black-hole mergers spotted by LIGO–Virgo. A third possible explanation is that the neutron star was “swallowed whole” by the black hole in such a way that no radiation was emitted.
“I think of Pac-Man eating a little dot,” quips Kalogera, adding “When the masses are highly asymmetric, the smaller neutron star can be eaten in one bite.”
The above video is a visualization of the period leading up to the GW190814 merger, showing the two objects and the gravitational waves they emitted.
The COVID-19 pandemic has led to a sudden increase in data sharing, multicentre image data collection, online data annotation, deep learning and the building of large repositories, according to informatics expert Peter van Ooijen, who calls for more consideration of these topics in radiology training.
“It is not ‘just’ about the deep learning itself, but also about infrastructure, legal issues, standardization, etc,” he told AuntMinnieEurope.com, noting that a multitude of initiatives from the imaging informatics side have occurred since March, although they are not always communicated as such.
“At EuSoMII [European Society of Medical Imaging Informatics], we are involved in the imagingcovid19ai.eu initiative,” added van Ooijen, who is coordinator of the Machine Learning Lab at the Data Science Center in Health (DASH) of the University Medical Center Groningen, the Netherlands.
There is a strong need to include more training sessions on imaging informatics, particularly artificial intelligence (AI), in radiology curricula, and medical students and radiologists in training require more extensive knowledge of imaging informatics issues, he said.
Peter van Ooijen.
“In the Netherlands, we are covering imaging informatics in the formal training of our residents, but also residents start their own initiatives to organize meetings on these topics,” Van Ooijen explained. “In my institution, medical students came to me asking if we could help them to learn more about AI, so they formed their own team, and at DASH, we joined up with them to increase the data science training for medical students.”
EuSoMII proposed a plan to the European Society of Radiology (ESR) that is now part of the European Diploma in Radiology (EDiR) curriculum, although no formal examination is currently available. This curriculum has a wide spectrum of topics, ranging from the standards used, such as DICOM/HL7, to the ethical issues surrounding the implementation of decision-support systems and more in-depth knowledge of deep learning.
For the ESR curriculum, EuSoMII proposed different knowledge and skill levels on imaging informatics in the formal training of radiologists – from the first year of training all the way up to a specialization in medical imaging informatics.
The impact of developments in imaging informatics on the day-to-day work of the radiology department is significant, and given the development of informatics and the implementation of an increasing number of automated software tools, the way radiologists are trained is becoming even more important, Van Ooijen and colleagues wrote in an editorial posted on 19 May by European Radiology.
“Currently, most radiologists lack knowledge and skills in the area of imaging informatics, although there is a clear will to learn about these topics,” they noted. “Studies have shown that most radiologists and residents agree that academic training in imaging informatics should be implemented, although it is also recognized that time constraints during radiology training hamper the inclusion of imaging informatics.”
Growth of radiomics
A major research trend is radiomics and texture analysis, whose popularity is due to the symbiosis it creates between high-throughput data and clinical decision-making, the authors continued.
Defined as a data-mining approach aiming to extract high-dimensional data in the form of a multitude of features from clinical images for building machine learning or statistical models, radiomics can be applied to various imaging modalities to answer relevant clinical questions in, for instance, head-and-neck masses, pancreatic fistulas, hip osteoporosis, lymph nodes and lung disease.
Illustration of radiomics workflow for various applications involving image acquisition, radiomic feature extraction, and model evaluation for diagnosis and prediction. (Courtesy: Yeshaswini Nagaraj and European Radiology)
“The successful application of radiomics depends on the different stages in image analysis such as image acquisition, feature extraction and model validation. Each stage needs to be carefully evaluated to achieve reliable construction of a model that can be transferred into clinical practice for the purposes of prognosis, disease prediction and evaluation of disease response to treatment,” they pointed out.
As part of the radiomics approach, machine-learning techniques can be employed to learn from given examples and detect hard-to-discern patterns from large and complex datasets. This approach leads to the selection of quantitative features that may not be straightforward for a human observer.
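As a highly simplified sketch of such a pipeline (the features, data and model below are illustrative stand-ins, not the radiomics toolchains discussed in the editorial), hand-crafted image features can feed a standard classifier:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stand-in "images": random 32x32 arrays with synthetic labels.
# A real radiomics study would extract hundreds of standardized features
# from segmented regions of clinical images instead.
rng = np.random.default_rng(42)
images = rng.normal(size=(100, 32, 32))
labels = rng.integers(0, 2, size=100)

def extract_features(img):
    """A few toy first-order 'radiomic' features."""
    return [img.mean(), img.std(), np.median(img), img.max() - img.min()]

X = np.array([extract_features(img) for img in images])

# Simple model plus cross-validation, standing in for the model-building
# and validation stages of the radiomics workflow.
model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, labels, cv=5)
print("Cross-validated accuracy:", scores.mean().round(2))
```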
“The performance of radiomics models fluctuates due to high-dimensionality features, although some studies report performance that exceeds that of radiologists,” the authors observed. “One of the additional advantages of radiomics is that the outcome is shown to be less susceptible to changes in the acquisition protocol.”
Not many physicists carry a gun to defend themselves against attackers provoked by their research, but that’s exactly what Wilmer Souder once felt the need to do. From 1911 he worked at the US National Bureau of Standards (NBS) in Washington, DC – today the National Institute of Standards and Technology (NIST) – eventually developing forensic techniques that helped to convict criminals. Souder was not the only forensic physicist of that era: John H Fisher, another ex-NBS physicist, invented a device essential for forensic firearms identification. Both men’s contributions were important in major criminal trials and made a sizable impact on the justice system.
Fisher worked at the independent Bureau of Forensic Ballistics, established in 1925, where he invented the helixometer to peer inside the barrel of a firearm without sawing it in half lengthwise. His patent shows the device’s optical arrangement and graduated angular scale that allowed an investigator to examine defects in the barrel and find the pitch of its rifling – the internal spiral groove that imparts a stabilizing spin to a bullet. These features leave unique marks on bullets fired from a given weapon. Along with the double microscope for side-by-side comparison of bullets, invented at that same bureau, the helixometer made it possible to link a bullet from a crime scene to a specific weapon.
Souder earned his physics PhD from the University of Chicago in 1916. One of his teachers was Albert Michelson, who won the 1907 Nobel Prize for Physics for his precision optical instruments and measurements – including the interferometry behind the famous 1887 Michelson–Morley experiment. Souder’s PhD adviser was the experimentalist Robert Millikan, who would earn the 1923 Nobel Prize for Physics for research on the photoelectric effect and the charge on the electron. Souder published two papers with Millikan, as well as his dissertation on the photoelectric effect, in Physical Review.
Initially, Souder studied dental materials at NBS, to help the US Army develop treatments for soldiers – a research award in dentistry is now named after him. But another pressing need soon arose, thanks to growing criminal activity in the 1920s. Much of this was fuelled by Prohibition, the era from 1920 to 1933 when the US banned alcoholic beverages, and criminal gangs fought viciously to control illegal bootlegging. Souder’s notebooks show that he responded by providing forensic analysis of handwriting, typewriting and bullets in more than 800 criminal cases for the Department of Justice, the Treasury Department and other agencies. As the NIST researchers discovered, this resulted in an appreciative note from FBI director J Edgar Hoover, and a gun carry permit for Souder (seen above), justified as protection for a witness in criminal trials.
These pioneering forensic approaches played roles in major cases. The historical research at NIST showed for the first time that Souder was involved in the sensational 1935 “trial of the century”, which found Bruno Hauptmann guilty of kidnapping and killing the 20-month-old son of Charles Lindbergh, famous for making the first solo flight across the Atlantic Ocean in 1927. Souder’s study of the ransom notes in the case, together with that of other handwriting experts, provided much of the evidence that put Hauptmann in the electric chair.
In 1932 Wilmer Souder was already calling for standards to be established for forensics equipment
Weapons identification was likewise essential in another world-famous trial. In 1921, two Italian-born anarchists, Nicola Sacco and Bartolomeo Vanzetti, were convicted of shooting and killing two men during an armed robbery in Massachusetts. The verdict was widely condemned as having been unjustly influenced by the prevailing anti-radical sentiment in the US. At a final review of the case in 1927, Calvin Goddard, head of the Bureau of Forensic Ballistics, testified that the helixometer and the comparison microscope unequivocally showed that one fatal bullet and a cartridge case came from Sacco’s pistol. Sacco and Vanzetti were executed but controversy continued, although modern bullet analysis has confirmed Goddard’s result.
These early forensic methods remain valuable, but forensic science in the US has lost some of its lustre. Reviews in 2009 and 2016 found that much of forensic practice has developed without the scientific rigour that would make it truly reliable in deciding guilt or innocence. The reviews called for improvements in forensic science, some of which are under way (see October 2019 p43). Souder, well-trained in scientific exactness, would have applauded these recommendations. The NIST researchers found that in 1932 he was already calling for standards to be established for forensics equipment, for precise forensic data and its detailed recording, and for stringent testing to qualify forensics experts.
Souder was also well aware of the difficulties in presenting scientific evidence to judges and juries who lacked scientific training. He used oversized aluminium models of bullets to illustrate ballistic methods, and in 1954, writing in Science, described how to be an effective scientific witness in court. The article ended with Souder’s rallying cry for the value of good forensic science that still resonates: “Justice is sometimes pictured as blindfolded. However, scientific evidence usually pierces the mask.”
Adding a single proton to a doubly magic isotope of oxygen is enough to significantly alter its properties, an international team of physicists has discovered. Led by Tsz Leung Tang at the University of Tokyo, the researchers made the unexpected discovery after removing a proton from a neutron-rich isotope of fluorine. Their work could lead to a better understanding of the complex interactions that take place between protons and neutrons within atomic nuclei.
Basic information about how protons and neutrons interact within a nucleus can be gleaned from a nuclide chart, which plots the number of protons in a nuclide against its number of neutrons. The “neutron drip line” in such a plot shows the maximum number of neutrons that an isotope of each element can contain.
One particularly striking feature of this boundary is the sharp jump in neutron number between neighbouring oxygen and fluorine, which has one more proton than oxygen. An oxygen nucleus (containing eight protons) can hold up to 16 neutrons, whereas fluorine can contain as many as 22. The reasons behind this jump are poorly understood, but researchers believe it is related to oxygen-24’s “doubly magic” nucleus, which contains extremely stable filled “shells” of protons and neutrons.
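Purely to restate the numbers quoted above in code form (this is not nuclear data from the study):

```python
# The drip-line neutron counts quoted in the article (not a nuclide database)
max_neutrons = {"oxygen (Z = 8)": 16, "fluorine (Z = 9)": 22}

for element, n in max_neutrons.items():
    print(f"{element}: up to {n} neutrons")

# Adding a single proton (oxygen -> fluorine) lets the nucleus bind six more
# neutrons - the sharp jump in the drip line discussed above
jump = max_neutrons["fluorine (Z = 9)"] - max_neutrons["oxygen (Z = 8)"]
print("Jump in the drip line:", jump, "neutrons")
```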
Valence and core
To explore the jump in more detail, Tang’s team prepared a beam of the isotope fluorine-25 at the Radioactive Isotope Beam Factory near Tokyo – which is run jointly by Japan’s national research institute RIKEN and the University of Tokyo. Fluorine-25 contains one more proton than oxygen-24 and can be thought of as an oxygen-24 core plus a single valence proton.
This latest research involved colliding fluorine-25 nuclei with a target to remove a proton. Using the SHARAQ detector, Tang and colleagues measured correlations between the motions of the collision products and found that around 65% of the resulting oxygen-24 nuclei were in an excited state. This is contrary to current theory, which predicts that the oxygen-24 core of fluorine-25 should exist in its lowest energy state.
This suggests that the addition of a single valence proton to oxygen-24 has a profound effect on the doubly magic core. Indeed, Tang’s team concluded that fluorine-25’s excited core is likely responsible for the neutron drip line’s dramatic jump – although the reasons why such significant changes can be driven by a single proton remain a mystery.
The team now aims to uncover the physical mechanisms in future experiments. If successful, this work could significantly improve our understanding of the processes that occur inside atomic nuclei – and also provide new insights into the mysterious properties of neutron-rich astronomical objects, including supernovae and neutron stars.
Left: 3D power-Doppler volumes overlaid on anatomical volumes of the myocardium in the left anterior descending artery. Right: Absolute flow velocities estimated by 4D ultrafast ultrasound flow imaging. (Courtesy: Phys. Med. Biol. 65 105013)
Decreased blood supply to the heart muscles, known as cardiac ischemia, can lead to chest pain or even heart attack. Cases of suspected ischemia are currently investigated using invasive coronary angiography (ICA), which provides both anatomical and functional assessment of the coronary vessel. ICA, however, is an invasive procedure that involves relatively rare – but potentially serious – risks for patients.
In the peripheral arteries, non-invasive, non-ionizing Doppler ultrasound imaging is used instead of angiography. But for cardiac applications, Doppler imaging is difficult, because of the rapid motion of the myocardium and the insufficient definition of conventional ultrasound.
To overcome this challenge, researchers at the French research unit Physics for Medicine (INSERM, ESPCI, CNRS, PSL University) recently introduced a method called ultrafast Doppler coronary angiography (UDCA), which uses 2D ultrafast ultrasound to visualize coronary vessels as small as 100 µm in a beating heart. They have now extended their UDCA approach to three dimensions, enabling 3D imaging and quantification of coronary blood flow in a single heartbeat.
“2D UDCA can quantify relative changes of coronary flow, for example between rest and stress states, but it cannot quantify the absolute coronary flow velocity,” explains co-senior author Mathieu Pernot. “With 3D UDCA, it’s a completely different story, as it provides an enormous amount of data at a very high volumetric rate. These data contain all the tissue and flow motion information that allows absolute flow velocity to be measured accurately in a few tens of milliseconds.”
In vivo evaluation
To assess their new 3D UDCA technique, Pernot and colleagues performed coronary volumetric blood flow imaging in vivo in open-chest swine experiments. They placed a 32×32 element ultrasound matrix-array probe on the animal’s left ventricle, in the region perfused by the left anterior descending (LAD) artery.
The team designed an ultrafast (1000 volumes/s) ultrasound sequence that images the coronary vasculature in 3D using power-Doppler imaging. They also employed vector Doppler analysis (4D ultrafast ultrasound flow imaging) to assess the absolute flow velocities. To estimate flow rates, they first used the 3D power-Doppler volumes to delineate the coronary vessel on 32 successive 2D slices. For each slice, they computed the flow rate by integrating the flux (rate of flow per unit area) over the cross-sectional area of the vessel. Finally, they averaged the flow rate over the different slices.
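As a schematic sketch of this flow-rate estimate (the array shapes, pixel size and velocity values are invented for illustration; the study’s actual processing chain is more involved):

```python
import numpy as np

# Hypothetical velocity field: 32 cross-sectional slices of a vessel,
# each a 2D grid of axial flow velocities in m/s (zero outside the lumen).
rng = np.random.default_rng(1)
n_slices, ny, nx = 32, 40, 40
velocity = np.zeros((n_slices, ny, nx))
yy, xx = np.mgrid[0:ny, 0:nx]
lumen = (yy - ny / 2) ** 2 + (xx - nx / 2) ** 2 < 10 ** 2   # circular vessel mask
velocity[:, lumen] = 0.15 * rng.uniform(0.8, 1.2, size=(n_slices, lumen.sum()))

pixel_area_m2 = (0.1e-3) ** 2   # assumed 0.1 mm pixel spacing

# Flow rate per slice: integrate the flux (flow per unit area) over the
# delineated cross-section, then average across the slices.
flow_per_slice = (velocity * pixel_area_m2).sum(axis=(1, 2))       # m^3/s
mean_flow_ml_per_min = flow_per_slice.mean() * 1e6 * 60
print(f"Estimated flow rate: {mean_flow_ml_per_min:.1f} ml/min")
```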
In their first set of experiments, the researchers imaged a small portion of the LAD artery in the hearts of five animals. They used 3D UDCA to assess coronary flows throughout the diastolic phase, in which the heart relaxes after contraction, in a single heartbeat.
Arterial flow visualization was most efficient when myocardial tissue motion was small – at early-diastole before myocardial relaxation and at end-diastole. During the mid-diastole phase, rapid tissue motion prevented accurate signal reconstruction. The researchers also used vector Doppler analysis to assess absolute blood flow velocity. They observed a maximal velocity of approximately 15 cm/s in the middle of the artery, decreasing towards the edges.
Next, the team examined reactive hyperaemia (the increase in blood flow following arterial occlusion) in five animals, after occluding the LAD artery for up to 90 s. The maximal flow velocity increased from about 12 cm/s to more than 20 cm/s during reactive hyperaemia, which corresponded well with the observed flow rate increase from about 70 to 120 ml/min.
The researchers also evaluated whether 3D UDCA can visualize a coronary stenosis (narrowing of the arteries) in three animals. They used an inflatable pneumatic cuff occluder positioned around the artery to create 30%, 50% and 70% narrowing of the proximal LAD artery.
Power-Doppler volumes overlaid on anatomical B-mode volumes show the induced stenosis. 4D ultrafast ultrasound flow imaging (right panel) shows the induced flow acceleration at the centre of the stenosis. (Courtesy: Phys. Med. Biol. 65 105013)
Overlaying power-Doppler volumes on anatomic volumes of the myocardium revealed a reduced signal in the stenosis region, demonstrating the reduction in the epicardial diameter. Vector Doppler analysis revealed a significant flow acceleration in the centre of the stenosis, with maximal velocity of approximately 20 cm/s.
Comparing flow rates estimated by 3D UDCA with measurements from a gold-standard, invasive coronary flowmeter (placed close to the ultrasound probe) revealed good agreement during baseline, reactive hyperaemia and coronary stenosis.
Clinical potential
Writing in Physics in Medicine & Biology, the researchers conclude that 3D UDCA could have major potential as a new non-invasive tool to measure coronary flow at the patient’s bedside.
“We envision several important clinical applications for diagnosis and management of coronary artery diseases,” says Pernot. “One could be estimation of the coronary flow reserve, an important parameter for clinical decision making in coronary intervention that’s currently obtained by catheterization under ionizing imaging modalities. Because of its high sensitivity, 3D UDCA could also be used to diagnose coronary microvascular disease, which is challenging with current imaging modalities.”
The team is now working to translate 3D UDCA for human use. They note that the open-chest configuration used in this proof-of-concept study provided optimal imaging conditions, while clinical translation will require more challenging trans-thoracic or trans-oesophageal imaging. Another limitation is the small region that can be imaged, which restricts the field-of-view to a small part of a large coronary artery. “We are currently developing new approaches that could image the coronary vasculature of the entire heart,” Pernot tells Physics World.
Sense and superiority: Stephen Wolfram in July 2008. (Courtesy: Stephen Wolfram’s PR team/Stephen Faust)
I need to start this review by saying that I loved the premise of this collection of essays by the physicist-turned-computer scientist Stephen Wolfram, who is chief executive of the Wolfram Group. Promising “surprising and engaging intellectual adventures”, the cover blurb of Adventures of a Computational Explorer teases “science consulting for a Hollywood movie, solving problems of AI ethics, hunting for the source of an unusual polyhedron, communicating with extraterrestrials” and even “finding the fundamental theory of physics and exploring the digits of pi”. What fun!
From supporting the production of the 2016 film Arrival by exploring how alien spacecraft might work, to considering how humanity might best leave behind a message for other civilizations (one option being the Wolfram computational language, of course), the opening chapters are perfectly pitched for the general reader. They’re all written in Wolfram’s compelling and, at its best, charmingly avuncular style. Some later chapters also deliver well on the advertised concept – covering topics such as computationally analysing the Facebook data of consenting Wolfram customers, and playfully imagining what kind of tech might be made by combining four current buzzwords (to form “Quantum Neural Blockchain AI”).
Unfortunately, though, a key flaw of the book is that it lacks cohesion – perhaps as a result of being a compendium of seemingly loosely edited pre-existing essays. Repetition abounds, and I have no idea who the target audience for Adventures of a Computational Explorer is, with the popular accessibility of the initial chapters giving way to those that presume existing knowledge of specialist acronyms such as QCD (quantum chromodynamics), UDP (user datagram protocol) and TCP (transmission control protocol). A quick intro chapter to some of Wolfram’s key themes (computational irreducibility, cellular automata and the main Wolfram Group products) would also have provided a welcome explainer – especially given that, for Wolfram, the latter are the solution to almost all issues. If you must write a book that serves as stealth advertising for your software suites, you might as well explain what they each do, clearly, and in the first instance.
These issues might have been avoided with a stronger editorial hand – but one can imagine why this wasn’t delivered by the publisher, Wolfram Media. A more involved editor might also have reined in Wolfram’s predilection not only for promoting his products but also himself. This tendency is exhibited to such an extent that I eventually found it thoroughly off-putting. One might forgive the odd indulgence, but not multiple chapters devoted to, for example, his particular approaches to work and preferred methods of file organization.
At one point, Wolfram recapitulates his life through the lens of technology and artefacts from his considerable personal archive, beginning with a glowing elementary school report from 1967 – making for rather nauseating reading. For the reader not put off by this particular display of self-indulgence, the following chapter – “Things I learned in kindergarten” – goes further, relating tales of a six-year-old Wolfram, presumably still in knee-high socks, outsmarting adults and already realizing “obvious” things that his peers were simply incapable of.
This sentiment of superlativeness is Adventures of a Computational Explorer’s most unappealing leitmotif. Wolfram “independently came up with” data hashing functions at the age of 13 (p321); became a “card-carrying physicist” as a mere teenager; and gathered “what is probably one of the world’s largest collections of personal data” (p351). Wolfram also claims that when it comes to conceptualizing networks to represent physical space, many other physicists “haven’t quite reached the level of abstractness that [he is] at” (p29), and adds that his idiomatic ideas on how fundamental physics works just “aren’t yet mainstream” (p24).
If you must write a book that serves as stealth advertising for your software, you might as well explain what they each do clearly
The Wolfram language, meanwhile, is said to provide “a compressed representation…of the core content of our civilization” (p58), while mobile-phone-jingle-generating Wolfram Tones has “surpassed our species in musical output” (p159). While one has no doubt that many, if not all, of these assertions are accurate – Wolfram is clearly extraordinarily talented and accomplished – they need not all be expressed.
In other sections, Wolfram’s unabashed braggadocio takes on a more personal tone. The sixth chapter, originally written in 2016, pauses to take a seemingly random and unprofessional swipe at the noted theoretical physicist Richard Feynman who, according to Wolfram, “came to a bunch of the meetings I had to discuss the design of the SMP [Symbolic Manipulation Program], offering various ideas – which I had to admit I considered hacky”. A later chapter reveals that Feynman was an examiner during Wolfram’s thesis defence in 1979, and the pair apparently had what Wolfram (euphemistically?) calls a “rather spirited discussion”. One can’t help but wonder if there isn’t a little grudge there. The rest of that chapter, meanwhile, is devoted to demonstrating that Wolfram’s record of being the youngest person to graduate from the California Institute of Technology had not been superseded by a close contender.
This, really, is the crux of the problem with Adventures of a Computational Explorer. There was a great book to be cooked up here, but the meat of it has been completely drowned by the sauce of Wolfram’s unbridled and embarrassing self-promotion. Give this one a miss.
Why is it important to study the thermodynamics of D-Wave quantum processors?
Michele Campisi: Quantum technology has just entered the “noisy intermediate-scale quantum” (NISQ) era. This is characterized by the ability to fabricate hardware with hundreds or even thousands (in the case of D-Wave) of qubits, but also the inability to control the qubits with the degree of accuracy and fidelity that is necessary to accomplish fault-tolerant quantum computations.
One of the main obstacles on the path to fault-tolerant quantum computations is noise – for example, the thermal noise that results from the interaction of the qubits with the substrate on which they are patterned. Understanding and quantifying the thermal phenomena that accompany the operation of a NISQ device is therefore crucial in the present stage of their development.
What is reverse annealing?
Lorenzo Buffoni: In a generic annealing process, you slowly drive a quantum device so as to change in time the Hamiltonian that describes its dynamics. In forward annealing you start from some Hamiltonian, call it Hx, and end up at some other Hamiltonian, say Hz, that does not commute with Hx. The presence of non-commuting terms during the evolution results in purely quantum phenomena, such as quantum tunnelling.
In reverse annealing you start with Hz; then turn on an external control field to introduce an Hx term in the Hamiltonian (thus causing quantum dynamics); and then you go back to Hz. Such reverse annealing protocols have recently been added to the set of operations that can be run on D-Wave because they can be used to perform local searches in the neighbourhood of a solution encoded in an eigenstate of Hz. We chose to do reverse annealing because it is the only choice for which the D-Wave interface gives you full freedom to prepare any eigenstate of your initial Hamiltonian (Hz, in our case), and that was a crucial requirement for our experiments.
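As a toy numerical sketch of the idea (a single qubit with Hx and Hz taken as Pauli matrices and an arbitrary schedule; this is not the D-Wave hardware schedule or the authors’ protocol):

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices standing in for Hz and Hx; they do not commute
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(s):
    """Interpolating Hamiltonian: pure Hz at s = 1, mixed with Hx for s < 1."""
    return (1 - s) * sx + s * sz

# Reverse-anneal-style schedule: start at s = 1 (an Hz eigenstate), dip to s = 0.4
# to switch on the non-commuting Hx term, then return to s = 1.
schedule = np.concatenate([np.linspace(1.0, 0.4, 50), np.linspace(0.4, 1.0, 50)])
dt = 0.05

psi = np.array([1.0, 0.0], dtype=complex)      # start in one eigenstate of Hz
for s in schedule:
    psi = expm(-1j * H(s) * dt) @ psi          # piecewise-constant time evolution

# Population transferred to the other Hz eigenstate: nonzero only because the
# Hx term drove genuinely quantum dynamics during the excursion away from s = 1.
print(f"Population transferred between Hz eigenstates: {abs(psi[1])**2:.3f}")
```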
How did you perform your experiments using D-Wave’s Leap service?
LB: With D-Wave’s Leap service, virtually everyone can access one of the quantum annealers that D-Wave hosts locally via a cloud service. The devices can be programmed using D-Wave’s own APIs, which are easy to embed in a Python script. The APIs are well documented, and D-Wave provides a variety of examples and demos to get started.
Once you have programmed your own experiment, the script running on your computer automatically connects to the selected D-Wave quantum annealer, runs your programs on it, and gives the program output back to your computer, all in a matter of seconds. For those interested in the implementation details of our experiments and/or willing to try out this service, we have open-sourced our code.
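For a flavour of what such a script looks like (a minimal sketch assuming D-Wave’s open-source Ocean SDK and a configured Leap account; the problem and parameter values are placeholders, not the authors’ published code):

```python
# Minimal sketch assuming D-Wave's Ocean SDK and a configured Leap account;
# the tiny Ising problem below is a placeholder, not the experiment itself.
from dwave.system import DWaveSampler, EmbeddingComposite

# Two coupled logical spins with local fields
h = {"a": -1, "b": 1}
J = {("a", "b"): -0.5}

# EmbeddingComposite maps the logical problem onto the chip's physical qubits;
# Leap credentials are read from the local Ocean configuration.
sampler = EmbeddingComposite(DWaveSampler())
sampleset = sampler.sample_ising(h, J, num_reads=100)
print(sampleset.first)

# Reverse-anneal runs additionally pass solver parameters such as
# anneal_schedule, initial_state and reinitialize_state, which specify the
# dip in the schedule and the classical state the qubits start from.
```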
You conclude that the D-Wave system acts as a “thermal accelerator” – what do you mean by this?
LB: In order to characterize the device from a thermodynamic point of view, we prepared it in a hot (high-temperature) state. We then studied how much energy it exchanges in the form of work – exchanged with the external electrical control fields – and in the form of heat, which is exchanged with the cold chip substrate.
Thermodynamics allows only four possible ways for this to occur (see the sketch after this list):
The device gives away energy both to the work source and to the cold thermal source: that is what standard heat engines do
The device receives energy both from the work source and from the cold substrate: in that case it would function as a refrigerator
Both the device and the cold substrate gain energy from the work source, so the device operates as a “heater”
The hot device loses energy to the cold substrate, while the work source spends energy to accelerate that natural energy flow, hence the expression “thermal accelerator”
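As a toy illustration of the four regimes just listed (the sign convention and the classification function are chosen here for illustration, and are not the bounds or estimators used in the study):

```python
# Toy classification of the four regimes listed above. Positive values mean
# energy flowing INTO the device; the boundary between "heater" and "thermal
# accelerator" follows the verbal descriptions in the list, not the paper's
# formal definitions.
def classify(work_in: float, heat_in_from_cold: float) -> str:
    delta_u = work_in + heat_in_from_cold      # net change in the device's energy
    if work_in < 0 and heat_in_from_cold < 0:
        return "heat engine: device gives energy to work source and cold substrate"
    if work_in > 0 and heat_in_from_cold > 0:
        return "refrigerator: device receives energy from work source and cold substrate"
    if work_in > 0 and heat_in_from_cold < 0 and delta_u > 0:
        return "heater: work source heats both device and cold substrate"
    if work_in > 0 and heat_in_from_cold < 0 and delta_u < 0:
        return "thermal accelerator: work speeds up the natural hot-to-cold flow"
    return "outside the four listed regimes"

# Example: the hot device sheds more heat to the cold substrate than it gains
# as work, so its own energy drops
print(classify(work_in=1.0, heat_in_from_cold=-3.0))
```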
MC: It is important to stress that precisely quantifying the actual heat and work involved during the operation of the D-Wave processor is a challenging and as yet unsolved problem. However, by using recent results in non-equilibrium thermodynamics, we were able to put quantitative bounds on the heat and work, and that was sufficient to tell us that thermal acceleration was occurring. Due to its generality, our method can be used in the thermodynamic study of other quantum devices as well.
Does your research have any implications for how D-Wave systems are used to solve problems?
MC: Our work suggests that in the specific case of D-Wave, thermal noise might indeed be beneficial. In quantum annealing you want the processor to follow a path towards a specific target state. It is very much like having to walk from A to B while holding a pendulum, where you want to reach B without setting the pendulum swinging. D-Wave’s strategy is to achieve that by walking very slowly.
Imagine, however, that you are now walking through a very viscous fluid: you spend more energy to walk through it, but it is easier to prevent the pendulum from oscillating, because oscillations are quickly damped. That is indeed what we observed in our experiments. We have thus learned that in designing optimal annealing paths, it might be useful to go through the more “viscous” regions: you pay an energetic cost to traverse them, but you might be able to do that faster, and more reliably. Hopefully, our work will trigger further investigation in this new direction.