Electronic waste is an increasingly serious problem, not least because it is hard to recycle the silicon-based components that make up the bulk of consumer electronic devices. Researchers at Duke University in the US have now taken a step towards remedying this by creating the first fully recyclable transistors made from carbon-based “inks” that can be printed on paper or another environmentally friendly substrate. While these devices are unlikely to replace their silicon cousins any time soon, they could find their way into specialist applications such as environmental sensors or biomedical sensing patches relatively quickly.
Electronics containing carbon-only components could be ideal for making printable, recyclable devices. Semiconducting carbon nanotubes (rolled-up sheets of carbon) and conducting graphene (a sheet of carbon just one atom thick) are both good candidates for making such components. Another carbon-based compound, cellulose, has previously been employed as both a substrate and dielectric, and has the advantage of being both naturally biodegradable and the most abundant polymer on Earth.
All-carbon printed electronics, however, are few and far between because there aren’t many carbon-based dielectrics that can be processed in solution – a prerequisite for printing devices. While cellulose paper can work as a dielectric in high-power transformers, crystalline nanocellulose is generally used as a binder or in conjunction with another material that has a high dielectric constant, not as a standalone printable dielectric.
All-carbon transistors
To make its all-carbon transistor, the Duke team led by Aaron Franklin put all these ingredients together for the first time, using crystalline nanocellulose extracted from wood fibres as a dielectric; carbon nanotubes as a semiconductor; and graphene as a conductor. After making inks from these three components, the researchers showed they could directly print them onto a paper substrate using a technique called aerosol jet printing at room temperature. By adding salt (sodium chloride) to the dielectric, they obtained devices with an on-current of 87 μA/mm and a subthreshold swing of 132 mV/dec, values that will allow the transistors to be employed in a wide variety of applications.
The team showed that their transistors could be recycled by submerging them in a series of ultrasonic baths and then centrifuging the resulting solution. They recovered the CNTs and graphene with over 95% efficiency, then reused the materials to reprint new transistors. The nanocellulose, which is naturally biodegradable, can also be recycled, as can the paper substrate.
Nanocellulose ink hits the spot
Franklin points out that nanocellulose has been used to make recyclable packaging for years. However, while its potential applications as an insulator in electronics were widely understood, nobody had figured out how to use it in a printable ink before.
Demonstrating such a fully recyclable printed transistor is, he adds, a first step towards using the technology to make simple commercial devices. Two possibilities for early devices include environmental sensors (to measure the energy consumption of a building, for example) or customized biosensing patches for monitoring medical conditions. Indeed, the researchers, who detail their work in Nature Electronics, have already used their transistor to make a fully printed, paper-based biosensor for detecting lactate.
The researchers hope their results will increase interest in areas of materials and electronics research that focus on recyclability and novel fabrication techniques such as printing. “We now plan to further improve on the ink formulations and resultant performance of these printed electronics, reduce dependence on harsh chemicals in the process, and study a variety of applications these devices make possible,” Franklin tells Physics World.
A new explanation for how white dwarf stars explode as type Ia supernovae (SNe Ia) has been proposed by astrophysicists in Brazil and Mexico. Their model suggests that the explosions are ignited when primordial black holes (PBHs) collide with white dwarfs.
PBHs are hypothetical black holes that are about as massive as an asteroid and are believed to be left over from the universe’s earliest moments. PBHs are also candidates for dark matter, so the model provides a potential link between SNe Ia observations and dark matter.
The research was done by Heinrich Steigerwald at the Centre for Astrophysics and Cosmology of the Federal University of Espírito Santo, Brazil, and Emilio Tejeda at the Institute of Physics and Mathematics, Universidad Michoacana de San Nicolás de Hidalgo, Mexico.
White dwarfs are dense stars at the end of their lifetimes that no longer undergo nuclear fusion. If a white dwarf accretes material from a companion star, its mass will increase until it reaches the Chandrasekhar limit – about 1.4 solar masses. At this point fusion switches on in a runaway process that causes a spectacular SN Ia explosion. The merger of two white dwarfs can also result in a type Ia supernova, and astrophysicists have discovered that “sub-Chandrasekhar” white dwarfs below 1.4 solar masses can ignite as well.
Mystery ignition
Although SNe Ia are routinely observed – and used to measure distances in the universe – astrophysicists do not understand exactly how the explosions are ignited.
“We know that white dwarfs exist, and current models describe them in a satisfactory manner. We also know that SNe Ia arise from the explosion of white dwarfs, though what causes [the explosion] remains a mystery,” Steigerwald tells Physics World. “We have absolutely no idea if these asteroid mass PBHs exist yet, but if they do then our study argues that they are the reason for SNe Ia explosions.”
A PBH with a mass of about 10²⁰ kg would be about one micron in size. If such an object were to encounter a white dwarf, Steigerwald and Tejeda say that it would be accelerated by the gravitational influence of the star to around a 20th of the speed of light. At such an incredible speed, Steigerwald says the black hole would cut through the white dwarf like a bullet through butter – which could lead to the detonation of the star.
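To get a feel for these numbers, here is a rough back-of-the-envelope check in Python (an illustration only, not the authors' calculation), assuming a relatively massive, compact white dwarf of about 1.2 solar masses and 2000 km radius:

```python
# Order-of-magnitude check of the figures quoted above (not the authors' model):
# the speed a PBH gains falling onto a white dwarf, and the PBH's Schwarzschild diameter.
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg

M_wd = 1.2 * M_sun  # assumed white-dwarf mass
R_wd = 2.0e6        # assumed white-dwarf radius, 2000 km
m_pbh = 1e20        # primordial black hole mass, kg

# Speed gained falling from rest at large distance to the stellar surface: v = sqrt(2GM/R)
v = math.sqrt(2 * G * M_wd / R_wd)
print(f"infall speed ≈ {v:.1e} m/s ≈ c/{c / v:.0f}")    # a few per cent of c

# Schwarzschild diameter of the PBH: d = 4Gm/c^2
d = 4 * G * m_pbh / c**2
print(f"Schwarzschild diameter ≈ {d:.1e} m (~0.3 µm)")  # sub-micron scale
```

With these assumptions the infall speed comes out at a few per cent of the speed of light, in line with the figure quoted by Steigerwald, and the hole itself is a fraction of a micron across.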
Their model predicts that ignition takes place within a tiny region no larger than a square millimetre in the wake of the PBH as it passes through the white dwarf. “The rest is well-known physics,” Steigerwald explains. “Once ignited, the white dwarf star explodes within a few seconds. Most of the carbon and oxygen is fused to heavier elements, some of which are radioactive.”
If the mechanism suggested by the team can be verified it could also support the idea that dark matter comprises primordial black holes. This is because the abundance of observed SNe Ia corresponds to the predicted abundance of dark matter.
“Remarkable coincidence”
“Our work has shown that asteroid-mass PBHs with masses of between 10¹⁹ and 10²³ kg can ignite SNe Ia from white dwarfs,” he explains. “If the mass of PBHs is about 10²⁰ kg, then fortuitous encounters with white dwarfs can account for both the [observed] rate of SNe Ia – roughly one per galaxy per century – and the brightness distribution of these events.” “This is quite a remarkable coincidence,” he adds.
Another important aspect of the research is that it provides a mechanism for the ignition of sub-Chandrasekhar white dwarfs – something that had not been known before.
“I really like this suggestion and the highly creative out-of-the-box thinking of [Steigerwald and Tejeda],” says Robert Fisher at the University of Massachusetts Dartmouth. He adds that though the model should be treated cautiously, it could help place constraints on primordial black holes.
Steigerwald says that moving the research forward will require two things. One is success in the search for PBHs: “In the next decades, space-based gravitational-wave observatories, like the upcoming Laser Interferometer Space Antenna (LISA), could observe the stochastic gravitational-wave background that these primordial black holes would produce during their formation in the early universe”.
He also points out that powerful numerical simulations of the ignition process are required. “The ultimate confirmation of the mechanism to work out or fail would be its demonstration with full reactive numerical simulations in three dimensions. This is a very difficult task, but I am aware of some [research] groups that I believe have the capacities to do it.”
The research is described in a preprint on arXiv and a paper is expected to be published in Physical Review Letters.
This Sunday, 16 May, is the UNESCO International Day of Light so this episode of the Physics World Weekly podcast focuses on the humble photon and some of the amazing science and technology that it makes possible.
Our first guest is the astronomer Megan Tannock of Canada’s University of Western Ontario, who talks about brown dwarfs – objects that are too small to be stars, but are larger than planets. She explains how researchers observe the weather on brown dwarfs to determine how fast the objects are spinning – which turns out to be very fast, according to a recent study by Tannock and colleagues. She also talks about whether brown dwarfs could have planets of their own and whether some of these planets could harbour life.
From the dim worlds of brown dwarfs to one of the brightest lights on Earth, our next guest is Colin Danson, who does plasma research at the Orion laser facility in the UK. Orion is in the world’s top 10 for laser energy and power and can create matter in extreme states – so it can be used to simulate the dense cores of brown dwarfs and even exploding stars. Danson talks about some of the exciting research done at the facility today and looks forward to the next generation of high-powered lasers.
Finally, Physics World editors chat about the International Day of Light and highlight some of their favourite light-related research that we have reported recently.
The Elekta Unity MR-Linac is among a new generation of MR-guided radiotherapy (MR/RT) systems transforming patient care and treatment outcomes in the radiation oncology clinic. Think online image guidance and adaptive radiotherapy tailored to the unique requirements of each patient – adjusting radiation delivery “on the fly” to address the daily variation in the tumour and surrounding healthy tissue while allowing adaptation of the plan for tumours that respond rapidly to treatment (as well as those that prove unresponsive to standard doses of radiation).
If that’s the back-story, it’s already evident that MR/RT is poised to drive ongoing innovation and transformation along the continuum that is treatment planning, delivery and management. Most notably, while enabling the clinician to visualize a tumour target, and its adjacent anatomy, with exceptional soft-tissue contrast both prior to and during treatment, MR/RT systems also have the capacity to acquire functional and quantitative images. It’s this capability, in turn, that points the way to the long-anticipated end-game: the fusion of biological targeting and adaptive radiotherapy – otherwise known as biological image-guided adaptive radiotherapy (BIGART).
Think big, think BIGART
So what might BIGART look like in terms of a next-generation radiation oncology workflow? Put simply, with the help of frequent anatomical and functional imaging, it is hoped that MR/RT systems will be able to monitor changes in the volume, shape and biological characteristics of the tumour so that the treatment plan can be updated regularly in line with the observed treatment response. “A new era in cancer treatment is coming into view,” according to Uulke van der Heide, group leader at the Netherlands Cancer Institute (NKI) and professor of imaging technology in radiation oncology at the Leiden University Medical Center. “In that sense,” he adds, “MR/RT technology is a game-changer, giving us the platform we need to undertake clinical studies of biological targeting for adaptive radiotherapy.”
Van der Heide, for his part, is at the clinical sharp-end of the BIGART development effort. Within the Elekta MR-Linac Consortium, for example, he heads up a working group on quantitative imaging biomarkers (QIBs), a range of metrics spanning tumour morphology, biology and function that could one day inform routine assessment of treatment response during radiotherapy. Currently, the QIB working group comprises more than 15 cancer treatment centres across Europe, the US, Canada and Asia – all of them Unity clinics and all of them aligned with the broader Consortium remit to drive improved patient outcomes through the application of MR-Linac technology.
Uulke van der Heide: “A new era in cancer treatment is coming into view.” (Courtesy: NKI)
In terms of specifics, the QIB collaboration is active in developing clinical trial strategies, quality assurance programmes and data acquisition/analysis methods to fast-track BIGART research on the Unity MR-Linac platform. “The starting point, of course, is to identify QIBs that show changes early during treatment and in turn are predictive of treatment outcome,” explains van der Heide. “The first pilot studies are encouraging and show that repeated QIB measurements are feasible using a range of quantitative MRI [qMRI] techniques during patient treatment on the Unity system.”
The opportunity, it seems, lies in the diversity of qMRI options available to clinicians – and, by extension, the matrix of radiobiological insights that could over time support online adaptation of radiotherapy treatment planning. Diffusion-weighted imaging (DWI) is the most studied qMRI technique in this regard, yielding data on the cellular density of tumour tissues (with reductions in density linked to the breakdown of cell membranes and necrosis during radiotherapy). Another qMRI modality showing early promise is intravoxel incoherent motion (IVIM), which has the potential to track changes in tissue perfusion and vascular permeability in the tumour microenvironment.
Taken together, there are already multiple – and proliferating – lines of qMRI enquiry with significant clinical potential. “The combination of qMRI techniques reflects the richness and complexity of tumour tissues,” says van der Heide, “and is definitely something we will be pursuing in a clinical setting. One qMRI modality is not going to cut it for biological targeting – a multimodal strategy will be key.”
A collective endeavour
Over the longer term, however, several conditions need to be met if researchers are to maximize the clinical benefits of qMRI. For starters, it’s important to know how to relate measurements from the MR-Linac to measurements taken on diagnostic MR scanners outside the MR/RT domain – knowledge that will ultimately enable clinical studies to include measurements from before treatment when considering the patient treatment response.
To achieve this goal, the QIB working group has adjusted the qMRI measurement protocols on the MR-Linac to consider the detailed differences in MR hardware implementation. “We have done a set of multicentre studies – using digital and physical phantoms – to demonstrate accuracy and reproducibility of the MR-Linac qMRI measurements,” notes van der Heide. “The accuracy is similar to the previously reported literature for diagnostic scanners.”
Equally important is ensuring a high level of confidence that measurements performed on one MR-Linac can be related to any other MR-Linac. As such, the underlying hardware needs to be consistent across different systems, while different research teams must also use standardized qMRI measurement protocols. The QIB working group is currently developing these protocols for its network of cancer treatment centres.
Another focus of the working group is to understand, by assessing the repeatability of qMRI measurements using test-retest studies, which changes in qMRI values can be linked to the effects of radiotherapy. Finally, van der Heide and his colleagues are in the process of establishing QA best practice for the QIB programme. “The MR-Linac Consortium is growing and we expect more cancer centres to get involved in these studies,” adds van der Heide. “It’s therefore essential to have a managed and standardized QA programme to support the qMRI development effort across all participating centres.”
In the same way, the QIB working group clearly benefits from the fact that all members are Elekta Unity users. “The common MR-Linac platform will take away a lot of the multicentre variability during clinical trials,” van der Heide concludes. “Over time, though, we hope to join up our QIB development efforts with those of the ViewRay MRIdian community.”
The future’s bright, it seems, for qMRI and BIGART. Clinical trials are on the MR-Linac Consortium’s development roadmap in the next two to three years, with the first step likely to involve an investigation of daily changes in qMRI values in different tumour sites across multiple oncology clinics.
Globules of crystallizing minerals can spontaneously eject themselves from a salt solution as they evaporate. This unexpected phenomenon, which was observed by researchers at the Massachusetts Institute of Technology (MIT), might be harnessed to prevent damage to pipes and other structures in prolonged contact with seawater. According to study leader Kripa Varanasi, the effect might even allow untreated salty water to be used in certain industrial cooling systems.
When salts and other dissolved minerals precipitate and accumulate on a material, damage known as crystal or mineral fouling frequently results. This damage poses a problem for water treatment, thermoelectric power production and many other industrial processes. Not only does it lower the efficiency of these processes, it also necessitates often expensive solutions, such as water pre-treatment, to counter the effect.
Spindly “leg-like” structures
In their new work, Varanasi and his colleagues Samantha McBride and Henri-Louis Girard studied how 5 μl water drops containing sodium chloride (NaCl) evaporate from a superhydrophobic surface heated to 90 °C. This surface, or “nanograss” as they call it, is covered in nano-sized pointed features and valleys.
They found that during evaporation, the crystallizing salts initially form a spherical shell around the droplet – nothing unusual there. During the last stage of evaporation, however, the researchers were surprised to see these spherical shells suddenly lift themselves up on spindly leg-like structures from different contact points on the surface. The process then repeats, producing multilegged shapes that (depending on how you look at them) resemble an octopus, an elephant or a jellyfish. The researchers call these structures “crystal critters”.
The narrow legs supporting these critters continue to grow upwards from the contact points until all the water in the drop has evaporated. The legs then begin to taper off, as there is less water (and therefore less dissolved salt) near the end of the evaporation process, until the structure resembles an icicle balanced on its tip. Eventually, the legs become so long that they cannot support the critter’s weight, so the critter breaks off and falls away.
Controlled critter ejection
The researchers found that the rate at which the crystallized protrusions grow increases with temperature. This suggests that the lift-off process could be accelerated to minimize the time the crystals spend on the surface. The researchers also showed that they could make the critters roll in a specific direction by creating heat gradients across the surface. Because the legs grow shorter on the side that is cooler (and longer on the side that is warmer), the crystal structure tends to tip and roll in the direction of lower temperature over time. As the remaining water continues to evaporate, new legs then form at the crystal’s second location. Indeed, crystal structures can roll two or even three times before the drop fully evaporates.
The effect also depends on the texture and length scale of the features on the hydrophobic surface, McBride explains. If these parameters are not carefully controlled, the saltwater drop will simply become trapped inside the nanotexture and form a sphere without ever lifting up. For example, a nanograss with ridges measuring between 0.1 and 1 μm forms critters, but one with regularly spaced microposts 10 μm long shows no vertical growth or self-ejection, despite having a contact angle similar to the one with the nanograss texture. Experiments on a superhydrophobic microhole substrate with 10 μm square holes also failed to form crystal critters, McBride says.
Limiting scaling formation
Varanasi, a mechanical engineer, suggests the newly discovered effect could help limit the formation of mineral scale inside pipes, where it can cause blockages. It could also be useful for other metal structures in a marine environment or that are exposed to seawater on land. For example, heat exchangers become much less efficient when their surfaces are fouled, and Varanasi notes that surface fouling is a major issue in water distribution pipes, geothermal wells, desalination plants and many renewable energy systems.
“The good news is that the methods for making the textured superhydrophobic surfaces are already well developed, so implementing the effect we have observed on the industrial scale should be relatively rapid,” Varanasi says. He adds that exploiting the effect could even make it possible to use salty water for cooling systems that would otherwise require valuable and often limited freshwater supplies. Using seawater for cooling where possible would, of course, be better for the environment.
The researchers, who report their work in Science Advances, say they now plan to continue studying the unusual self-ejection in other nanotextured materials. “This will allow us to better understand how to implement the effect in real-world systems, including heat exchange surfaces,” Varanasi tells Physics World.
Breathing – a reflexive action that is indispensable to life – may also be a way to detect cancer: each breath we exhale contains thousands of molecules that provide information about our health.
Detecting lung cancer
Lung cancer was the leading cause of cancer-related deaths worldwide last year. And because symptoms often don’t appear until the disease is advanced, detecting lung cancer early is critical.
Currently, doctors screen people who are at high risk of getting lung cancer using X-ray or low-dose computed tomography (CT) images. Though research suggests that low-dose CT may reduce lung cancer-related mortality by 20% – a result that recent studies have confirmed – the technique has some risks, including frequent false positives that suggest a person has lung cancer when they do not.
Mantang Qiu, a thoracic surgeon and researcher, and his colleagues at Peking University People’s Hospital combined high-pressure photon ionization time-of-flight mass spectrometry (TOFMS) and a support vector machine algorithm to classify patients as having lung cancer or not, using only the patients’ exhaled breath. They report the results of their research in a JAMA Network Open paper.
Accelerating ions to analyse exhaled breath
Unlike other breath tests that try to detect cancer, high-pressure photon ionization TOFMS is sensitive and fast and analyses exhaled breath directly, without any pre-processing.
The technique “can sniff the ‘fingerprint’ of each breath sample and tell which one is from lung cancer,” explains Qiu.
In the high-pressure photon ionization TOFMS system (manufactured by Shenzhen Breax Biological Technology Company Ltd), a vacuum ultraviolet lamp irradiates an exhaled breath sample with high-energy photons, ionizing some of the exhaled breath molecules. Then, an electric field accelerates the ions toward a detector, which records the time each ion takes to arrive. With these time-of-flight data, the ions are separated according to their mass-to-charge ratio. Researchers can then use mass spectra created from the time-of-flight data to identify molecules and chemical compounds in each exhaled breath sample.
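The conversion from flight time to mass-to-charge ratio follows from the kinetic energy the ions pick up in the accelerating field (qU = ½mv²). Here is a minimal sketch of that relationship with illustrative values only – not the actual geometry or calibration of the commercial instrument:

```python
# Illustrative time-of-flight calculation: an ion accelerated through a potential U
# acquires kinetic energy qU = 1/2 m v^2, so its flight time over a drift length L
# is t = L * sqrt(m / (2 q U)) and the mass-to-charge ratio is m/q = 2 U t^2 / L^2.
import math

e = 1.602e-19    # elementary charge, C
amu = 1.661e-27  # atomic mass unit, kg

U = 1000.0       # assumed accelerating potential, V
L = 1.0          # assumed field-free drift length, m

def flight_time(mass_amu, charge=1):
    """Time of flight (s) for an ion of the given mass and charge state."""
    m = mass_amu * amu
    q = charge * e
    return L * math.sqrt(m / (2 * q * U))

def mass_to_charge(t, charge=1):
    """Recover m/z (in amu per elementary charge) from a measured flight time."""
    return 2 * e * U * t**2 / L**2 / amu

t_voc = flight_time(78)  # e.g. a benzene-like volatile at m/z 78
print(f"t ≈ {t_voc * 1e6:.1f} µs, recovered m/z ≈ {mass_to_charge(t_voc):.1f}")
```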
Detecting lung cancer with high-pressure photon ionization TOFMS
Qiu and his colleagues collected exhaled breath samples from 139 patients who had lung cancer and 289 people who did not, according to CT and low-dose CT imaging scans, respectively. Those people with lung cancer then had surgery to biopsy, remove and study their tumours. Low-dose CT images from people without lung cancer were reviewed first by an artificial intelligence program and then confirmed by a radiologist. People with benign pulmonary nodules in their lungs were not included in the study. Room air was also collected to ensure that environmental effects were not seen in the results.
The researchers processed all exhaled breath samples using high-pressure photon ionization TOFMS. Then, they put the results of the high-pressure photon ionization TOFMS studies and each participant’s lung cancer status (lung cancer or not) into a support vector machine – a supervised learning model that analyses and classifies data.
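As a rough illustration of this classification step (not the study's actual pipeline or data), a support vector machine can be trained on labelled spectra in just a few lines using scikit-learn:

```python
# Minimal sketch of training a support vector machine on mass-spectral features
# (synthetic stand-in data; the study's real inputs were high-pressure photon
# ionization TOFMS spectra of exhaled breath, labelled lung cancer / no cancer).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_features = 428, 200            # e.g. one intensity per m/z bin
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 2, size=n_samples)      # 1 = lung cancer, 0 = no cancer (stand-in labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Scale each spectral feature, then fit an RBF-kernel SVM classifier
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")  # ~0.5 on random stand-in data
```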
The researchers’ support vector machine performed well: in a validation dataset it was accurate, detected every lung cancer case and gave an estimate of how reliably people without the disease could be correctly identified. These results suggest that high-pressure photon ionization TOFMS and the support vector machine could be used to non-invasively and quickly identify people who have lung cancer without requiring CT machines or other equipment.
“This is the first step toward using high-pressure photon ionization TOFMS for lung cancer detection,” says Qiu.
The researchers now are working to identify specific molecules from exhaled breath that are unique to people who have lung cancer. They also want to integrate high-pressure photon ionization TOFMS into current clinical workflows.
Multimessenger observations of neutron stars have been used by astrophysicists in the US to put Einstein’s general theory of relativity to the test – and the 106-year-old theory has passed with flying colours.
A neutron star is the dense core remnant of a massive star that has exploded as a supernova. Containing more mass than the Sun but with a radius of just 10–12 km, neutron stars are incredibly dense and generate huge gravitational fields. These extreme conditions provide a laboratory for testing both the Standard Model of particle physics and general relativity.
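To put “incredibly dense” into numbers, a quick estimate (assuming a typical 1.4-solar-mass star with an 11 km radius) gives a mean density of around 5 × 10¹⁷ kg/m³ – hundreds of millions of tonnes per cubic centimetre:

```python
# Quick density estimate for a neutron star (assumed values: 1.4 solar masses, 11 km radius).
import math

M_sun = 1.989e30                    # kg
M = 1.4 * M_sun                     # assumed neutron-star mass
R = 11e3                            # assumed radius in metres

rho = M / (4 / 3 * math.pi * R**3)  # mean density, kg/m^3
print(f"mean density ≈ {rho:.1e} kg/m^3")           # ~5e17 kg/m^3
print(f"≈ {rho * 1e-6 / 1e3:.1e} tonnes per cm^3")  # hundreds of millions of tonnes per cm^3
```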
A prime goal of neutron-star research is to determine the equation of state – the relationship between pressure and energy density inside the star – which in turn fixes the relationship between a star’s mass and its radius. The equation of state depends upon the nature of the matter inside a neutron star, be it neutrons, a quark–gluon plasma, or a more exotic type of particle such as hyperons.
“A neutron star’s mass and radius are highly sensitive to both the equation of state and the gravitational theory used to model the star,” says Hector Silva of the Max Planck Institute for Gravitational Physics in Potsdam, who adds that this interrelatedness has been a “stumbling block” in efforts to test gravity using just the bulk properties of neutron stars.
Lovely relations
A breakthrough came in 2013 when Nicolás Yunes of the University of Illinois at Urbana-Champaign, and Kent Yagi of the University of Virginia, discovered the “I-Love-Q” relations. The relations vary depending on which model of gravity you subscribe to, but in general they show how three of a neutron star’s bulk properties relate to one another. One property is the neutron star’s moment of inertia and another is its tidal Love number. The latter describes the rigidity of a neutron star, and therefore how easily it deforms in the gravitational field of a companion object – which is an important factor in binary-neutron-star mergers. The third property is the quadrupole moment, which defines how the star’s mass is distributed across its oblate shape.
Now Silva and Yunes, along with A Miguel Holgado of Carnegie Mellon University in Pennsylvania and Alejandro Cárdenas-Avendaño of the University of Illinois at Urbana-Champaign, have applied their model to real neutron stars with a little help from the Neutron Star Interior Composition Explorer (NICER) experiment on board the International Space Station, and the LIGO/Virgo gravitational-wave detectors.
In 2019 NICER directly measured the mass and radius of the isolated neutron star PSR J0030+0451, irrespective of the equation of state. Now, Silva, Yunes, Holgado and Cárdenas-Avendaño have used these measurements to calculate the star’s moment of inertia, and then used the I-Love-Q relation to derive the Love number and quadrupole moment. Meanwhile, gravitational-wave measurements of the neutron-star merger GW170817 provided an independent measure of the Love number for a neutron star with similar mass to PSR J0030+0451. Knowing these two values permitted a test of general relativity.
“The test is to check whether the inferred value of the Love number from the ‘I-Love’ relation is the same as the one measured with LIGO,” Yunes tells Physics World. “If it is, you’ve passed the test. If it isn’t, then it’s a sign of deviation from general relativity.”
Gravity’s mirror image
One application of the test is to constrain a property known as gravitational parity, explains Yunes. In physics, parity refers to mirror symmetry: the idea that a process behaves the same way as its mirror image. For example, when particles such as kaons decay we would naively expect the decay and its mirror image to proceed in the same way – but this is not what happens, because parity is violated in the weak interactions of the Standard Model.
In general relativity, gravitational parity should be preserved. However, if a modified form of gravity were at work inside a neutron star it would not necessarily preserve parity. This deviation from general relativity would be detectable in the polarization of gravitational waves measured by LIGO/Virgo, or in the frequency of gravitational waves emitted by binary black holes.
In this case, general relativity successfully passed the test. “Our result means that parity is preserved in gravity at the scale of neutron stars,” says Yunes. The next step, he says, would be to test for gravitational parity in an even more extreme environment, such as that of the inspiral and merger of black holes.
The team’s findings would not have been possible without the new era of multimessenger astronomy – our ability to study astronomical objects not just in electromagnetic waves, but also in gravitational waves.
“It is quite nice that one has to combine neutron star observations in electromagnetic radiation and gravitational waves to perform this test,” says Silva. “This possibility was out of reach before 2019 and highlights the importance of using multimessenger observations to learn more about physics.”
Detailed view Jessica Esquivel is seeking to measure the muon’s magnetic moment at the Muon g-2 experiment, with an error of only 140 parts per billion. That’s like having 7128 jigsaw puzzles of 1000 pieces each, with just one piece missing in total. (Courtesy: Fermilab)
As a postdoc at the Fermi National Accelerator Laboratory (Fermilab), I was interested to find out we have a long history of implementing neural networks, a subset of AI, on particle physics data. In May 1990, when I was a two-year-old focused on classifying the sounds of various animals on Old McDonald’s farm, physicist Bruce Denby was hosting the Neural Network Tutorial for High Energy Physicists conference. Fast forward to 2016 and we see the first particle-physics collaboration, the NOvA neutrino experiment, publishing its work (JINST 11 P09001) using “convolutional neural networks” (CNNs) – a type of image-recognition neural network that is based on the human visual system. My own graduate research, “Muon/pion separation using convolutional neural networks for the MicroBooNE charged current inclusive cross section measurement” (DOI: 10.2172/1437290), published in 2018, also featured CNNs.
Over the last three decades, the particle-physics community has welcomed AI with open arms. Indeed, high-energy physicists Matthew Feickert and Benjamin Nachman have set up a collection of all particle-physics research that exploits machine-learning (ML) algorithms – A Living Review of Machine Learning for Particle Physics – which now includes more than 350 papers. ML algorithms are also being applied to particle identification, from gigantic detectors at the Large Hadron Collider at CERN to neutrino experiments including NOvA and MicroBooNE. Indeed, the ML algorithm that NOvA implemented improved the experiment’s sensitivity as much as if it had collected 30% more data.
But the benefits of AI in particle physics don’t stop there. We use particle-interaction simulations to develop data-analysis tools, such as tracking and calibration, as well as to compare theoretical models of interactions within the Standard Model of particle physics. By implementing AI for model optimization, and using knowledge of “gauge symmetries”, Massachusetts Institute of Technology physicist Phiala Shanahan has noticed significant gains in the efficiency and precision of numerical calculations (Phys. Rev. Lett. 125 121601).
AI has also, in turn, benefited from the particle-physics community. CNNs are very good at pattern recognition in 2D space, but falter at higher dimensions. To help CNNs detect pixels on 3D shapes such as spheres and other curved surfaces, Taco Cohen at the University of Amsterdam and colleagues have built a new theoretical “gauge-equivariant” CNN framework that can identify patterns on any kind of geometric surface. Gauge CNNs resolve the 2D-to-3D problem by encoding gauge covariance, a standard theoretical assumption in particle physics. This ever expanding partnership between AI and particle physics is benefiting both fields tremendously.
What are the disadvantages of using AI?
Many of the ML algorithms that have been developed focus on point predictions. As you train a neural network, it maps a datapoint to its associated label. Through this training, a generalized predictor is created that can be used on datapoints outside the training set. For example, say we have a data sample of images and associated labels. The images would be my X dataset, from X1 to Xn, and the labels would be my Y dataset, from Y1 to Yn. After training a neural network, we’d have a predictor that has learned to map X to Y. We can then use this predictor on any new datapoint Xn+1, and a prediction Yn+1 will be made. The issue with such a prediction is that we get a guess at the label for Xn+1, but we don’t get a quantified uncertainty with this guess. Being able to calculate uncertainties in particle physics is crucial. For instance, on the Muon g-2 experiment that I am currently working on, we are trying to measure the muon’s magnetic moment with an error of only 140 parts per billion. That’s like having 7128 jigsaw puzzles of 1000 pieces each, with just one piece missing in total.
Our expertise in quantifying uncertainties could be of great benefit to the AI community, just as their expertise in building AI architectures is useful for us. The problem of uncertainty around predictions has become more and more prominent in the AI community, as is highlighted by racist faux pas such as Google Images labelling photos of Black individuals as gorillas – and the tech conglomerate has yet to implement a long-term fix to this issue. Attaching uncertainties to prediction labels, like the label “gorilla”, could potentially have prevented this mislabelling. Recent attempts at quantifying uncertainties are under way, such as the work by statistician Rina Barber and colleagues, which trains N predictors, with each predictor having one datapoint removed from the training dataset (Ann. Statist. 49 486). This method of uncertainty quantification is computationally expensive, but particle physicists are experts in navigating big datasets. As we have a vested interest in implementing uncertainties in AI tools, we may have insight into how to optimize such algorithms.
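A toy version of that leave-one-out idea is sketched below under simplifying assumptions (this is not the full procedure of Barber and colleagues): train n predictors with one datapoint held out each, then use the spread of the held-out errors to attach a crude interval to a new point prediction.

```python
# Toy leave-one-out ensemble: each predictor is trained with one datapoint removed,
# and the held-out residuals provide a crude prediction interval for a new point.
# (A simplified sketch of the idea, not the full jackknife+ method.)
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n = 200
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + 0.2 * rng.normal(size=n)  # noisy 1D regression problem

residuals = []
for i in range(n):
    mask = np.arange(n) != i                    # drop the i-th point from training
    model_i = Ridge(alpha=1.0).fit(X[mask], y[mask])
    residuals.append(abs(y[i] - model_i.predict(X[i:i + 1])[0]))

# Point prediction from a model trained on all the data...
full_model = Ridge(alpha=1.0).fit(X, y)
x_new = np.array([[1.0]])
y_hat = full_model.predict(x_new)[0]

# ...plus an interval from the 90th percentile of the leave-one-out residuals.
half_width = np.quantile(residuals, 0.9)
print(f"prediction {y_hat:.2f} ± {half_width:.2f} (90% empirical interval)")
```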
Another disadvantage of particle physicists using AI, as Charles Brown and I wrote in an online article for Physics World last year, is the lack of ethical discussions about the impact our work in AI may have on society. It’s easy for the particle-physics community to believe the false narrative that the research we do and the tools we develop will only be used for particle-physics research, when historically we’ve seen time and time again – be it the Manhattan Project or the development of the World Wide Web – that the work we do translates to society as a whole. As US theoretical cosmologist Chanda Prescod-Weinstein says, “I’m talking about integrating into a scientific culture that has accepted the production of death as a tangential, necessary evil in order to gain funding. One that will march for science without asking what science does for or to the most marginalized people. One that still doesn’t teach ethics or critical history to its practitioners.”
There are practical disadvantages to using AI in particle physics, such as the difficulty of quantifying uncertainties. But by focusing our collective intellect on advancing and developing new AI tools without conversations about their moral implications, we also run the risk of contributing to the oppression of marginalized people globally.
Signal to noise A MicroBooNE neutrino data event, showing the neutrino–argon interaction point (vertex) and the tracks left behind by secondary particles, including the long track made by a muon. (Courtesy: Fermilab)
How do you use AI in your research?
My graduate research used CNNs to classify neutrino–argon interactions called charged-current neutrino interactions. This signal is characterized by a common origination point (called the vertex) of multiple tracks, one of which is a relatively long track made by a muon particle. A background that contaminates this signal at low energies is neutral-current neutrino interactions, where the long track is made by a pion particle. Before my work, the only way of separating this signal from background was to apply a track length cut of 75 cm, which is the pion stopping distance (see image, above). My research focused on training a CNN on muon and pion data for use in separating signal from background to recover the charged-current neutrino interactions below the standard 75 cm track length cut. I’m now working on developing AI algorithms for better beam storage and optimization in the Muon g-2 experiment, which recently hit the headlines for showing a disparity between the predicted and measured values of the muon’s magnetic moment.
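For readers curious what such a classifier looks like in code, here is a minimal PyTorch sketch of a two-class CNN acting on detector-image crops – an illustrative architecture only, not the actual MicroBooNE network:

```python
# Minimal CNN for two-class track classification (muon vs pion), in PyTorch.
# Illustrative architecture and input size only - not the MicroBooNE network.
import torch
import torch.nn as nn

class TrackCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # 2 outputs: muon, pion

    def forward(self, x):                             # x: (batch, 1, 64, 64) image crops
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TrackCNN()
images = torch.randn(8, 1, 64, 64)                    # stand-in detector-image crops
labels = torch.randint(0, 2, (8,))                    # 0 = pion, 1 = muon (stand-in labels)
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()                                       # one training step's backward pass
print(f"cross-entropy loss on random data: {loss.item():.2f}")
```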
In the future, what specific areas of particle physics will AI be most useful and most crucial in moving forward?
In August 2020 the US Department of Energy (DOE) committed $37m of funding to support research and development in AI and ML methods to handle data and operations at DOE scientific facilities, including high-energy physics sites. This new investment perfectly highlights the union of AI and particle physics, and I’m sensing this marriage will stand the test of time. At Fermilab, there are nine particle accelerators on site, each of which has countless subsystems with data output that needs to be monitored in real time to make sure the many experimental collaborations receive the data necessary to answer the biggest questions of our generation.
As Fermilab moves towards developing the Proton Improvement Plan II, a concerted effort to consolidate the monitoring of hundreds of thousands of subsystems, increase accelerator run time, and improve the quality of each particle accelerator is under way – with AI being the powerhouse that will turn all this into a reality. “Different accelerator systems perform different functions that we want to track all on one system, ideally,” says Fermilab engineer Bill Pellico. “A system that can learn on its own would untangle the web and give operators information they can use to catch failures before they occur.”
Not only will AI have the ability to improve data collection and processing, but these algorithms could also be used to alert accelerator operators of potential issues before a system failure, thereby increasing beam up-time. Aside from improved particle-accelerator monitoring, AI will become critical in storing and analysing data from the next wave of particle detectors, such as the Deep Underground Neutrino Experiment, which will generate more than 30 petabytes of data per year, equivalent to 300 years of HD movies. Those figures aren’t even taking into account the massive data surge that would arise from, say, a supernova event, should one be captured. Over the coming decades, AI algorithms will undoubtedly be taking a leading role in data processing and analysis across particle physics.
Researchers in Switzerland have made a popular materials characterization technique accessible to ultrafast X-ray laser light for the first time. The new approach allowed the team to film atomic processes that play out in mere femtoseconds, and it could make the technique – known as transient grating spectroscopy – an important complement to other methods such as inelastic neutron and X-ray scattering for imaging materials at atomic-scale resolution.
Transient grating spectroscopy uses two interfering laser beams instead of just a single beam to characterize materials. In this non-destructive technique, researchers use these beams to create a temporary interference pattern (the transient grating) that repeats at regular intervals, known as Talbot planes. The characteristics of the sample can be probed by placing it in one of these planes.
The distance between the stripes in the interference pattern depends on the wavelength of light pulses used to create the pattern. For wavelengths spanning the visible to the ultraviolet part of the electromagnetic spectrum, this distance is on the order of hundreds of nanometres – meaning that if the sample being imaged contains any features smaller than this, they will not be visible. Using radiation with shorter wavelengths, such as X-rays, can improve the technique’s resolution. However, crossing and positioning two X-ray beams well enough to generate a grating with nanometre stripe size has proved challenging.
Very hard X-rays from the SwissFEL
Together with colleagues in the US and Italy, researchers led by Cristian Svetina of the Paul Scherrer Institute (PSI) and Jérémy Rouxel of the EPFL overcame these difficulties to make a transient grating from very hard X-ray beams. The radiation they used was produced at the SwissFEL facility and has an energy of 7.1 keV, which corresponds to a wavelength of 0.17 nm – the diameter of a medium-sized atom. This technical feat allowed the researchers to characterize a sample of bismuth germanate with a resolution down to individual atoms, at ultrashort exposure times of fractions of a femtosecond (10⁻¹⁵ s).
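The quoted wavelength follows directly from the photon energy via λ = hc/E; a quick check with standard constants (nothing specific to SwissFEL):

```python
# Quick check that 7.1 keV X-ray photons have a wavelength of about 0.17 nm (lambda = hc/E).
h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electronvolt

E = 7.1e3 * eV          # photon energy, 7.1 keV in joules
wavelength = h * c / E  # metres
print(f"wavelength ≈ {wavelength * 1e9:.3f} nm")  # ≈ 0.175 nm, roughly one atomic diameter
```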
Talbot planes
The new approach involves first sending ultrashort X-ray pulses onto a transmission phase grating, or mask. This comb-like structure is made of diamond, which is robust to high-energy X-rays, and was specially fabricated by PSI team member Christian David. The “teeth” of the comb are spaced 2 microns apart and break up the X-ray beam into finer partial beams that interfere behind the diamond grating to create the transient grating diffraction pattern. This pattern is a faithful image of the diamond grating and is repeated at each Talbot plane.
“If we place a sample in one of these Talbot planes, some atoms within the sample become excited – just as if the sample were sitting at the location of the diamond grating itself,” Svetina explains. Crucially, only the atoms exposed to the X-rays in this periodic modulation are excited, and Svetina notes that this is the technique’s main attraction, as it allows the team to selectively excite areas of interest in a sample.
No unwanted background signals
Another big advantage of the method is that it does not produce any unwanted background signals. “If the atoms are excited, you see a signal; if they are not excited, you see nothing,” Svetina explains. This is extremely valuable when measuring samples that emit only weak signals.
Members of the team note that characterizing materials on the atomic scale is becoming important for microchip manufacturers as they seek to study and improve nano-sized features on their chips. Studies like this one, which appears in Nature Photonics, will also make it possible to glean fresh information about various quantum phenomena, including heat transport in semiconducting materials and the processes involved when individual computer bits are magnetized.
The self-assembly of short peptides has recently been established as an easy route to various macromolecular nanostructures with promising applications in tissue engineering, biomedical devices and drug delivery. Some of these nanostructures may be used as degradable scaffolds for the fabrication of metal nanowires. Self-assembled peptides can form an array of nanostructures – including nanofibrils, nanotubes, nanospheres and vesicles – depending on the constituent peptide and the environmental conditions during assembly. They also display a range of nanomechanical properties and strong piezoelectricity. Unveiling their molecular structure and the mechanism of peptide self-assembly is crucial to understanding the precise molecular routes of medical conditions associated with protein misfolding, such as Alzheimer’s disease.
Peptide nanotubes and similar structures are ideal objects for correlative AFM imaging – an approach that combines molecular resolution with nanomechanical/nanoelectrical data within the same experiment. This approach could, for the first time, achieve the goal of correlating the molecular structure of a peptide nanotube with its functional properties.
This webinar, presented by Vladimir Korolkov, will focus on the practical aspects of acquiring high-resolution and nanomechanical data with both PinPoint and True Non-Contact modes on a range of peptide nanotubes.
Vladimir Korolkov received his PhD in chemistry from Moscow University in 2008. He then moved to the University of Heidelberg, Germany, where he specialized in X-ray photoelectron spectroscopy of thin films, followed by a position at the University of Nottingham, UK, where he discovered his passion for scanning probe microscopy (SPM) and became a strong advocate of SPM techniques for unlocking structure and properties at the nanoscale. He pioneered the use of higher eigenmodes of standard cantilevers to routinely achieve resolution that was previously thought to be exclusively limited to STM and UHV-STM. Vladimir has published more than 40 scientific papers, including several in Nature. Since 2018 his expertise has contributed to the industrial development of SPM technology, while his work continues to inspire and influence academic research in this field.