A fleet of tiny spacecraft could prove the existence of a primordial black hole in the outer solar system – according to two independent proposals. The primordial black hole could be playing the gravitational role of “Planet Nine”, which is a hypothetical world that could explain the unusual orbits of certain Kuiper belt objects (KBOs) in the outer Solar System.
Orbits of these KBOs suggest that a body a few times more massive than the Earth currently resides about 500 AU from the Sun towards the constellation Orion. Searches for Planet Nine have come up empty – but this is no surprise because at that distance, even a large and reflective planet would be barely detectable with the kind of wide-field-of-view telescopes used for large-area surveys.
This lack of detection has led some to speculate that Planet Nine is not a planet but a black hole smaller than your fist. Improbable as it sounds, this was suggested in 2019 by Jakub Scholtz at the UK’s Durham University and James Unwin at the University of Illinois at Chicago in the US. They argue that the gravitational field that perturbs those KBOs’ orbits could originate from a primordial black hole captured by the Sun billions of years ago.
While such an object would be impossible to spot with telescopes, it might, says Edward Witten of the Institute for Advanced Study in Princeton, US, be revealed by a more aggressive search. In a paper posted on the arXiv preprint server, he suggests that a few-Earth-mass black hole could be detected by launching a fleet of hundreds or thousands of lightweight probes towards the object.
Earthbound laser propulsion
His proposal is a more modest version of the Breakthrough Starshot project, which aims to send ultralight (about 1 g) probes on a 20-year journey to the nearby star Alpha Centauri using an earthbound laser array to boost the spacecraft to 20% of the speed of light (0.2c). Using a similar system, Witten reckons a 10-year journey to 500 AU could be achieved at 0.001c with much larger spacecraft (about 100 g) – necessitating a less daunting feat of miniaturization. This is still 20 times the speed of NASA’s New Horizons Pluto probe.
By scattering a host of such probes in the general direction of the hypothetical black hole, a lucky few might pass within tens of AU of the object, accelerating slightly as they did so. If the probes send regular, timed signals back to Earth, the gravitational field of the black hole would cause a lengthening of the interval between pulses.
Witten calculates that to detect the black hole using this scheme, the probes’ timing measurements would need to be accurate to about 10⁻⁵ s over the course of a year. This is well within the abilities of existing atomic clocks, but it is hard to imagine how such devices could be squeezed into 100 g spacecraft.
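A back-of-the-envelope estimate, sketched below in Python using the standard impulse approximation for a gravitational flyby, shows why timing at that level would be enough. The black-hole mass, the 20 AU impact parameter and the one-year baseline are illustrative assumptions, not figures taken from Witten’s paper.

```python
# Rough order-of-magnitude estimate (impulse approximation) of the timing
# signal a probe would accumulate after a distant flyby of a few-Earth-mass
# black hole. All specific numbers below are illustrative assumptions.

G    = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8            # speed of light, m/s
AU   = 1.496e11           # astronomical unit, m
M_E  = 5.972e24           # Earth mass, kg
year = 3.156e7            # seconds in a year

M = 5 * M_E               # assumed black-hole mass (a few Earth masses)
b = 20 * AU               # assumed impact parameter ("tens of AU")
v = 1e-3 * c              # probe cruise speed, 0.001c

# Cruise time to 500 AU at this speed (compare with the ~10 years quoted)
t_cruise = 500 * AU / v / year

# Velocity kick from the flyby in the impulse approximation
dv = 2 * G * M / (b * v)

# Displacement accumulated one year after closest approach, and the
# corresponding shift in pulse arrival times if it lies along the line of sight
dx = dv * year
dt = dx / c

print(f"cruise time to 500 AU  : {t_cruise:.1f} yr")
print(f"velocity kick          : {dv*1e3:.1f} mm/s")
print(f"timing shift after 1 yr: {dt*1e6:.0f} microseconds")
```

With these assumed numbers the accumulated shift in pulse arrival times comes out at a few hundred microseconds – comfortably above the 10⁻⁵ s timing accuracy quoted above.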
“It is far from clear that this approach is practical, or that it is the best way even if it is practical,” admits Witten.
Transverse deflections
In response to Witten’s proposal, Scott Lawrence and Zeeve Rogoszinski of the University of Maryland in the US developed an alternative approach, which they have described on arXiv. It does away with the need for onboard timing systems and instead relies on detecting transverse deflections of the probes’ trajectories caused by the black hole.
Whereas the accelerations in Witten’s scheme act only while the probes are near the black hole, sideways displacements like those considered by Lawrence and Rogoszinski are permanent and build up over time. A 1000 km displacement would take several years to accumulate, and at a distance of 500 AU the duo calculate it would be detectable from Earth using very-long-baseline interferometry at high radio frequencies. Although this approach sidesteps the need for spaceborne atomic clocks, getting the probes to transmit or even just reflect such signals would be a significant challenge.
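To get a feel for the numbers, the short sketch below converts such a displacement into the angle it subtends at Earth; the sub-milliarcsecond VLBI resolution mentioned in the comment is a typical figure for high radio frequencies, not a value taken from the paper.

```python
# Rough sketch: the angle subtended at Earth by a 1000 km transverse
# displacement of a probe 500 AU away, compared with typical VLBI resolution.

import math

AU = 1.496e11                      # astronomical unit, m
displacement = 1e6                 # 1000 km, in m
distance = 500 * AU

theta_rad = displacement / distance
theta_mas = math.degrees(theta_rad) * 3600 * 1e3   # milliarcseconds

print(f"angular displacement: {theta_mas:.1f} mas")
# ~2.8 mas, comfortably above the sub-milliarcsecond resolution that
# very-long-baseline interferometry typically reaches at high radio frequencies.
```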
And still it might all be for nothing. In yet another contribution on arXiv, Thiem Hoang at the Korea Astronomy and Space Science Institute and Abraham Loeb at Harvard University in the US point out that both proposals treat the spacecraft as being subject only to gravity. In reality, drag and electromagnetic forces from the uneven interstellar medium would also perturb the probes’ trajectories, overwhelming the signal from the black hole.
Astronomer Mike Brown, at Caltech in the US, who with Konstantin Batygin predicted the existence of Planet Nine from the orbits of KBOs, finds the proposals interesting but ultimately unnecessary.
“I generally love these ideas, but there is zero reason to think that Planet Nine is a black hole,” says Brown. “We’re still looking hard. If we don’t find Planet Nine in any of the dedicated searches, I suspect it will turn up pretty quickly in LSST [the Large Synoptic Survey Telescope, now called the Vera C Rubin Observatory], though now I don’t know for sure when that will happen.”
The (petro-)chemical industry applies energy-intensive thermal/catalytic processes to convert fossil fuels into different intermediates, bulk chemicals and fuels, while emitting a sizeable fraction of the anthropogenic CO2 emissions associated with climate change. In contrast, electroreduction of CO2 to intermediates such as CO, ethylene and ethanol not only uses some of the otherwise-emitted CO2 as feedstock, it also emits drastically less CO2 than traditional chemical processes.
This presentation will summarise the state of the art in CO2 electrocatalysis, and how it is affected by electrolyte composition, pH and cell design. It will also explore the techno-economic and life-cycle prospects of CO2 electroreduction technology. Analysis shows that addressing the significant energy requirements of the anode (oxygen evolution reaction) and the limited durability of catalysts and electrodes are key to achieving economic feasibility and carbon neutrality. We have shown that a co-conversion approach, in which organic substrates such as glycerol (a waste product of biofuel production) are oxidized at the anode in place of the oxygen evolution reaction, drastically enhances the prospects of CO2 electroreduction technology becoming a key component of a future carbon-neutral chemical industry.
This webinar presented by Paul Kenis will discuss:
Summary of the status of CO2 electrolysis.
Techno-economic and life-cycle analysis of CO2 electrolysis to identify remaining hurdles.
The prospects of CO2 electrolysis technology contributing to a future sustainable chemical industry.
Paul J A Kenis is the Elio E Tarika endowed chair and a professor of chemical and biomolecular engineering at the University of Illinois at Urbana-Champaign, and an investigator of the International Institute for Carbon-Neutral Energy Research, a collaboration between Kyushu University in Japan and UIUC.
Kenis, a native of the Netherlands, received his BS in chemistry from Radboud University in Nijmegen, where he worked on model systems for metalloproteins with Roeland Nolte, and his PhD in chemical engineering at the University of Twente, working with David Reinhoudt on films for nonlinear optical applications. As a postdoc with George Whitesides at Harvard, he explored the then emerging area of microfluidics.
At Illinois, Kenis develops microchemical systems with a range of applications: fuel cells; radiolabeling of biomolecules; protein/pharmaceutical crystallisation; and platforms for cell biology studies. His recent efforts on CO2 electroreduction pursue suitable catalysts, electrodes, electrolyser designs, determining suitable operation conditions, and performing techno-economic analysis as a guide towards more energy-efficient systems.
Kenis has authored more than 200 publications and 14 patents. He was elected a fellow of the ECS in 2019, and has previously been recognised with a 3M young faculty award, an NSF CAREER award, a Xerox award, and best paper awards from AIChE and SEBM. He is also a co-author of reports on the prospects of CO2 utilisation at scale issued by the National Academies, as well as the global Mission Innovation consortium.
As we move forward, batteries with high-energy density and a long life at an affordable cost are needed for electrification of the transportation sector and an efficient utilisation of renewable-energy sources.
This webinar will first highlight the development of oxide cathodes for lithium-ion batteries over the years. Then, it will focus on the challenges and prospects of layered oxide cathodes with high nickel content and low or no cobalt content for lithium-ion batteries. Optimised synthesis and advanced characterisation methodologies to overcome the challenges will be presented.
This webinar presented by Arumugam Manthiram will discuss:
Recognising the fundamental science behind the development of high-energy density cathodes for lithium-ion batteries in the 1980s.
Understanding the richness and complexity of layered oxide cathodes for lithium-ion batteries.
A perspective on high-energy, long-life, safe lithium-ion batteries as we march forward.
Arumugam Manthiram is currently the Cockrell Family Regents chair in engineering and director of the Texas Materials Institute and the Materials Science and Engineering Program at the University of Texas at Austin (UT-Austin). He received his PhD in chemistry from the Indian Institute of Technology Madras in 1981. After working as a postdoctoral researcher at the University of Oxford and at UT-Austin with 2019 Chemistry Nobel Laureate John B Goodenough, he became a faculty member in the Department of Mechanical Engineering at UT-Austin in 1991.
Manthiram’s research is focused on batteries and fuel cells. He has authored more than 770 journal articles with 59,000 citations and an h-index of 122. He has provided research training to more than 250 students and postdoctoral fellows, including the graduation of 60 PhD students and 26 MS students.
Manthiram is a fellow of the Materials Research Society, The Electrochemical Society, American Ceramic Society, Royal Society of Chemistry, American Association for the Advancement of Science and World Academy of Materials and Manufacturing Engineering. He received the university-wide (one per year) Outstanding Graduate Teaching Award in 2012; Battery Division Research Award from The Electrochemical Society in 2014; Distinguished Alumnus Award of the Indian Institute of Technology Madras in 2015; Billy and Claude R Hocott Distinguished Centennial Engineering Research Award in 2016; and Da Vinci Award in 2017. He is an elected member of the World Academy of Ceramics. He is a Web of Science highly cited researcher in 2017 and 2018. He served as the chair of the ECS Battery Division from 2010–2012. He founded the ECS UT Austin Student Chapter in 2006 and continues to serve as the faculty advisor.
Sunlight is an inexhaustible resource. Harnessing its power could drive a circular economy to generate energy, reduce carbon emissions, reduce materials waste, and eventually to generate the negative carbon emissions needed to bring our warming planet back into balance.
This webinar will explore opportunities in high-efficiency photovoltaics and artificial photosynthesis, and in particular, the generation of chemical fuels from sunlight by reduction of carbon dioxide.
This webinar presented by Harry Atwater will discuss:
New perspectives on future opportunities for photovoltaics.
An outlook for synthesis of hydrogen by photoelectrochemical water splitting.
Survey of directions for renewable fuels from the reduction of carbon dioxide.
Harry Atwater is the Howard Hughes professor of applied physics and materials science at the California Institute of Technology. Atwater’s scientific interests have two themes: light-matter interactions in materials and solar-energy conversion. Atwater was an early pioneer in nanophotonics and plasmonics; he gave the name to the field of plasmonics in 2001. He has created new high-efficiency solar-cell designs and has pioneered principles for light management in solar cells. He currently serves as director of the Joint Center for Artificial Photosynthesis, a Department of Energy (DOE) hub.
Atwater is a member of the US National Academy of Engineering and is also a Fellow of the APS, MRS, SPIE and the National Academy of Inventors. He is the founding editor-in-chief for the journal ACS Photonics and is an associate editor for the IEEE Journal of Photovoltaics. In 2006 he co-founded the Gordon Research Conference on Plasmonics, which he served as chair in 2008. He was also the founding director of the Resnick Sustainability Institute at Caltech, and strategic director for the QESST photovoltaics NSF Engineering Research Center. Atwater was the co-founder of Alta Devices, whose technology holds the current world records for one Sun single junction solar cell efficiency and module efficiency. He also serves as chair of the LightSail Committee for the Breakthrough Starshot programme.
Atwater has been honoured by awards, including: Clarivate Highly Cited Researcher (2014–2020); IEEE William R Cherry Award (2019); Kavli Innovations in Chemistry Lecture Award, American Chemical Society (2018); APS David Adler Lectureship for Advances in Materials Physics (2016); Julius Springer Prize in Applied Physics (2014); Fellowship from the Royal Netherlands Academy of Arts and Sciences (2013); ENI Prize for Renewable and Nonconventional Energy (2012); SPIE Green Photonics Award (2012); MRS Kavli Lecturer in Nanoscience (2010); and the Popular Mechanics Breakthrough Award (2010).
He served as president of the Material Research Society in 2000 and as trustee of the Gordon Research Conferences. Atwater received his BS, MS and PhD from the Massachusetts Institute of Technology respectively in 1981, 1983 and 1987. He held the IBM Postdoctoral Fellowship at Harvard University from 1987–1988 and has been a member of the Caltech faculty since 1988, where he teaches graduate-level applied physics classes in nanophotonics, solid-state physics and device physics.
As the world’s appetite for data transmission grows, established ways of sending multiple simultaneous, independent signals down a single optical fibre – a process known as multiplexing – are falling behind. This week, US researchers report progress towards an alternative multiplexing method that could vastly increase the capacity of existing fibre-optic networks. Their technique relies on controlling the orbital angular momentum (OAM) of light using a chip-based microlaser. In a separate paper, they also demonstrate, for the first time, that they can detect the OAM of this “twisted” light electronically. The two papers together mark a significant step towards OAM multiplexing in fibre-optic communications and could also have implications for quantum communication.
Photons can carry two types of angular momentum. The first is spin angular momentum (SAM), which arises from the rotation of the polarization of the electric and magnetic fields of light as the wave propagates. Separate signals can be encoded in these polarization states, and while such “polarization division multiplexing” faces technical issues, it has found niche commercial use and several companies are developing it further. However, since photons have only two orthogonal polarization states, this type of multiplexing can, at best, merely double an optical fibre’s capacity.
The other type of angular momentum, OAM, arises when the wavefronts themselves curl around the axis of propagation like pasta spirals. OAM is quantized – the wavefront must appear identical after each full wavelength – but there is no limit to how large it can be. Better still, each OAM state is orthogonal to the others. In principle, this means that every optical fibre could transmit an infinite number of signals at each wavelength with no interference.
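The orthogonality claim is easy to verify numerically. The minimal sketch below represents each OAM state purely by its azimuthal phase profile exp(ilφ) – ignoring the radial structure of a real beam, a deliberate simplification – and shows that modes with different winding numbers have zero overlap.

```python
# Minimal numerical illustration of why distinct OAM modes do not interfere:
# beams whose phase winds as exp(i*l*phi) around the propagation axis are
# mutually orthogonal for different integer winding numbers l.

import numpy as np

phi = np.linspace(0, 2 * np.pi, 10_000, endpoint=False)

def oam_mode(l):
    """Azimuthal phase profile of an OAM state with winding number l."""
    return np.exp(1j * l * phi)

for l1, l2 in [(1, 1), (1, 2), (3, -3)]:
    overlap = np.mean(np.conj(oam_mode(l1)) * oam_mode(l2))
    print(f"|<l={l1}|l={l2}>| = {abs(overlap):.3f}")
# The overlap is 1 for identical modes and ~0 otherwise, which is why, in
# principle, each value of l can carry an independent data stream on the
# same wavelength.
```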
That’s the theory. In practice, states of light with OAM values of up to 100 have been produced, but controlling them requires physically manipulating optical components in a way that would be impractical in a working data transmission system. Several research groups have therefore developed ways to modulate the OAM of laser light before it is emitted. However, this approach faces limitations, says optical engineer Liang Feng of the University of Pennsylvania, US. “You need an external laser to feed the light in,” he explains.
Worse, the OAM of light is undetectable with a traditional photodetector. “In light with OAM, all the information is in the phase of the waves,” says Ritesh Agarwal, an optical engineer and Feng’s colleague at Pennsylvania. “All detectors are basically counting the number of photons impinging on the material at that point and producing a photocurrent based on that. The phase information is gone.”
Momentum control
The Pennsylvania researchers and colleagues at other institutions have now published back-to-back papers in Science presenting ways to overcome these obstacles. The first paper builds on work carried out in 2016 by Feng’s group, then at the University at Buffalo, US, in which he and members of his lab isolated a single chiral (clockwise or counter-clockwise) mode in a circular micrometre-scale indium gallium arsenide phosphide laser cavity. This advance meant that the laser’s output light travelled in only one direction and was emitted with a precisely defined OAM.
In the latest work, the researchers show how to switch a similar laser between different OAM modes. Adding two microscopic “control arms” around the cavity allowed them to control the SAM of the photons, and thereby to select the chiral mode, which is locked with the SAM. Modifying the cavity itself with a set of “gear teeth” allowed them to transform SAM to OAM. They could therefore increase the light’s OAM further by injecting additional SAM from the control arms and utilizing the requirement that total angular momentum (the sum of OAM and SAM) be conserved.
The result is a micrometre-scale laser that can dynamically switch between high-purity OAM states anywhere from +2 to -2 – potentially in picoseconds – without altering the output wavelength from a telecom-friendly 1493 nm. For these experiments, the researchers pumped the microlaser with a 1064 nm laser, but Feng, the work’s senior author, says this should not be necessary. “For practical applications, we can in principle change the optical pumping to electrical pumping,” he says.
Non-local detection
In the second paper, Agarwal and colleagues identify an “orbital photogalvanic effect” by which light can transfer OAM and energy simultaneously to electrons. Crucially, Agarwal explains that detection of OAM has to be non-local – meaning that the measurement can only be made by comparing values at several different locations. “In local detection, you measure a corresponding photocurrent based on the intensity at that point,” he says. “In light with OAM, all the relevant information is in phase because the light is swirling around.” This information cannot be detected at any single point, Agarwal says, but it is contained in the electric field gradient, and can therefore produce a photocurrent.
Agarwal and colleagues therefore designed and fabricated a detector that uses U-shaped electrodes made from tungsten ditelluride – a special class of material called a Weyl semimetal – to pick up this photocurrent. “We have to come up with very interesting device geometries to extract the information about the phase and ensure that other things we don’t want get cancelled out,” Agarwal explains.
The researchers focused a laser beam of constant frequency and intensity on the centre of their electrode setup and varied its OAM between +4 and -4. The current their detector measured varied in discrete steps, matching their theoretical predictions. Agarwal predicts that if the detector were cooled to superconducting temperatures, it could be used to detect single photons – a capability with significant implications for quantum communication and quantum computing protocols involving “qudits”, or photons with multiple possible states beyond 0 and 1.
Seminal achievement
Miles Padgett, a physicist at the University of Glasgow, UK, who specializes in OAM, is impressed by both papers, and believes the second may prove seminal. “The first paper represents the state of the art in solid state laser generation of these vortex beams – no question about that – but it builds on what’s gone before,” he says. “As far as I’m aware, the ability to detect [the OAM quantum number] based on the nature of the photocurrent is – well, it’s the first time anybody’s been able to do that.”
Alan Willner, an electrical engineer at the University of Southern California, US, who made one of the first demonstrations of OAM multiplexing back in 2012, concurs. “We were building things with big, expensive devices on optical tables, and to a large extent we still are,” he says. “The idea that one could, in principle, build a future system in a cost-effective, reliable, high-performance way requires these types of building blocks. I consider this to be a wonderful step forward.”
Padgett and Willner would now like to see the researchers combine the two technologies to see whether their detector can pick up the OAM variations of their microlaser. “I’d love to see a transceiver, where you have the transmitter and receiver integrated together,” Willner says.
Imagine a world where people travel as they wish. They shake hands when they make new acquaintances, embrace when they greet close friends and elderly relatives. They do not bother to laboriously disinfect their work surfaces, or wash their hands once they have dealt with the post. They go shopping as they please and find no shortage of provisions. They work in offices, laboratories, shops, restaurants and building sites. They conduct meetings in person, and think nothing of it when they jet off to their favourite holiday destination. They do all this because a COVID-19 vaccine has been developed, rolled out and administered to the entire populace, making all the chaos of 2020 a distant memory. Everything is back to normal.
This is the ending to the coronavirus pandemic we are all hoping for, and, give or take some of the details, there is no reason why it is not possible. But even in this optimistic scenario, there is a deep fear among scientists and policy makers: what happens next time? For if there is one lesson that COVID-19 has taught us, it is that our modern lifestyles are fatally ill-suited to the emergence of novel viruses – and novel viruses there will always be. Any drugs and vaccines we develop for COVID-19 will be ineffectual against the next viral pandemic, which may well be caused by a different family of virus altogether. Indeed, unless something in our approach to pandemics changes, the next one will entail another psychologically and economically crippling lockdown while scientists find a cure – however long that takes.
Yet according to one scientist, there is something we can do differently next time. Charlie Ironside of Curtin University in Perth, Australia, is not a virologist or an epidemiologist but a physicist – one who has spent 30 years specializing in semiconductor optoelectronics. His solution: far-ultraviolet light-emitting diodes (far-UV LEDs).
A narrow range of far-UV wavelengths seems to be safe for humans, while being lethal for viruses. Sterilization could become easy, routine and effective
To avoid any misunderstanding, UV light is, on the whole, incredibly dangerous, and people should never seek exposure to it. However, there is emerging evidence that a narrow range of far-UV wavelengths is safe for humans, while being lethal for viruses. If LEDs could be mass-manufactured with this sweet-spot in UV emission, explains Ironside, they could be integrated into everyday lighting and consumer technology for pandemic control. Sterilization could become easy, routine and effective, he says, preventing new infections while allowing many aspects of ordinary life to continue. “It could flatten the curve of new infections without so much social distancing,” he adds. Lockdowns may not be necessary.
Ironside is referring to his proposal as a “call to arms” for LED researchers and the semiconductor industry as a whole. But is it realistic?
Antique weaponry
For more than a century, UV light – which consists of photons of wavelength 200–400 nm – has been known to kill bacteria and viruses. As a result, it is already in our arsenal against COVID-19 – or more accurately against SARS-CoV-2, the novel coronavirus that causes the disease. UV floodlights are installed in hospitals to sterilize the air and horizontal surfaces, or inside trays to sterilize medical instruments. Handheld UV torches sterilize where floodlights cannot reach. In China, buses are even parked at night in UV-illuminated depots. Where UV facilities are not already installed, robotic trolleys carrying UV lamps are sent into rooms via remote control.
While effective, there are two major disadvantages with these technologies. The first is that the UV light is usually emitted by fluorescent tubes, which are big, fragile and unwieldy for all but specialist applications. The second, bigger disadvantage is the effect of UV radiation on humans.
Not safe for humans: UV light is already used for sterilization – for example in hospitals (left) and public transport (right). But currently these processes cannot safely be performed when humans are present, limiting their effectiveness in a situation such as a viral pandemic. (Courtesy: Shutterstock/Nor Gal; Sputnik/Science Photo Library)
We are all familiar with the first two bands of UV – UVA (315–400 nm) and UVB (280–315 nm) – as these are components of sunlight that filter through our atmosphere. Both cause sunburn – UVB more so – and, in the worst cases, skin cancer. But we rarely encounter UVC (200–280 nm), because it is absorbed by the Earth’s ozone layer. Not only does UVC cause terrible sunburn, it is also supremely effective at destroying DNA, making human exposure to it highly dangerous. Unfortunately, the wavelength emitted by UV fluorescent tubes is usually around 250 nm – right in the middle of the UVC band. For that reason, UVC germicidal lamps cannot be operated with anyone in their firing line, a fact that greatly limits their application in times of pandemics; after all, the risk of infection is greatest when people are living or working in close quarters, such as hospitals. In April the International Ultraviolet Association and RadTech North America – two educational and advocacy organizations consisting of UV equipment vendors, scientists, engineers, consultants and health workers – issued a joint statement to remind the public that there is no accepted safe means of exposing the human body to UV to kill viruses (see analysis box below).
But not all UVC wavelengths are equally damaging, as a group of researchers led by physicist David Brenner at Columbia University in New York, US, showed in 2017. Their research relied on an excimer lamp – a type of light tube containing molecules, or excimers, that can briefly exist in an excited electronic state before returning to their ground state, emitting UV radiation at a wavelength in the UVC band that depends on the molecules used. On exposing mice to 222 nm far-UVC light from a krypton-chlorine excimer lamp, Brenner and colleagues found no evidence of skin damage, even though they found that the same light was effective at killing the superbug MRSA (Radiat. Res. 187 493).
The result was corroborated a year later by Kouji Narita at the Hirosaki University Graduate School of Medicine in Japan and colleagues. This team also confirmed that the 254 nm emission of a conventional germicidal lamp did induce sunburn-like skin damage (PLOS One 13 e0201259). That same year, Brenner and colleagues found that 222 nm light is able to destroy airborne viruses as well. In their test, with an exposure of just 2 mJ/cm2, the far-UVC radiation safely inactivated more than 95% of airborne H1N1 influenza, the virus behind the 2009 swine flu pandemic (Sci. Rep. 8 2752). There is even evidence that far-UVC light is safe for the eyes: last year, Sachiko Kaidzu of Shimane University in Izumo, Japan, found no damage to the corneas of rats exposed to 222 nm radiation (Free Radic. Res. 53 611).
The reason for the lack of skin damage from far-UVC light, according to Brenner, is simply down to the range of absorption in biological materials (see box above). Being a shorter wavelength than other UVC light, far-UVC photons are barely able to penetrate the skin’s outermost layer of dead cells, which is often tens of microns thick. On the other hand, it can still easily penetrate bacteria and viruses, which are usually less than 1 μm thick. As Brenner said in a TED talk in late 2017, “I’m thrilled that we’ve now got a completely new weapon against superbugs” – and, he later noted, viruses. (Brenner did not respond to requests from Physics World for an interview.)
Ultraviolet: the invisible killer
A 32 nm difference: Mouse skin after irradiation with 254 nm UV light (top) shows DNA damage (marked by arrow). Mouse skin irradiated at 222 nm (bottom) does not show these lesions. (CC BY 4.0/PLOS One)
Physicians suspected as far back as the late 19th century that there is a link between skin cancer and Sun exposure. However, it was only in around 1940 that scientists first realized that cell mutations studied in the lab went hand in hand with levels of ultraviolet (UV) absorption by DNA, and therefore that it was specifically UV radiation to watch out for. Much later they would discover, via modern DNA sequencing techniques, that those very same mutations are present in actual skin tumours, cementing the UV–cancer link.
But how does UV damage DNA in the first place? DNA is made up of four nitrogen-containing bases, one of which is thymine. When a thymine molecule absorbs a UV photon, one of its electrons is promoted to an unfilled orbital, making the molecule very reactive. In this instance, it can bond to another thymine molecule, forming a dimer. A special protein is able to repair such damage – so long as it is not too extensive. If it is extensive, most of the time the cell containing the DNA simply dies and you get sunburn; but sometimes the damaged DNA causes the cell to become cancerous, and grow and divide uncontrollably. This is the basis of a tumour.
To be absorbed by the electrons in thymine, however, UV radiation has to actually reach the DNA, and some wavelengths of UV stand more chance than others. That is because the 5–20 µm thick outer layer of “dead” skin, known as the stratum corneum, contains only proteins – no DNA-containing nuclei. The absorption spectra of proteins are well known. Below a wavelength of 250 nm, their absorption of UV light rises rapidly: it takes some 3 µm of biological tissue to reduce the intensity of 250 nm UV radiation by half, but just 0.3 µm for the same attenuation of 200 nm far-UVC. According to David Brenner at Columbia University and colleagues, far-UVC is “drastically” attenuated before ever reaching the nucleus of a living cell, potentially making it safe for human exposure.
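Plugging those half-value depths into a simple exponential (Beer–Lambert-style) attenuation gives a feel for the difference; the 5 µm layer thickness assumed below is the thin end of the 5–20 µm range quoted above.

```python
# Sketch of how strongly the dead outer skin layer shields living cells,
# using the half-value depths quoted above (3 um at 250 nm, 0.3 um at 200 nm)
# and an assumed 5 um stratum corneum (the thin end of the 5-20 um range).

half_depth = {250: 3.0, 200: 0.3}   # depth (um) that halves the intensity
layer = 5.0                          # assumed stratum corneum thickness, um

for wavelength, d_half in half_depth.items():
    transmission = 0.5 ** (layer / d_half)
    print(f"{wavelength} nm: {transmission:.1e} of the light reaches live cells")
# Roughly 0.3 of the 250 nm light gets through a 5 um layer, but only ~1e-5
# of the 200 nm far-UVC - the "drastic" attenuation Brenner describes.
```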
Big impact
Brenner’s work has gained a lot of attention, with articles about it appearing in Time, Newsweek, the Wall Street Journal and CBS News, among other outlets. It is easy to see why far-UVC light could radically improve our ability to deal with viruses, including those for which we have no cure. If 222 nm excimer lamps could be installed in or alongside existing light fittings, they could operate more or less continuously in public spaces such as hospitals, schools, train and bus stations, and airports – as well as on trains, buses and aeroplanes themselves – without any risk of harming people. Viruses such as SARS-CoV-2 spread in the air: if that air is irradiated, the viruses find it much harder to reach new hosts. Even without social distancing, people could minimize infections.
But all that will be possible only if the safety of 222 nm UV is proven beyond doubt. Peter Setlow, a molecular biologist at UConn Health in Farmington, Connecticut, US, is among those who would like to see longer-term studies of the effect of far-UVC on the skin, as the studies conducted so far have relied either on single doses or exposures over just a few hours. “The question is, exactly how much 222 nm UV penetrates the dead cells in the skin to get to those that are alive and working?” he says. “It seems the answer is certainly ‘not much’, but ‘much’ is not an absolute term. To do risk assessments, you would probably need to perform experimental tests on animals over a longer time, and also establish, for example, whether hospital gowns provide any protection against this UV light.”
Ironside agrees that the safety of far-UVC for humans needs to be comprehensively proven before it can be routinely used. But if its safety can be shown (and Ironside assumes it will), there will still be the problem that excimer lamps are unwieldy, making them suitable only for stationary fittings. They are also a legacy technology.
That’s because over the past decade or so we have gradually seen incandescent and fluorescent lighting supplanted by LED-based lighting, which is cheaper, more efficient, more tunable, safer (because it runs at lower voltages) and longer lasting. This solid-state revolution has been brought about largely thanks to Isamu Akasaki, Hiroshi Amano and Shuji Nakamura, who in the early 1990s developed the first efficient blue LEDs, for which they were later awarded the 2014 Nobel Prize for Physics. The output of blue LEDs can be easily converted to white by adding a phosphorescent layer, making them convenient for all sorts of lighting applications, including the backlights of flat-screen monitors.
This is why Ironside sees LEDs as the most convenient source of far-UVC. “If we did produce a far-UVC LED that was proven to be safe [for humans] then I think it would make a big difference,” he says. “All you have to do is envisage what it would be like if there was an infection-control device available on every mobile phone now – the device could be used to sterilize surfaces and hands.”
“There will always be a period after a new pathogen has evolved, and before a vaccine is available, when the first line of defence is infection control,” he continues. “A far-UVC light LED will be a major component… It could also revolutionize personal-protection equipment for health workers.”
In principle, there is no reason why LEDs cannot be manufactured to emit at almost any wavelength, by adjusting the alloys of the semiconductors used. Gallium nitride (GaN), for example, which forms the basis of most commercial LEDs, has a band gap of about 3.4 eV, corresponding to near-ultraviolet emission at a wavelength of about 365 nm. The band gap of aluminium nitride (AlN), meanwhile, is about 6 eV, corresponding to a natural emission very deep in the UVC, at around 210 nm. As a result, Al-GaN LEDs emit light at wavelengths somewhere in between, depending roughly on the ratio of aluminium to gallium.
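As a rough illustration of how the alloy composition sets the emission wavelength, the sketch below converts band gap to wavelength via λ = hc/E and interpolates linearly between GaN and AlN. The linear mixing and the band-gap values are simplifying assumptions – real Al-GaN alloys show band-gap “bowing” – so the compositions it suggests are indicative only.

```python
# Rough band gap -> emission wavelength conversion (lambda = hc/E), with a
# simple linear interpolation between GaN and AlN as an illustrative stand-in
# for Al-GaN alloys. Real alloys deviate from linearity ("bowing"), so the
# compositions printed here are indicative only.

HC_EV_NM = 1239.8                     # h*c in eV*nm

def wavelength_nm(band_gap_ev):
    return HC_EV_NM / band_gap_ev

E_GaN, E_AlN = 3.4, 6.0               # approximate band gaps in eV

for al_fraction in (0.0, 0.25, 0.5, 0.85, 1.0):
    e_gap = (1 - al_fraction) * E_GaN + al_fraction * E_AlN
    print(f"Al fraction {al_fraction:.2f}: ~{wavelength_nm(e_gap):.0f} nm")
# In this crude picture an aluminium fraction of roughly 0.85 is needed to
# push emission down towards the 222 nm far-UVC sweet spot.
```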
Unfortunately, laboratory far-UVC LEDs of this kind have efficiencies of barely a few per cent – well below the 20–40% needed for practical use – which means that none of them has ever been commercialized. Despite LEDs becoming available at shorter and shorter wavelengths – with Nitride Semiconductors in Japan even offering one at 275 nm – most commercially available UV LEDs emit at UVA wavelengths of about 350 nm, where they find applications in the curing of adhesives and ink-jet printing.
Inside an LED
A light-emitting diode (LED) essentially consists of an “active” layer of semiconducting material sandwiched between negatively doped (n-type) and positively doped (p-type) semiconductors. When a voltage is applied to the junction, electrons from the n-type material move into the conduction band of the active layer, while holes from the p-type semiconductor are injected into the valence band. Light emission takes place when electrons at the bottom of the conduction band spontaneously recombine with holes in the top of the valence band in what are called quantum wells. The energy difference between the bands, known as the band gap, dictates the wavelength of the released photons.
Out of the darkness
Rob Harper, the GaN programme manager at Compound Semiconductor Centre (CSC) – a joint venture between the semiconductor wafer manufacturer IQE and Cardiff University in the UK – explains that one of the problems in reliably manufacturing efficient far-UVC LEDs is the doping of Al-GaN semiconductors with metals such as magnesium to make them slightly positive or “p-type”. When the dopants are added, they tend to leak into the light-emitting region of the LED, stifling light emission, he says. And when the material is grown via the industry-standard epitaxy technique of metal organic chemical vapour deposition, the high aluminium content itself degrades the crystalline structure. “Higher than usual epitaxial growth temperatures [can be used to] alleviate this, but result in increased incorporation of unintended impurities in the active region,” he says. “[These] also quench light generation and result in very low efficiency.” Another potential far-UVC alloy, magnesium zinc oxide (Mg-ZnO), suffers similar problems, he adds.
Still, Harper understands why the challenge is so important. “The current COVID-19 outbreak has painfully illustrated the need for new approaches to cost-effective, rapidly deployed, large-area disinfection techniques such as UVC irradiation,” he says. “Any research approach that shows potential to realize new practical p-doping techniques merits investigation.”
Harper did not disclose whether CSC plans to work on far-UVC LEDs. However, one researcher who has responded to Ironside’s call to arms is Tony Kelly, a former colleague and a “commercial turned academic” researcher in applied optoelectronics at the University of Glasgow in the UK. Within two days of being contacted by Physics World about the topic, he had already investigated potential funding avenues and was in the process of seeking collaborators for a far-UVC LED research project. “Often Charlie is right about things, and I think he’s right about this,” he says.
Like Harper, Kelly can foresee problems in making efficient devices. Pushing emission down to the far-UVC from where UV LEDs currently sit, he says, “could be a lot harder than it looks”. Still, he has reasons to be positive. Many funding agencies are urgently seeking projects whose results could mitigate the effects of the COVID-19 pandemic, and he expects such proposals to be fast-tracked. UK Research and Innovation, for example, currently has an open-ended call for proposals of any financial scale related to COVID-19 that could deliver results within 18 months. “As with everything, the potential impact is driven by current affairs,” says Kelly. He jokes that, ironically, the biggest delay could be getting back into his lab to build prototypes. Like almost every other academic institution, the University of Glasgow is currently [as of early May] closed to all but essential staff and researchers. Those conducting science relevant to COVID-19 fit into this category; nevertheless, the practicalities of running a clean room, introducing social-distancing measures and deciding who should be allowed in are issues that Kelly will have to resolve with his university’s administrators.
Although the investment in manufacturing tools and production lines for new semiconductor devices can be eye-watering, often running into billions, Kelly thinks this scale of investment may not ultimately be necessary for Al-GaN LEDs, as GaN is already an established commercial material. Instead, it will be a matter of adapting the fabrication plants that already exist. “If we find a design that works in a year, we could start raising finance to get on with things,” Kelly says.
Meanwhile, Ironside himself is not shirking the challenge. Although most of his research to date has focused on near- to mid-infrared LEDs, he is hoping to obtain joint funding with an industrial partner to explore the far-UVC potential of Mg-ZnO LEDs. He believes that success will be due to innovative physics combined with manufacturing expertise.
Naturally, this success is not guaranteed. But with governments across the world spending billions to keep their economies afloat, the incentive to find ways to avoid future chaos is as much commercial as humanitarian, and Ironside wants as many of his fellow researchers as possible to get involved. “As soon as I heard about Brenner’s work on the far-UVC, I thought it was an idea that was really worth pursuing,” he says. “I think the community should be aware of it.”
Analysis: Despite bad press linked to Donald Trump, UV light could combat future pandemics
By Matin Durrani, editor-in-chief, Physics World
Many of Donald Trump’s supporters – and surely all his detractors – would agree that he’s said and tweeted some pretty bizarre and controversial things during his time as US president. But even by his standards, Trump’s utterances in a press conference at the end of April were off the scale. Speaking to reporters in the White House, the 45th US president mused on whether ultraviolet (UV) light could tackle the spread of the virus behind the COVID-19 pandemic.
“Supposing we hit the body with a tremendous…ultraviolet or just very powerful light. And then supposing you brought the light inside the body, either through the skin or in some other way. Sounds interesting. The whole concept of the light, the way it kills the virus…that’s pretty powerful.” Unfortunately, while UV can kill viruses, certain frequencies are incredibly dangerous to humans. To make matters worse, Trump then wondered out loud if COVID-19 could be treated by injecting disinfectant into the body.
Trump is no scientist, despite once telling the Boston Globe that he shared the same “very good genetics” as his uncle John Trump, who was a physicist at the Massachusetts Institute of Technology for nearly five decades. Indeed, the president’s comments forced two US-based UV trade organizations to release a joint statement to remind the public that there is no accepted safe means of exposing the human body to UV to kill viruses. Dettol manufacturer Reckitt Benckiser also had to reiterate that “under no circumstance should our disinfectant products be administered into the human body”.
It’s long been known that UV kills viruses by damaging their genetic material – indeed, UV light from fluorescent tubes is already used in hospitals to sterilize equipment and surfaces. Trouble is, the light from these tubes – at around 250 nm, right in the middle of the UVC band – also damages cellular DNA and can trigger cancer. People are therefore not permitted anywhere near UVC germicidal lamps without suitable protection.
However, there has been recent research (see above) suggesting that a small band of far-UV light (at roughly 220 nm) can damage viruses yet still be safe for humans. That raises the prospect of far-UV being used to kill viruses in hospitals, trains, shops and other places even where people are present. There’s a long way to go before this notion becomes reality. Apart from fully corroborating that far-UV is safe, we’d also need a simpler way to create this light than the large, unwieldy “excimer lamps” that are currently used.
That’s why some physicists are calling for research into far-UV LEDs, which would be much smaller and could turn everyone’s mobile phone into a virus-buster. It would be a shame if Trump’s ramblings on UV ended up discrediting a potentially promising avenue of physics-based research that could help us avoid another pandemic.
Models of disease spread inform governments on when and how to ease the measures currently in place to contain COVID-19. But physicist Susanna Manrubia, an expert in modelling biological phenomena at the Spanish National Centre for Biotechnology in Madrid, is alarmed by the precise predictions reported by some models. In response, she and her colleagues have highlighted the uncertainties in predicting the peak and end of the pandemic in an e-print currently under peer review.
“We are really worried about the limitations of modelling and thought media predictions were getting out of hand,” explained Manrubia.
The e-print is available on Cornell University’s arXiv server. In it, Manrubia and colleagues describe how they used a simple model to expose COVID-19 forecasting uncertainties, and demonstrate that uncertainty is intrinsic to a pandemic’s exponential growth pattern. They conclude that more realistic weather-like forecasts, transparent about uncertainties, should be used in portraying COVID-19 predictions to the public and governing authorities. “Only probabilistic prediction is feasible and reliable,” states Manrubia.
David Dowdy, a clinician who researches infectious disease epidemiology at Johns Hopkins University in the US, and Samir Bhatt, an expert in modelling infectious diseases from Imperial College London, were not involved in the study, but agree that presenting uncertainty is vital. “With COVID, I don’t feel like any model should be making projections more than a few weeks in advance,” says Dowdy. “There are too many unknown factors beyond that. And if you’re going to do it, you have to do it in a way that suggests the great uncertainty.”
Modelling disease spread
Many approaches used to estimate the future stages of infectious disease spread – and to quantify the impact of social-distancing measures aimed at “flattening the curve” – are based on simple models. These models include limited mechanistic detail, simply simulating the exponential growth of an epidemic using a set of differential equations, with the population described as being in three distinct categories – susceptible, infectious or recovered (SIR). There are various iterations of these SIR models, incorporating different categories such as quarantined or infected but asymptomatic.
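For readers unfamiliar with the approach, the sketch below integrates a minimal SIR model of exactly this kind; the infection and recovery rates are illustrative choices, not values fitted to any real outbreak.

```python
# A minimal SIR model of the kind described above: three coupled ODEs for the
# susceptible, infectious and recovered fractions of the population.
# Parameter values are illustrative only.

import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    s, i, r = y
    return [-beta * s * i,             # new infections remove susceptibles
            beta * s * i - gamma * i,  # infections grow, recoveries shrink I
            gamma * i]                 # recovered (or removed) individuals

beta, gamma = 0.35, 0.1                # assumed infection and recovery rates
sol = solve_ivp(sir, (0, 180), [0.999, 0.001, 0.0],
                args=(beta, gamma), dense_output=True)

t = np.linspace(0, 180, 7)
print("day  infectious fraction")
for day, i in zip(t, sol.sol(t)[1]):
    print(f"{day:4.0f}  {i:.3f}")
```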
To highlight the limitations of such epidemic modelling, Manrubia and colleagues selected Spain’s COVID-19 epidemic as a case study and applied a variant of the SIR model, called SCIR, that includes a category describing the reversible confinement of susceptible individuals. The team used a Bayesian approach to fit the data probabilistically: the model parameters (factors defining the outbreak, such as infectivity) were assigned prior probability distributions, which were then updated by fitting the model to the recorded data.
The result was a posterior distribution of parameter values that accurately reproduced the daily data recorded in Spain between 28 February and 29 March.
Sensitive to small variations
“We could recover data of the past nicely but see a sensitivity to small variation in the parameters that causes a spread of trajectories,” says Manrubia. Many of these possible trajectories indicated a reduction in active COVID-19 cases – “flattening the curve” – but others bent towards continued exponential growth of Spain’s epidemic. “There, your prediction is somehow lost in the sense that, you can only say probabilistically speaking, whether there will be a peak in three days or not.”
Attempts to add precision to these models often involve including more categories, but Manrubia points out that this practice adds parameters, thereby multiplying the potential for variation.
Bad data is often blamed for a prediction’s uncertainty, but Manrubia and colleagues argue that it’s not just about the data. They illustrate this by directly integrating the SCIR model to generate synthetic data, and then fitting the same model to this “perfect data set”. A spread in future trajectories was still observed, and the team explain that this is due to the intrinsic dynamics of models with exponential growth in their variables, such as an epidemic.
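The underlying point can be illustrated with even simpler arithmetic: during a phase of exponential growth, perturbing the fitted growth rate by a few per cent – far less than any realistic fitting uncertainty – changes a one-month forecast dramatically. The numbers below are purely illustrative.

```python
# Sketch of the intrinsic sensitivity the authors describe: in a period of
# exponential growth, tiny changes in the fitted growth rate produce wildly
# different forecasts a few weeks ahead. Numbers are illustrative only.

import numpy as np

cases_today = 10_000
growth_rate = 0.20                       # best-fit exponential rate per day
horizon = 30                             # forecast horizon in days

for delta in (-0.02, 0.0, +0.02):        # +/- 10% perturbation of the rate
    forecast = cases_today * np.exp((growth_rate + delta) * horizon)
    print(f"rate {growth_rate + delta:.2f}/day -> "
          f"{forecast:,.0f} active cases in {horizon} days")
# The three forecasts differ by nearly a factor of two each way after just
# one month, which is why only probabilistic predictions are meaningful.
```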
“Forecasting the future is very challenging. Even with perfect data, it doesn’t give information about who is going to get infected in the future,” says Bhatt. He points out that the inherent unpredictability of the future isn’t a new finding in itself, but acknowledges the value of highlighting uncertainties in the current crisis.
Dowdy is also in favour of this issue being broadcast to a wider audience. “[Uncertainty in forecasts] is an issue that is well-known in the scientific community, but not well-known in the lay community.”
Presenting the probabilities
Manrubia is frustrated by models that extrapolate from the single parameter value that best fits past data, thereby failing to show that many other predictions are possible. “We are trying to get more care from those doing models, and get the message up to the authorities because there is so much noise they need to be aware of.”
“It’s fine to be wrong (forecasting models are by definition going to be wrong because they are making assumptions), it’s the communication that is absolutely essential,” says Bhatt. “Some scientists need to be transparent about the limitations of models and what they tell us, and this paper nicely discusses that.”
Manrubia hopes this crisis will spur the global epidemiology community to reliably integrate data across the globe. “SIR models are really simple – they’ve been around for a century – come on. I’m certain that we can do better.”
Most metals expand when heated and contract when cooled. A few metals, however, do the opposite, exhibiting what’s known as negative thermal expansion (NTE). A team of researchers led by Ignace Jarrige and Daniel Mazzone of Brookhaven National Laboratory in the US has now found that in one such metal, yttrium-doped samarium sulphide (SmS), NTE is linked to a quantum many-body phenomenon called the Kondo effect. The work could make it possible to develop alloys in which positive and negative expansion cancel each other out, producing a composite material with a net-zero thermal expansion – a highly desirable trait for applications in aerospace and other areas of hi-tech manufacturing.
Even within the family of NTE materials, yttrium-doped SmS is an outlier, gradually expanding by up to 3% when cooled over a few hundred degrees. To better understand the mechanisms behind this “giant” NTE behaviour, Mazzone and Jarrige employed X-ray diffraction and spectroscopy to investigate the material’s electronic properties.
The researchers carried out the first experiments at the Pair Distribution Function (PDF) beamline of Brookhaven’s National Synchrotron Light Source II (NSLS-II). They placed their SmS sample inside a liquid-helium-cooled cryostat in the synchrotron X-ray beam and measured how the X-rays scattered off the electron clouds around the atomic ions. By tracking how these X-rays scatter, they identified the locations of the atoms in the crystal structure and the spacings between them.
“Our results show that, as the temperature drops, the atoms of this material move farther apart, causing the entire material to expand by up to 3% in volume,” says Milinda Abeykoon, the lead scientist on the PDF beamline.
The second set of experiments took place at the SOLEIL synchrotron in France and the SPring-8 synchrotron in Japan. These measurements used X-ray absorption spectroscopy to track whether electrons move into or out of the outermost (valence) shell of the Sm atoms, which is just under half full, Jarrige explains. The researchers found that the electrons flowing through the SmS metal were indeed travelling into the valence shell, causing the entire material to expand as each atom’s electron cloud grew to accommodate additional incoming electrons.
Sm atoms act as tiny magnetic impurities
According to the researchers, this behaviour is explained by the Kondo effect, which describes how conduction electrons interact with magnetic impurities in a material. During such interactions, electrons align their spins in a way that opposes the spin of the larger magnetic particle, effectively “screening out” or cancelling its magnetism.
In SmS, the just-under-half-full valence shell of each Sm atom acts as a tiny magnetic impurity that points in a certain direction, explains Maxim Dzero, a theoretical physicist at Kent State University, who collaborated with Jarrige and Mazzone. Since SmS is a metal, it also contains mobile conducting electrons that can approach and cancel out the magnetic moment of the impurity. The electrons can then move into the valence shell, filling it up and leading to the material’s expansion.
Exploring other rare-earth based materials
Dzero’s calculations suggest that the amplitude of this NTE can be tuned by varying the amount of yttrium doping, although Mazzone notes that this hypothesis needs to be tested further. The researchers also predict that two other rare-earth metals, thulium and ytterbium, should exhibit Kondo effect-driven NTE, and Mazzone says it would be interesting to see whether the magnitude of the expansion is as large as it is in SmS. In the other rare-earth materials studied so far, the degree of negative thermal expansion is subtle at best, he adds.
The present research, which is detailed in Physical Review Letters, could be important for developing composite materials that have near-zero thermal expansion. Such materials would contain alloys that expand on one side and shrink on the other as they cool, keeping the overall size the same. Materials of this type could be used to make the metal parts in aeroplane wings, which routinely contain composites or alloys with opposite expansion properties to prevent dangerous shrinkage in the low temperatures found at high altitudes. Other applications include temperature-stable contacts in microelectronics devices undergoing thermal cycling and substrate materials for mirrors in telescopes and satellites.
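As a rough illustration of the design principle, a simple rule-of-mixtures estimate, sketched below, gives the volume fraction of an NTE component needed to cancel the expansion of a conventional matrix. This is an idealization, and the expansion coefficients used are placeholders rather than measured values for yttrium-doped SmS or any particular alloy.

```python
# Illustrative rule-of-mixtures estimate of the volume fraction of an NTE
# component needed for a composite with zero net thermal expansion.
# Coefficients are placeholders, not measured values for any real material.

alpha_normal = +20e-6    # assumed expansion coefficient of the matrix, 1/K
alpha_nte    = -60e-6    # assumed coefficient of the NTE filler, 1/K

# Zero net expansion: f*alpha_nte + (1 - f)*alpha_normal = 0
f = alpha_normal / (alpha_normal - alpha_nte)
print(f"NTE volume fraction for zero net expansion: {f:.2f}")   # 0.25
```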
The NTE coefficient that the Brookhaven and other groups have reported for yttrium-doped SmS is large enough to meet even the most demanding technology applications, but Jarrige admits that the high costs of Sm could limit its deployment.
“This is why it will be important to explore other rare-earth based materials to better understand how the Kondo effect can trigger dramatic macroscopic changes in materials,” he tells Physics World. “These studies could help us find a cheaper alternative to Y-doped SmS.”
A commercial optical surface tracking (OST) system can monitor a patient’s position with submillimetre accuracy during radiotherapy, according to tests conducted at Maastro Clinic in the Netherlands. With this system, non-coplanar single isocentre stereotactic radiosurgery (SRS) treatments of multiple brain metastases are feasible and safe.
Progress in linac-based treatments of brain metastases has led to the use of volumetric-modulated arc therapy (VMAT) techniques, preferably using a single isocentre to make dose delivery faster and more efficient. It is imperative, however, to reduce the GTV–PTV (gross tumour volume–planning target volume) margin to 1 mm to minimize or prevent radionecrosis and the development of debilitating side effects. And to preserve this 1 mm GTV–PTV margin, the patient needs to be monitored in real time to maintain accurate positioning.
OST systems for radiotherapy use advanced technologies to precisely monitor a patient’s external surface. In addition to reducing setup errors, OST can provide continuous intra-fractional motion surveillance during treatment, without the use of ionizing radiation.
The research team evaluated a three-camera Catalyst HD system, which uses LEDs to project three wavelengths of light onto the patient and a CCD camera to detect the reflected light. The system uses the reflected signals to generate a real‐time 3D surface of the patient, which is compared to a reference surface to verify setup. The system can be used for different skin tones, by employing individualized camera settings for gain and saturation.
For the study, lead author Ans Swinnen and colleagues used a TrueBeam STx linac equipped with a high-definition multileaf collimator and a six-degrees-of-freedom couch. To evaluate setup accuracy, they compared the isocentre shifts calculated by the OST system with the ones suggested after image verification with the on-board kilovoltage imaging system, at couch angles of 0° and 270°. Deviations between the isocentre shifts in rotational and translational directions were within 0.2° for both couch positions, and within 0.1 and 0.5 mm at 0° and 270°, respectively.
The researchers also performed film measurements at three depths in a Rando-Alderson phantom using a single isocentre non-coplanar VMAT plan containing four brain lesions. They report that dose deviations between the film-measured and treatment planning system-predicted doses in the centres of the four target lesions were –1.2%, –0.1%, 0.0% and –1.9%.
To verify that the OST system could accurately visualize a patient at various couch angles, the researchers used a mannequin training head in an open-face mask. They then tested the OST system on seven volunteers, each of whom was monitored three times to represent three consecutive treatment fractions. They collected setup data to evaluate the accuracy and reproducibility of the OST system at couch rotation angles of 0°, 45°, 90°, 315° and 270°.
When the volunteers were tested, the rotational drift of the couch was affected by their individual weights, as well as by the position of the couch relative to the couch pedestal. The mean translational isocentre shifts for the seven volunteers were less than 0.6 mm. The largest isocentre displacements recorded by the OST system were in the lateral and longitudinal directions, with the couch positioned at 45°.
Because of this, the researchers recommend that a regular Winston-Lutz (WL) test (a procedure for verifying the linac isocentre) should be part of the SRS-specific linac quality assurance programme. They are in the process of developing a practical WL test-based phantom that integrates measurement of the isocentre congruence of imaging systems, radiation beam and couch rotation axis with submillimetre accuracy, and measurements of the isocentricity of the OST system.
Additional research will focus on testing similar OST systems for brain tumour patients treated with proton beam therapy on a Mevion S250i accelerator. For this purpose, the radiotherapy centre has installed a four-camera Catalyst HD system to monitor a patient on the robotic couch moving in and out of the in-room cone-beam CT scanner and towards the different treatment positions. A proton-compatible open-face mask has been manufactured and will be tested in the near future.
“As the accuracy in patient setup during the various couch movements may be even more important in brain treatments with proton therapy, we expect the potential of this intra-fraction monitoring system to be even higher,” the researchers write.
Collections of essays often have an uncertain status. In the absence of any overarching message, they stand or fall on the quality of the writing and the author’s ability to offer regular jolts of insight and entertainment. When (full disclosure) David Kaiser told me last summer that he was putting the finishing touches to his anthology – Quantum Legacies: Dispatches from an Uncertain World – mostly reworked from pieces published already, I had little doubt that it would meet that challenge.
Both a physicist and a historian of science, Kaiser has shown himself for years now to be an astute chronicler of the subject: able to explain arcane phenomena with a deft touch, as well as to weave the deeper context around the discoveries and developments of the discipline. What sets him apart from most popularizers is that he does not simply head for the flashiest topics but writes eloquently about the real-world business that engages most physicists: funding streams and industrial support, intellectual fashions and reputations, teaching and publishing.
One of the standout chapters might sound unpromising on paper: it concerns the changing nature of physics textbooks, specifically those teaching quantum mechanics. But within it is a metaphor for the entire discipline. In the 1920s and 1930s – while theoretical physics as a formal subject was still relatively young and quantum mechanics was nascent – there was a sense that students were learning a kind of artisanal craft, passed down through mentorship as “a great adventure in human understanding”, as J Robert Oppenheimer wistfully recollected. Quantum mechanics was then considered to demand engagement with philosophical, as much as mathematical, problems. In the 1930s, students at Caltech were expected to answer questions about quantum foundations that are still unresolved, and are a cause of furious arguments today: “What is [the] interpretation of ψ(x)? Discuss the nature of observation in quantum mechanics.”
But by the 1950s it was a different story. The textbooks had no time for such frippery; you just had to learn the techniques and apply them – famously, to shut up and calculate. “Where once-fabled teachers like Oppenheimer had relished talking through thorny conceptual challenges with small groups of students,” Kaiser writes, “instructors after the war – their intimate classrooms by then replaced by large lecture halls… – increasingly aimed to train quantum mechanics: skilled calculators of the atomic domain.” Richard Feynman’s famous books of lectures, while rightly revered for their clarity and panache, exemplify that no-nonsense attitude. Interpretational issues, Feynman wrote in the lecture notes to his graduate course on quantum mechanics, were “in the nature of philosophical questions” – and by implication, not important, or at least “not necessary for the further development of physics”.
And this utilitarianism, as Kaiser points out, echoed the almost factory-like over-production of physics graduates: a bubble inflated by fears of Soviet competition, which burst around 1970 when funding plummeted. Stimulated by Science, the Endless Frontier (1945), Vannevar Bush’s famous manifesto for the value of basic science in supporting economic growth and national security, this post-war drive to train physicists who could “get the job done” relied largely on defence funding. It was fuelled by the notion that the Second World War had been “the physicists’ war” – a phrase coined for other reasons, but which became attached to the idea that victory had hinged on the Manhattan Project.
As far as quantum mechanics was concerned, Kaiser’s splendid 2011 book How the Hippies Saved Physics (Physics World Book of the Year 2012) recounts how a small group of counter-culture rebels in California during the 1970s revitalized interest in the philosophical foundations, revisiting the arguments of Bohr, Einstein, Schrödinger and Heisenberg over what the remarkably effective formalism actually said about reality. All the same, until only a decade or two ago it was frowned upon for a young researcher to take an interest in such questions. Now quantum information technologies are showing that those questions were not, after all, irrelevant to the practical concerns of engineers.
Kaiser is also good at teasing out how such sociological considerations have influenced the questions physicists ask and the reception of the answers they give. In one chapter he explains how the gulf between particle physics and cosmology in the 1970s hindered an appreciation of the link between the alternative gravitational theory of Robert Dicke and Carl Brans, and the Higgs field proposed by Peter Higgs – an idea that now suggests a role for the latter in cosmic inflation. “Contours of intellectual life can be reshaped by rapid changes in institutions and infrastructure,” says Kaiser, “ultimately shifting the boundaries of what young physicists come to find compelling or worth pursuing.” That can never be stressed enough: however obvious the questions might seem, they will have been socially selected and moulded.
It is a breath of fresh air to see physics writing like this: lucid and friendly, sober and thoughtful, and willing to trust the reader’s engagement and intelligence rather than demanding the former and underestimating the latter. It’s also the case – probably inevitable in such a collection – that Kaiser is not always writing for the same audience. The description of the symmetries of the Standard Model, for example, is superb popular science (“teeming collections of atoms, which are mostly empty space, their subatomic constituents acquiring heft from the symmetry-preserving whirl of a gluonic quantum dance”) – but speaks to a different demographic from the accounts of post-war physics training or the fate of the Superconducting Super Collider. The book is also not without a little of the overlap that often results from such patchwork assembly. Nonetheless, it is hard for me to imagine any physicist who wouldn’t enjoy the fine cloth from which it is cut, or the pleasing effect it makes.