The influence of science and scientists on US government policy is being downgraded by the administration of US president Donald Trump, according to the physicist and former science adviser John Holdren. However, Holdren is hopeful that both the US Congress and private industry will reject some of Trump’s planned cuts to science funding.
Holdren served as presidential science adviser and head of the Office of Science and Technology Policy (OSTP) during Barack Obama’s presidency. He told Physics World that it was still unclear whether Trump will appoint a science adviser and senior OSTP staff, who must be approved by the Senate. “I think it would be a serious error if the president does not create a very serious science and technology capability in the White House,” he says.
He believes that the Silicon Valley entrepreneur Peter Thiel is currently advising Trump on science and technology. “It’s pretty clear he knows a lot more about technology than about science. There’s a rumour that the OSTP may be more technology-heavy than science-heavy.”
Pretty lonely
While OSTP leadership appointments are on hold, Holdren says that “one member of the Trump ‘landing team’, Michael Kratsios, has been working very hard to understand the role of science adviser”. “I’m told that he has a good idea of the role. But will he have the ear of the Trump administration? He may be pretty lonely.”
Holdren says that one week before the presidential inauguration in January a member of Trump’s campaign team visited the OSTP. “We spent an hour talking about the functions of the OSTP and the science adviser. We had prepared a very detailed transition book with documentation of all the OSTP’s responsibilities, which we handed over. That was the last we heard.”
Major setback
Holdren fears that the influence of scientists at the White House is waning. He says that Trump’s proposed budget in March “shows no sign of any significant input from anyone who understands science’s role in making recommendations on government policy for government agencies’ science and technology budgets”. Holdren hopes that the budget will be rejected by the US Congress, and calls the plan “a major setback for research, climate science, energy research – just devastating cuts for government support in domains where companies aren’t involved”.
Holdren points out that the proposed cuts will affect programmes that have direct benefits to American society – citing the $6bn drop in the budget of the National Institutes of Health and proposed cuts to NASA’s Earth-observation programmes.
However, Holdren believes that initiatives in science education and the Obama climate action plan both have strong industry support. Although under threat, he says these programmes are likely to survive. “The majority of technology companies accept that climate change is real and we need to do something about it,” he says, adding that the US must ensure that it remains competitive in the development of climate-friendly technologies.
An interview with Holdren will appear in the May issue of Physics World.
Exoplanet searchers named among 100 Most Influential
Far-reaching: Natalie Batalha is not just influencing space exploration. (Courtesy: NASA)
Three physicists have made TIME magazine’s 100 Most Influential People for 2017. Now in its 14th year, the list highlights the individuals who have the most impact worldwide, rather than those who are simply the most popular or famous. Listed together within the pioneers category (rather than individually, like most others), the three physicists honoured by TIME are astronomers searching for exoplanets. Natalie Batalha is the lead scientist for NASA’s Kepler space telescope and is the first woman at NASA to make the list. Her work includes the mission’s first confirmation of a rocky planet outside the solar system, and she has identified more than 5100 possible exoplanets over her career. Also honoured are Guillem Anglada-Escudé of Queen Mary University of London in the UK, who discovered an exoplanet orbiting our closest neighbouring star, Proxima Centauri, and Michaël Gillon of the University of Liège in Belgium, who announced in February the discovery of seven Earth-sized planets orbiting the star TRAPPIST-1. Other scientists on this year’s list include artificial-intelligence researcher Demis Hassabis and Guus Velders, an atmospheric chemist.
Ringdown could reveal black-hole hair
Hair-raising: ringdown could reveal black-hole baldness. Data from the two LIGO detectors showing gravitational waves from the first-ever detection of a binary black hole merger. The lower-amplitude signals on the right correspond to ringdown. (Courtesy: LIGO)
A careful study of data from the LIGO gravitational wave detectors could reveal whether black holes have “hair” – physical properties other than mass, angular momentum and electrical charge. Einstein’s general theory of relativity says that black holes have no hair – they are “bald” – but making the observations needed to confirm this is extremely difficult. Now, physicists in the US and Canada have calculated that information about black-hole hair could be extracted from the gravitational waves that are created just after two black holes merge to form one larger black hole. The new black hole begins its life as a rotating distorted sphere that changes shape until it becomes a sphere in a process called ringdown. If black holes are bald, the gravitational waves emitted during ringdown should be as expected for a black hole with specific values of mass, angular momentum and electrical charge. Any deviation would point to the existence of hair. While a LIGO measurement of the ringdown of an individual black hole is too noisy to provide a definitive answer, Huan Yang of Princeton University and colleagues have worked out that ringdown data from a number of different black holes could be combined to reveal the presence of hair. Writing in Physical Review Letters, they say that the answer could come after one year of observation time once the LIGO detectors have been upgraded to their ultimate design sensitivities.
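The statistical gain from combining events can be illustrated with a toy calculation. The numbers and the simple averaging below are illustrative assumptions, not the authors’ actual analysis: the point is only that the uncertainty on a stacked ringdown parameter shrinks roughly as the square root of the number of events.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: each black-hole ringdown yields a noisy estimate of the
# dominant oscillation frequency (values arbitrary).  A "no hair"
# black hole should ring at the frequency predicted from its mass and
# spin; a hairy one would deviate.  A single event is too noisy to
# tell the difference, but stacking many events shrinks the error.
f_gr = 250.0          # hypothetical no-hair prediction (Hz)
sigma_single = 20.0   # per-event measurement noise (Hz)

def stacked_estimate(n_events):
    """Combine n independent noisy ringdown measurements."""
    measurements = f_gr + sigma_single * rng.standard_normal(n_events)
    return measurements.mean(), sigma_single / np.sqrt(n_events)

f1, err1 = stacked_estimate(1)       # one event: +/- 20 Hz
f100, err100 = stacked_estimate(100) # a hundred events: +/- 2 Hz
print(f"1 event:    f = {f1:6.1f} +/- {err1:.1f} Hz")
print(f"100 events: f = {f100:6.1f} +/- {err100:.1f} Hz")
```

A tenfold reduction in uncertainty, as in this sketch, is the kind of gain that could turn an inconclusive single-event measurement into a meaningful test.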
Extra-terrestrial life search comes up cold
A year-long search for signals from alien civilizations has yet to find any evidence for the existence of intelligent life on other planets. Funded by the physicist and billionaire investor Yuri Milner, the Breakthrough Listen initiative has acquired several petabytes of data using the Green Bank Radio Telescope in West Virginia, Lick Observatory’s Automated Planet Finder in California and the Parkes Radio Telescope in Australia. These data are being analysed by researchers at the SETI Research Center at the University of California, Berkeley, who are scanning through billions of radio channels in search of unique signals that might indicate the presence of technology developed by extra-terrestrial civilizations. The team has now released an analysis of data from the Green Bank telescope, which identifies 11 “events” in the 1.1–1.9 GHz band that have the highest likelihood of being associated with alien technologies. These are signals with features, such as a narrow bandwidth or certain patterns of pulsing or modulation, that are not expected from astronomical sources. However, further detailed analysis of the 11 events suggests that it is unlikely that any of them were created by the technology of a distant civilization. “Although the search has not yet detected a convincing signal from extra-terrestrial intelligence, these are early days,” said Berkeley’s Andrew Siemion. “The work that has been completed so far provides a launch pad for deeper and more comprehensive analysis to come.”
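The core idea of flagging narrowband channels can be sketched in a few lines. The sample rate, injected tone and threshold below are invented for illustration; Breakthrough Listen’s real pipeline operates on far finer channels and applies many additional filters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy narrowband search: natural astrophysical emission is broadband,
# so power concentrated in a single frequency channel is a candidate
# "event".  Inject a weak tone into white noise and look for channels
# standing far above the noise floor.
fs = 4096                                    # samples over a 1 s window
t = np.arange(fs) / fs
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)    # narrowband "signal" at bin 1000
data = rng.standard_normal(fs) + tone

spectrum = np.abs(np.fft.rfft(data)) ** 2    # power per frequency channel
noise_floor = np.median(spectrum)            # median is robust to the tone
candidates = np.flatnonzero(spectrum > 50 * noise_floor)
print("candidate channels:", candidates)
```

Even a tone buried well below the time-domain noise stands out clearly in a single frequency bin, which is why radio SETI searches channelize their data so finely.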
You can find all our daily Flash Physics posts in the website’s news section, as well as on Twitter and Facebook using #FlashPhysics. Tune in to physicsworld.com later today to read today’s extensive news story on science policy and the Trump presidency.
Printed electronics are viewed as a cost-effective and scalable route to new technologies, although the performance of these devices tends to rely less on the printer and more on the ink. Researchers at Trinity College Dublin, in collaboration with scientists at Delft University of Technology and Toyota Motor Europe, have now fabricated vertically stacked thin-film transistors (TFTs) from dispersions of two-dimensional nanosheets. Combining high performance with ease of manufacture, these nanosheet-based TFTs have the potential to compete with organic and nanotube-based electronics.
The nanosheet inks are made using a process known as liquid-phase exfoliation, where layered materials in bulk form are broken down and dispersed in liquids. Over the last 10 years this has become an established method for efficiently producing a whole library of two-dimensional materials. Due to the widely varying properties of these nanosheets, every component in the TFT can be printed: conducting graphene nanosheets are used for the electrodes, semiconducting transition-metal dichalcogenides such as molybdenum disulphide or tungsten diselenide form the channel, and a boron nitride (BN) dielectric layer acts as a separator.
When printed, the inks form porous nanosheet networks (PNNs) that can be deposited layer-by-layer. Their high porosity allows for the use of electrolytic gating, in which a liquid electrolyte contained within the network acts as the gate dielectric. When a gate voltage is applied, ions in the liquid accumulate at the boundary between the electrolyte and the active material, forming an electrostatic double layer that behaves like an extremely thin capacitor and modulates the current flowing through the channel.
Nanosheet networks printed in vertical stacks function as thin-film transistors.
The research team, led by Adam Kelly and Toby Hallam, measured the electrical transport characteristics of different PNNs using a simple set-up involving gold electrodes and an ionic liquid electrolyte (Science 356 69). They found that the transconductance, a measure related to the gain a transistor can deliver, is directly proportional to the thickness of the nanosheet network. This allows the electrical characteristics of a device to be tuned via its printing conditions, with transconductance values as high as 6 mS reported by the team.
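A linear scaling of this kind makes device design straightforward: fit the slope once, then read off the thickness needed for a target gain. The values below are illustrative numbers invented for the sketch, not the paper’s measured data.

```python
import numpy as np

# Sketch of the reported scaling: transconductance g_m grows linearly
# with printed network thickness, so the number of print passes sets
# the device gain.  All values here are illustrative.
thickness_um = np.array([1.0, 2.0, 4.0, 8.0])   # printed thickness (um)
g_m_mS = np.array([0.75, 1.5, 3.0, 6.0])        # transconductance (mS)

k = np.polyfit(thickness_um, g_m_mS, 1)[0]      # slope, mS per micron

def thickness_for(target_mS):
    """Network thickness needed to reach a target transconductance."""
    return target_mS / k

print(f"{thickness_for(6.0):.1f} um needed for 6 mS")
```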
Large capacitance values were also measured for thick PNNs, due to the large amount of free volume available for ion adsorption. This gives these devices transport properties similar to those of benchmark TFTs, though such high capacitances do hinder switching times. Fortunately, the researchers believe that further experimentation with the ionic liquid electrolyte will enhance switching speeds.
Building on these results, the team built a fully functional TFT using only porous nanosheet networks: graphene electrodes, a tungsten diselenide channel and a BN separator. These vertically stacked devices have on:off ratios of more than 25 and a transconductance of 22 μS. Such transfer characteristics are promising for devices that are still in the early stages of development, and further improvements should be possible in the future.
In February 2016 researchers at the Advanced Laser Interferometer Gravitational-wave Observatory (aLIGO) in the US announced a ground-breaking discovery – on 14 September 2015 they had made the first ever direct detection of gravitational waves. After decades of trying to observe these ripples in space–time, the scientists had at last addressed the final unverified prediction of Einstein’s general theory of relativity. Success was quickly followed by success and a few months later a second detection was reported.
In both cases (called GW150914 and GW151226 respectively), as well as a less statistically significant event (LVT151012), the gravitational waves were produced by two stellar-mass black holes in a binary orbit that merged to form one larger black hole. While the detection events are a significant breakthrough, they are still shrouded in mystery. “Previous to this, we never observed a black hole binary system, which leads to the natural question – how did these come to be?” says LIGO scientist Amber Stuver, who was not involved in the current work.
So far, several scenarios have been proposed but they struggle to explain all observed events under one framework. Now, however, scientists at the University of Birmingham in the UK and the University of Amsterdam in the Netherlands have developed a model that can describe all three events via one evolutionary path.
Close together
Before LIGO’s detections, it was thought that stellar-mass binary black-hole systems would either not form at all or, if they did, they would be too far apart to merge within the age of the universe. For two black holes to merge within the age of the universe, they have to begin very close together by astronomical standards – no more than a fifth of the distance between the Sun and Earth. But black holes are produced by massive stars that expand to be much larger than this distance during their stellar evolution.
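The separation constraint can be checked with the standard Peters (1964) formula for the gravitational-wave inspiral time of a circular binary; the roughly 30-solar-mass black holes assumed below are illustrative, chosen to be comparable to GW150914.

```python
# Back-of-envelope check of the separation constraint using the
# Peters (1964) merger time for a circular binary:
#   t_merge = 5 c^5 a^4 / (256 G^3 m1 m2 (m1 + m2))
# The a^4 dependence is why the initial separation matters so much.
G = 6.674e-11        # gravitational constant (m^3 kg^-1 s^-2)
c = 2.998e8          # speed of light (m/s)
M_SUN = 1.989e30     # solar mass (kg)
AU = 1.496e11        # astronomical unit (m)
YEAR = 3.156e7       # seconds per year

def merger_time(a, m1, m2):
    """Gravitational-wave inspiral time for a circular orbit (seconds)."""
    return 5 * c**5 * a**4 / (256 * G**3 * m1 * m2 * (m1 + m2))

m = 30 * M_SUN
t_gyr = merger_time(0.2 * AU, m, m) / (YEAR * 1e9)
print(f"merger time at 0.2 AU: {t_gyr:.1f} Gyr")
```

For two 30-solar-mass black holes starting a fifth of an Earth–Sun distance apart, the inspiral takes of order 10 billion years, just inside the age of the universe; any wider and they would never merge in time.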
To solve this problem Simon Stevenson from Birmingham’s Gravitational Wave Group and colleagues developed a simulation platform called Compact Object Mergers: Population Astrophysics and Statistics (COMPAS). “It is a tool for both predicting the evolution of massive stellar binaries and statistically comparing these predictions against observations,” explains team member Ilya Mandel.
Using COMPAS, the group propose an “isolated binary evolution via a common envelope phase”. This means that two massive stars begin with a quite wide separation. As these stars evolve and expand over time, they interact and undergo several episodes of mass transfer, the last of which is called a “common envelope”. This is a very rapid, unstable transfer that envelops both stellar cores in a dense cloud of hydrogen gas. The formation and subsequent ejection of this shared gas cloud removes energy from the orbit, bringing the stars close enough together to eventually merge. At this stage in their evolution, the stars are small enough in volume not to be in contact with each other despite their proximity, and they continue orbiting before merging as black holes billions of years later.
Wind loss
To reach this model, Stevenson, Mandel and colleagues had to make a series of assumptions about physical processes that govern stellar and binary evolution. For example, astronomers do not know the extent to which very massive stars expand and how much mass they lose through winds during evolution. With COMPAS, the researchers produced stellar binary models based on their assumptions and computed their statistical properties. They could then compare these predictions to the observational data and make adjustments accordingly.
“There are a lot of basic assumptions made to come to these results and many more to test,” comments Stuver, “but it is impressive that this one evolution scenario can explain all three of the gravitational wave events.”
As well as providing an explanation of the binary process, the simulation has also helped the team understand what type of stars can form such systems. They suggest that the massive stars have low metallicity, meaning they are almost entirely made up of hydrogen and helium: while about 2% of the Sun’s mass consists of heavier elements, these massive stars would contain only 0.1%.
Robust framework
Writing about their proposed model, presented in Nature Communications, Mandel says that “while [the work] doesn’t yet prove that this is indeed the dominant formation channel for forming merging binary black holes, and while this is almost certainly not the unique channel for doing so, it does allow us to build a robust framework for analysing future observations.”
The researchers hope to improve their model and figure out which assumptions are right by using data from other stellar systems, such as neutron-star binaries, supernovae and X-ray binaries. “The long-term goal is to combine all of these observations to understand how massive stars and binaries evolve,” says Mandel. “We very much look forward to further LIGO detections, and to incorporating other rich observational data sets, in order to gain a better understanding of the lives (and deaths) of massive stars.”
If you’re finding the pace of geopolitical news a bit too rapid at the moment, spare a thought for physicists and engineers working in the nuclear energy sector.
Towards the end of last month, the venerable energy firm Westinghouse Electric issued a press release in which it proudly announced that its AP1000 reactor – a relatively new “passively safe” design in which the reactor core is kept cool without the need for powered pumps or other “active” equipment – had passed a major UK regulatory review. Ordinarily, this would be cause for celebration. The so-called “Generic Design Assessment” process takes years, and completing it helps pave the way for building AP1000s within the UK. An international partnership called NuGen has long hoped to do just that, on a site near Sellafield in north-west England, so in normal times, you might expect it to be celebrating, too.
Sensors made from carbon nanotubes could offer a superior alternative to current activity-tracking technology. Devices developed at the University of California, San Diego (UCSD), are flexible enough to bend within layers of fabric, and can track specific types of motion as well as recording vital signs such as temperature and heart rate. These developments are promising for the real-time monitoring of patients living at home via telemedicine, as well as of those working in extreme environments, such as astronauts in space.
PhD student Long Wang and professor Kenneth J Loh, researchers in UCSD’s department of structural engineering, produced the flexible sensors from carbon nanotubes (CNTs) and fabric using an ingenious, cost-effective manufacturing process (Smart Mater. Struct. 26 055018). An ink consisting of CNTs and latex is sprayed onto glass, where it is then annealed to produce a freestanding thin-film network of nanotubes. Once two electrodes have been attached, the film is sandwiched between two layers of fabric and ironed to produce a flexible fabric sensor.
These multipurpose sensors not only record heart rate, but they can also accurately track motion in a finger and even monitor respiration. When attached to a finger, the flexible nature of the thin film allows the sensor to bend and flex as the finger moves. This change in shape alters the electrical resistance across the film, which makes it possible to determine the bending angle.
Similarly, if strapped around the torso, the film changes in shape as the chest expands during inhalation, subsequently changing the resistance. When the person breathes out, the film returns to its original shape and the resistance drops again.
The sensor can also be used to monitor skin temperature. As the film is heated, there is again a change in the electrical resistance. These changes can be calibrated using measurements taken with a thermocouple, which allows the resistance change to be converted to the temperature of the skin.
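The calibration step described above can be sketched as follows: pair film-resistance readings with reference thermocouple temperatures, fit the relation, then use the fit to convert new resistance readings into skin temperature. The numbers below are synthetic values invented for the sketch, not the published calibration data.

```python
import numpy as np

# Synthetic calibration data: film resistance recorded alongside a
# reference thermocouple at several skin temperatures.
temps_ref = np.array([25.0, 28.0, 31.0, 34.0, 37.0])        # deg C
resistance = np.array([10.00, 10.12, 10.24, 10.36, 10.48])  # kilo-ohm

# Fit T = a*R + b; over a narrow range the film's response is
# assumed to be approximately linear.
a, b = np.polyfit(resistance, temps_ref, 1)

def skin_temperature(r_measured):
    """Convert a film-resistance reading (kilo-ohm) to temperature."""
    return a * r_measured + b

print(f"reading 10.30 kOhm -> {skin_temperature(10.30):.1f} deg C")
```

The same fit-then-convert approach applies to the bending and respiration measurements, with angle or chest expansion replacing temperature as the calibrated quantity.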
Sensors like this are useful beyond the realms of logging the stats of your run or cycle ride. It’s still early days, but Wang and Loh have successfully combined an easy fabrication route with the versatility needed to take a number of important measurements. Their results show that the technology is on track to be used extensively for easy, non-invasive, real-time monitoring of individuals in a variety of environments.
Tiny glass castles and pretzels made by 3D printing
Researchers in Germany have created tiny glass pretzels and castles using 3D printing. Although glass plays a vital role in our day-to-day lives, it is notoriously difficult to shape. Making large glass objects requires high temperatures for melting and casting, and microscopic features involve the use of hazardous chemicals. As a result, modern manufacturing methods such as 3D printing have not been used for glass – but now a team at the Karlsruhe Institute of Technology (KIT) in Germany have found a solution that uses a readily available printing setup. By mixing a curable monomer with silicon dioxide powder, Bastian Rapp and colleagues created a nanocomposite mixture dubbed “liquid glass”, which becomes solid under ultraviolet (UV) light. The team used the liquid glass as “ink” in a stereolithography 3D printer – a standard setup that uses laser light (in this case UV light) to solidify the printed structure. The resulting 3D composite was then heated to 1300 °C to convert it into fused silica glass. Using this method, Rapp and team were able to create smooth and transparent structures with micron-sized features, including a microfluidic chip, a honeycomb structure, a castle and a pretzel. Coloured glass could also be created by simply incorporating metal salts into the liquid glass ink. As glass has a number of useful properties – including optical transparency, thermal and electrical insulation and chemical resistance – it is an important material in industry and scientific research. Therefore, being able to easily create macro- and micro-structures through modern techniques opens up possible new manufacturing routes and materials. The method is presented in Nature.
Laser fusion produces more neutrons
Lighting up: simulation of a laser pulse interacting with the hohlraum. (Courtesy: G Ren/IAPCM and J Yan/LFRC)
Physicists in China have unveiled a new way of creating neutrons by firing a powerful laser at a hydrogen target. Capable of producing 100 times more neutrons than current laser techniques, the method has been developed by Jie Liu of the Institute of Applied Physics and Computational Mathematics in Beijing and colleagues. It involves using a laser pulse to heat deuterium fuel held inside a cavity known as a hohlraum. The intense heat causes pairs of deuterium nuclei to fuse in a process that gives off neutrons. Called inertial confinement fusion, the technique has already been investigated as a potential source of neutrons. However, the inherent instability of the process has meant that previous schemes were inefficient or unreliable sources of the particles. Liu and colleagues improved the stability of the process by using a new scheme called spherically convergent plasma fusion. This uses a spherical hohlraum with a thin gold wall that is coated on the inside with polystyrene containing deuterium. The laser pulse drives the deuterium to the centre of the hohlraum, where fusion occurs. Using a 6.3 kJ laser pulse with a duration of about 2 ns, the team was able to produce around one billion neutrons per pulse – about 100 times more than previous methods could achieve. Writing in Physical Review Letters, Liu and colleagues point out that using a target containing deuterium and tritium could boost the neutron output by an additional factor of 1000 – and even more if a higher-power laser is used.
Cash boost for UK’s MAST tokamak
Cored-out apple: the MAST tokamak is being upgraded. (Courtesy: Culham Centre for Fusion Energy)
The Mega Amp Spherical Tokamak (MAST) at the Culham Centre for Fusion Energy (CCFE) in Oxfordshire has received £21m for a series of upgrades to study the best way to extract waste fuel from the plasma it contains. MAST has a spherical plasma, shaped much like a cored-out apple, whereas a conventional tokamak such as ITER has a doughnut-shaped plasma. A spherical tokamak allows for a much more compact – and cheaper – device and it is hoped that this kind of tokamak could one day be used as a potential fusion reactor. MAST is nearing the end of a £45m upgrade that will see the tokamak given a new “divertor”, which extracts the waste fuel from fusion. Called “Super-X”, it is hoped that the new divertor could even be used in a future demonstration fusion plant – dubbed DEMO. The new cash, from the European Fusion Research Consortium and the UK’s Engineering and Physical Sciences Research Council, will be used to increase the tokamak’s plasma heating power as well as upgrade the plasma control systems and add extra plasma diagnosis equipment. These improvements are set to be introduced over the next five years.
You can find all our daily Flash Physics posts in the website’s news section, as well as on Twitter and Facebook using #FlashPhysics. Tune in to physicsworld.com later today to read today’s extensive news story on the origin of binary black holes.
Opposition physics: Albert Einstein enjoyed regularly sparring with Niels Bohr on subjects including quantum mechanics. (Courtesy: Paul Ehrenfest)
Albert Einstein’s persistent opposition to quantum mechanics is a familiar, if still somewhat surprising, fact to all physicists. It was first voiced in 1926 in his famous comment written in a letter to Max Born that “Quantum mechanics is certainly imposing. But an inner voice tells me that it is not yet the real thing. The theory says a lot, but does not bring us any closer to the secrets of the ‘old one’. I, at any rate, am convinced that He is not playing dice.”
From then, until Einstein’s death in 1955 – as he struggled without success to find a unified theory of electromagnetism and gravitation – his opposition never wavered, and made him an increasingly isolated figure in physics. “To Einstein, probabilities were just a sign of gaps in our understanding,” David Bodanis concisely observes in his latest book – Einstein’s Greatest Mistake: the Life of a Flawed Genius – the title of which refers to this opposition.
Less established are the reasons, both intellectual and personal, for Einstein’s resistance. According to Bodanis, they lie in the history of the cosmological constant, unwillingly introduced by Einstein in 1917 into his 1915 field equations of general relativity. Added as a fudge factor with a repulsive effect to balance the attractive effect of matter, the cosmological constant was meant to produce a static solution for the universe: a concept that in 1917 seemed evidently correct to astronomers. When subsequent observations of galaxies by Edwin Hubble and Milton Humason proved that the universe is actually expanding, Einstein willingly abandoned the cosmological constant around 1931 and reverted to his original field equations. He even, apparently, referred to the cosmological constant as “the greatest blunder of my life” (a comment quoted by Bodanis without reference to its somewhat doubtful source). But as a result of his volte-face, says Bodanis, Einstein became increasingly convinced of the superiority of his intuition over experiment – a view that, by the 1930s, hardened into dogmatic opposition to quantum mechanics.
Telling support for this stance, oddly unmentioned by Bodanis, comes from an Einstein lecture, “On the method of theoretical physics”, delivered at the University of Oxford in 1933, not long before he emigrated to the US. Here Einstein controversially stressed the importance of mathematics over experiment in devising physical theories by saying that “Experience can of course guide us in our choice of serviceable mathematical concepts, [but] it cannot possibly be the source from which they are derived; experience of course remains the sole criterion of the serviceability of a mathematical construction for physics, but the truly creative principle resides in mathematics. In a certain sense, therefore, I hold it to be true that pure thought is competent to comprehend the real, as the ancients dreamed.”
Nobel laureate Steven Weinberg would appear to agree with Bodanis. In “Einstein’s search for unification”, an essay Weinberg contributed to my book, Einstein: a Hundred Years of Relativity, he concludes that because general relativity had been guided by an existing mathematical formalism – the Riemann theory of curved space – perhaps Einstein had acquired “too great a respect for the power of pure mathematics to inspire physical theory. The oracle of mathematics that had served Einstein so well when he was young betrayed him in his later years”.
The most original aspect of Bodanis’ book is its attempt to explain difficult concepts in ordinary language, without, of course, resorting to mathematics. For instance, Bodanis compares curved space in general relativity to two Finnish skaters who head for the North Pole, using compasses to carefully skate in parallel, but are inevitably “pulled” together until they crash into one another at the pole. He also pictures Heisenberg’s understanding of uncertainty at the subatomic level as the experience of an audience at a 1920s Berlin operetta. The audience can work out general patterns among the actors from the type of clothes they change into for each act, without knowing exactly what the actors are doing backstage. “Heisenberg would have been convinced that what had happened backstage was inherently a blur,” suggests Bodanis, whereas from Einstein’s perspective, “each individual actor had to be changing his or her costume”.
Less original, though also engagingly integrated with the book’s physics, are its biographical elements. These cover not only Einstein but also others such as his second wife Elsa Löwenthal, his lifelong friend Michele Besso and his sparring partner Niels Bohr. His undergraduate physics teacher in Zürich, Heinrich Weber, who Einstein rightly regarded as well behind the scientific times, told him “You are a smart boy, Einstein, a very smart boy. But you have one great fault: you do not let yourself be told anything.” For much of Einstein’s life, this self-confidence was without question a vital strength, but in his later years, argues Bodanis, it became a handicap.
Yet, as the essentially respectful Bodanis admits, even Einstein’s opposition to quantum mechanics could be fruitful. His 1935 so-called EPR paper, “Can quantum-mechanical description of physical reality be considered complete?”, written with Boris Podolsky and Nathan Rosen (neither of whom is named by Bodanis), provoked a fellow sceptic, Erwin Schrödinger, to come up with the technical term “entanglement” and his tantalizing “cat” paradox.
Schrödinger, unlike Einstein, eventually accepted quantum mechanics as a profoundly useful method of calculation. However, the debates about its correct physical interpretation launched by the great, if flawed, Einstein, are very far indeed from being conclusively resolved. “What is quantum theory, a century after its birth?” asks Carlo Rovelli in his recent book Reality Is Not What It Seems: the Journey to Quantum Gravity. “An extraordinary dive deep into the nature of reality? A blunder that works, by chance? Part of an incomplete puzzle? Or a clue to something profound regarding the structure of the world, which we have yet to fully decipher?”
In an alternative universe, the quest for fusion energy passed a major milestone sometime in the closing months of 2012. That was the year scientists at the US National Ignition Facility (NIF) conducted a high-profile campaign to achieve the facility’s central purpose: ignition, a controlled nuclear fusion reaction that produces more energy than is required to sustain it. Reaching this point would have capped decades of research on inertial confinement fusion (ICF), which uses lasers or magnetic fields to heat and compress nuclear fuel to high temperatures and pressures. Ignition would also have been something of a vindication for NIF itself, banishing memories of the construction delays and cost overruns that plagued the multibillion-dollar laser before it opened at the Lawrence Livermore National Laboratory (LLNL) in 2009.
Early in 2012, leaders of NIF’s so-called “National Ignition Campaign” felt they had reasons to be optimistic. Computer models predicted that at laser energies of around 1.7 MJ, the tiny capsule of deuterium-tritium fuel at the heart of the NIF target chamber would begin releasing neutrons and alpha particles in unprecedented numbers, ushering in the conditions required for ignition. NIF was designed to deliver 1.8 MJ. Surely, ignition was just around the corner.
It wasn’t. In the real universe, the National Ignition Campaign ended with a whimper, not a bang. The reasons for the failure were manifold. In mid-2016 the National Nuclear Security Administration (NNSA) published a review describing them in detail. Computer codes and models predicting high energy gain from the fuel capsules were “not capturing the necessary physics”, the review’s authors wrote. Experimental efforts were “frustrated by the inability to distinguish key differences” between laser shots, with similar set-ups producing scattered results. Most damningly, the review cited a “failed approach to scientific program management” based on “circumvent[ing] problems rather than understanding and addressing them directly”. As the report’s authors concluded, “The question is if the NIF will be able to reach ignition in its current configuration and not when it will occur.”
Omar Hurricane, chief scientist in NIF’s inertial confinement fusion programme, is frank in his assessment of the failed campaign. “It was kind of like trying to swing for a home run and missing,” he says. A soft-spoken, pragmatic physicist who was working elsewhere within LLNL at the time, Hurricane explains that success would have required “an incredible amount of control” over many different parameters.
Chain of events
Consider the chain of events that must happen before ignition can occur at NIF. During a shot, the combined energy of the facility’s 192 laser beams is directed onto a hollow target about one centimetre in height and a few millimetres in diameter. This target, known as a hohlraum, contains helium gas, and at its centre is a tiny capsule filled with deuterium–tritium fuel. As the lasers heat the hohlraum, the gold coating on its interior surface begins to give off X-rays. These X-rays bathe the fuel capsule with radiation, heating it and causing material on the outside of the capsule to rocket off at speeds of hundreds of kilometres per second. Momentum conservation then forces the rest of the capsule to implode, and if the density and temperature of the imploding capsule become high enough the deuterium and tritium nuclei will fuse. “The idea for ignition is that we can actually get a propagating burn wave in the target – ‘lighting a match’ that can burn spherically outward and release more fusion energy than we put in to get it started,” NIF director Mark Herrmann explains.
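The scale of that “burn wave” requirement can be put in rough numbers. Taking the standard 17.6 MeV released by a single deuterium–tritium fusion reaction, together with the 1.8 MJ design energy quoted earlier, a back-of-envelope count (illustrative only, not an official NIF figure) shows how many individual reactions must fire for the fusion yield merely to match the laser input:

```python
# Back-of-envelope: how many D-T reactions must occur for the fusion
# yield to match the ~1.8 MJ of laser energy NIF was designed to deliver?
MEV_TO_J = 1.602e-13       # joules per MeV
E_PER_REACTION_MEV = 17.6  # energy released by one D-T fusion reaction
LASER_ENERGY_J = 1.8e6     # NIF design laser energy

e_per_reaction_j = E_PER_REACTION_MEV * MEV_TO_J   # about 2.8e-12 J
n_reactions = LASER_ENERGY_J / e_per_reaction_j
print(f"{n_reactions:.1e} reactions")              # roughly 6e17
```

Each reaction liberates only a few trillionths of a joule, so break-even demands of order a hundred quadrillion fusions in a capsule a couple of millimetres across, which is why the self-sustaining burn wave, rather than isolated reactions, is the whole point of ignition.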
The key to making that happen – and the reason it hasn’t yet – can be summed up in a single word: symmetry. The laser field outside the hohlraum is not perfectly symmetric. Neither is the X-ray field produced inside. Wherever such asymmetries exist, the pressure applied to the fuel capsule is slightly different. “You can think of it as like trying to squeeze a soccer ball down to something the size of a pea,” Herrmann says. “If you squeeze harder on one side or another you get a lima bean, or a string bean. You don’t get a pea.”
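Herrmann’s analogy can also be put in numbers. Assuming a roughly 22 cm soccer ball and an 8 mm pea (sizes chosen purely for illustration, not NIF capsule specifications), the implied convergence is striking:

```python
# Rough arithmetic behind the "soccer ball to pea" analogy.
# Both sizes are illustrative assumptions, not capsule specifications.
ball_diameter_cm = 22.0   # a regulation soccer ball is roughly this size
pea_diameter_cm = 0.8     # a typical pea

linear_convergence = ball_diameter_cm / pea_diameter_cm  # ~27x in radius
volume_compression = linear_convergence ** 3             # ~21,000x in volume
print(linear_convergence, volume_compression)
```

A pressure imbalance of even a few per cent, acting over a volume compression of tens of thousands, is amply sufficient to turn the intended “pea” into Herrmann’s lima bean.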
Targeted: NIF researchers are developing new types of hohlraums and fuel capsules to improve the symmetry of implosions. (Courtesy: Lawrence Livermore National Laboratory)
One notable cause of asymmetric “squeezing” is a phenomenon known as cross-beam energy transfer (CBET). When laser beams overlap in the presence of a plasma, one beam can, in effect, “steal” energy from another. “You think you’ve designed the laser illumination pattern to give you a symmetric drive, and the beams kind of conspire to change that for you,” explains Warren Garbett, a plasma physicist at the UK’s Atomic Weapons Establishment who has served on review panels for ICF research. As well as creating asymmetries, CBET can also divert energy from an incoming beam to an outgoing one. Such processes are “very difficult to predict, and also very difficult to control”, says Jerry Chittenden, a plasma physicist at Imperial College London and a co-author on the NNSA report. “It’s a bit like in the film Ghostbusters: bad things happen when the beams cross.”
The behaviour of fuel capsules under pressure is also tricky to control. As weak spots in the capsule begin to give way, material flows away from them. This distorts the capsule and can even tear it to pieces before fusion has a chance to get started. The process that turns small fluid asymmetries into big ones is known as a Rayleigh–Taylor instability, and Hurricane notes that its relevance has been understood since the early days of ICF research. What wasn’t understood, he adds with a wry smile, was just how much of a barrier it would prove.
Seeking symmetry
Since the end of the ignition campaign, research at NIF has shifted away from trying to achieve ignition directly. Instead, the focus is on understanding sources of asymmetry in capsule implosions and finding ways to minimize them. One key development was to put slightly more energy into the beginning, or “foot”, of the laser pulse. The resulting implosions are less prone to Rayleigh–Taylor instabilities, and in 2014 the “high-foot” campaign reported a notable success: more energy was produced through fusion than was applied to the fuel. “This is open to interpretation, but the results seem to indicate that the ignition process is at least starting,” Chittenden says. In some experiments, he points out, the pressure at the hottest point in the deuterium–tritium fuel came within a factor of two of the value required for ignition. Unfortunately, heating the capsule more at the beginning of the pulse makes it harder to compress later, so in effect, the high-foot strategy trades greater implosion symmetry for reduced maximum pressure. Maximizing the energy yield from a shot will therefore require a change of tactics.
As the high-foot design and other modifications have improved the symmetry of capsule implosions, other causes of asymmetries have emerged. One culprit is the gossamer-fine membrane that holds the fuel capsule in place inside the hohlraum. Recent experiments have shown that as X-rays hit this tent-like structure, it explodes, creating a pulse of pressure at the point where material strikes the fuel capsule. Another source of asymmetry is the tiny glass tube used to fill the capsule with fuel. Although the tube is only about 10 µm in diameter, Hurricane says they can still see its effects in their data. Fixing these problems will likely involve a combination of engineering and physics, with engineers working to make the tent and fill tube less intrusive while physicists search for ways of compensating for their effects. “It’s a systematic process,” Hurricane says. “We’re digging deeper and deeper into the implosion. Every time we do that, we see an improvement, but there’s also the chance of seeing yet another problem. We’re just trying to knock them off one at a time.”
Other labs, other prospects
Between now and 2020, the plan is for researchers at NIF to map out the effects of different laser pulse shapes; different densities of helium gas inside the hohlraum; and new materials for both hohlraum and fuel capsule. The goal, as defined in the NNSA’s latest framework for inertial confinement fusion research, is “to determine the efficacy of NIF to achieve ignition and, if this is found to be improbable, to understand the reasons why”.
If the answer turns out to be negative, one possible way forward would be to reconfigure NIF so that its lasers heat fuel capsules directly, rather than via a hohlraum. Cutting out the intermediate step would increase the energy applied to the capsule by a factor of 10, albeit at the probable cost of increased CBET and other laser-related sources of instability. Experiments on this “laser-driven direct drive” method are currently under way both at NIF and at smaller-scale laser facilities such as the University of Rochester’s Laboratory for Laser Energetics (LLE). Converting NIF to direct drive is “something we’re considering”, Hurricane says, but he adds that doing so would require new funding, since NIF’s lasers are not designed to produce a spherically symmetric laser field. Also, with current technology, results at the LLE suggest that direct-drive fusion experiments at a NIF-scale facility would come within a factor of two of the pressures required for ignition. This, as Chittenden points out, is similar to what NIF can do already.
Meanwhile, scientists in France are watching NIF’s progress with interest. The design of the Laser Mégajoule (LMJ) facility near Bordeaux is similar to NIF’s – Jean-Luc Miquel, programme leader for laser-plasma experiments at the LMJ, calls them “brother lasers” – and it will be capable of similar energies once it is fully operational. Currently, only 16 of the LMJ’s planned 176 beams are in use, with an additional 40 scheduled to come online by 2019. The final completion date will be sometime at the beginning of the next decade, with government funding and the needs of the French nuclear-weapons programme dictating the pace of progress. (Both the LMJ and NIF perform experiments on weapons physics in addition to ICF research, and the French programme draws little distinction between the two.) Currently, work at the LMJ is focused on lower-energy studies of hohlraum energetics, radiation transport and hydrodynamic instabilities, with more topics to be added as additional beams become available. “For us, ignition is the ultimate goal of the LMJ, but there is a lot of physics that can be addressed before then,” Miquel says.
At other laboratories, ignition is even further away. A variant of ICF that uses magnetic fields to compress fuel is being pursued at Sandia National Laboratories in New Mexico, but reaching ignition via this method would probably require a next-generation facility. An alternative laser-based approach, known as “fast ignition”, is being studied at the LLE and at Osaka University in Japan, with larger facilities planned that would determine whether promising results at low energies translate into ignition at high energies. Plans for even bigger lasers, on the 3–4 MJ scale, have also been put forward in both Russia and China. So far, however, these facilities exist only on paper.
For the moment, NIF remains the world’s best hope for ignition, and while the fevered expectations of 2012 have dissipated like a vaporized hohlraum, there is plenty of optimism left within the ICF community. “Looking at the history of ICF, quite often facilities have been built expecting to get ignition, and then more physics has been discovered, and they find out they are some way away,” Garbett says. “But NIF is really on the cusp.” In his own calm, deliberate way, Hurricane is also upbeat. “The fact that we can actually see problems that we have a chance of fixing gives me hope that we can make progress,” he says. “That’s a much better situation to be in than ‘It’s still not working, we can’t see anything wrong, and we don’t know what to do about it.’ It’s not going to be easy, it’s going to be a lot of work, and we’re going to have to fight for every bit of progress. But scientists like solving problems – it’s part of the job.”
An exoplanet orbiting a nearby red-dwarf star may be the “best place to look for signs of life beyond the solar system”, according to the team of astronomers that has discovered the rocky world. The exoplanet is called LHS 1140b and is located just 39 light-years from Earth. It has a density that suggests a rocky surface and an iron core. LHS 1140b is also in the habitable zone of its star and the astronomers say that it could have an atmosphere.
The new exoplanet orbits LHS 1140, which is a faint red dwarf that is much smaller and cooler than the Sun. LHS 1140b was discovered by a team led by Jason Dittmann of the Harvard-Smithsonian Center for Astrophysics. The researchers used the MEarth facility in Arizona to detect dips in the starlight from LHS 1140, which occur when the exoplanet passes between the star and Earth. Then the European Southern Observatory’s HARPS instrument was used to measure the mass and density of LHS 1140b.
The exoplanet takes 25 days to orbit its star and, despite being 10 times closer to LHS 1140 than Earth is to the Sun, receives only about half the “sunlight” that Earth does. Even with this modest supply of stellar energy, LHS 1140b is in the habitable zone of its star, which means that it could support life.
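Those two figures, an orbit ten times tighter than Earth’s yet only half of Earth’s stellar flux, pin down the star’s feebleness via the inverse-square law, F ∝ L/d². A quick sanity check using only the ratios quoted above:

```python
# Inverse-square sanity check on the quoted numbers for LHS 1140b.
d_ratio = 0.1      # orbital distance relative to Earth-Sun (10x closer)
flux_ratio = 0.5   # stellar flux at the planet relative to Earth's

# F is proportional to L / d^2, so L/L_sun = (F/F_earth) * (d/d_earth)^2
luminosity_ratio = flux_ratio * d_ratio ** 2
print(luminosity_ratio)  # roughly 0.005, i.e. ~0.5% of the Sun's output
```

A luminosity of about half a per cent of the Sun’s is exactly what one expects of a faint red dwarf, so the article’s numbers are self-consistent.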
Magma oceans
When red-dwarf stars are young they are known to emit radiation that would damage the atmosphere of an exoplanet. However, Dittmann and colleagues believe that LHS 1140b is large enough to have sustained a magma ocean on its surface for millions of years. Such an ocean could have fed steam into the atmosphere, replenishing the planet’s stock of water and ensuring that its atmosphere survived the early radiation bombardment.
“The present conditions of the red dwarf are particularly favourable – LHS 1140 spins more slowly and emits less high-energy radiation than other similar low-mass stars,” says team member Nicola Astudillo-Defru from Geneva Observatory, Switzerland.
Exciting exoplanet
Dittmann adds: “This is the most exciting exoplanet I’ve seen in the past decade. We could hardly hope for a better target to perform one of the biggest quests in science – searching for evidence of life beyond Earth.”
Astronomers will now use the Hubble Space Telescope to try to work out how much life-destroying radiation is currently being showered upon LHS 1140b. Studies using future instruments such as the Extremely Large Telescope could shed further light on the exoplanet’s atmosphere and its capacity to sustain life. LHS 1140b is described in Nature.