Humans have been trying to keep things cold for a very long time. As far back as the 18th century BC, the Sumerian ruler Zimri-Lim was ordering subordinates to build him an icehouse. The story of Zimri-Lim’s chilly construction (and the finicky tastes of one of his rivals, who demanded that ice be washed “free of twigs and dung and dirt” before being added to drinks) is just one of many engaging anecdotes in Tom Jackson’s book Chilled: How Refrigeration Changed the World and Might Do So Again. As Jackson explains, there is some fascinating science as well as history in the various mechanical, chemical and physical methods that people employed to make objects cooler before the advent of modern refrigeration. Jackson’s integrated approach to his subject is as refreshing as a cool beverage on a hot summer’s day; however, a few of his tales are rather tangential to the main story, and at times it also seems that he has bitten off more than he can chew. The chapter on the complex chain of technologies required to keep cold food flowing to Western supermarkets is fascinating, but it could have been a whole book in itself. Meanwhile, some more recent milestones in the history of cold are covered too briefly to do them justice (Bose–Einstein condensation, for example, is dealt with in a mere four pages). There are a couple of factual lapses too, as when Jackson repeats the myth that ordinary wine and beer are alcoholic enough to kill “most” of the germs in them. (Try leaving a pint of beer out for a few days, uncovered, and watch what happens to it.) These criticisms aside, though, his tale of how scientists both famous (Isaac Newton) and less well known (William Cullen) have grappled with the nature of cold and temperature makes enjoyable and not-too-heavy reading.
2015 Bloomsbury Sigma £16.99hb 272pp
Security breach
In the early hours of 28 July 2012, an unlikely trio of saboteurs – two men in their late 50s or early 60s and, most famously, an 82-year-old nun – broke into Y-12 National Security Complex in Oak Ridge, Tennessee and walked unhindered up to the building that houses America’s stockpile of weapons-grade uranium. How did they get there? The question has both a philosophical answer and a practical one, and in Gods of Metal, the investigative journalist Eric Schlosser begins with the former. The three saboteurs were, he explains, heirs to a decades-long tradition of anti-nuclear activism among a small but dedicated group of radical US Catholics. They hoped that their protest would help hasten the end of the US nuclear-weapons programme. As for the practical answer, Schlosser details how the trio’s entry to the so-called “Fort Knox of Uranium” was made possible by months of careful planning, a pair of bolt cutters and a cavalcade of security lapses that would be laughable if the implications were not so serious. Schlosser’s analysis of these lapses makes up the heart of his story (which was originally published as an article in the New Yorker and has been extended only slightly in book form). Security at Y-12 has been tightened considerably since the incident, and the private companies responsible for some of the worst failings have not had their contracts renewed. That, however, is small comfort to those whose nuclear-security concerns centre on terrorism rather than war. As Schlosser warns, “If terrorists manage to steal weapons-grade uranium or plutonium from a Department of Energy facility because of a contractor’s mistakes, the firm responsible for the security breach stands to lose its contract. The United States could lose a city.”
2015 Penguin £1.99pb 128pp
Nerding out
For evidence that science is having a “moment” in pop culture, one need look no further than the Festival of the Spoken Nerd. This comedic trio – Helen Arney, Steve Mould and Matt Parker – have been touring the UK with their brand of science comedy for five years, and in the DVD version of their show (titled, drolly enough, Full Frontal Nerdity) they attempt to answer a time-honoured question: what is a nerd? Is it someone who does experiments? Someone who devotes a lot of time and effort to mastering an obscure skill? Or is it someone who is entirely too fond of Excel spreadsheets? In the DVD (recorded in early 2015 during two of their live shows in London), each member of the group takes one of these definitions and runs with it. Arney’s speciality, for example, is scientific song parodies, while Mould does mildly unsafe-looking experiments with household objects and Parker gets enthusiastic about graphs. It’s all good fun, and considerably less laboured than it sounds on paper, although (as with a lot of science outreach events) one does get the impression that their audience has self-selected from a fairly geeky segment of the population. Even if they are preaching to the choir, their message is fun to hear – and if you live in the UK, you can catch it in person this autumn as their new live show, Just For Graphs, begins touring.
How and where do new ideas in physics emerge? We often think they arise serendipitously, which is why we love stories like Newton discovering gravity after seeing an apple fall. The reality, though, is often very different.
Writing in the September 2015 issue of Physics World magazine, which is now out, theoretical physicist Vitor Cardoso from the University of Lisbon explains his efforts to find out how breakthroughs – both big and small – really emerge. As he discovered through his project The Birth of an Idea, it turns out that how new thoughts arise is often much more of a communal activity than we might think.
In the dry scrubland of eastern Washington State, a few miles from what was once America’s premier plutonium factory, sits a massive laboratory, its two long arms stretching off into the distance. On windy days, tumbleweeds roll by, piling up against the arms’ concrete housing and creating headaches for the lab’s maintenance workers. Inside, though, there is a buzz of activity as scientists at the Laser Interferometer Gravitational-Wave Observatory (LIGO) prepare for the most exciting period in the facility’s 14-year history. Later this month, they will begin observations with an upgraded machine, new instruments and a corresponding sense that this time, when they go on a gravitational-wave hunt, they’re going to catch a big one.
Gravitational waves were predicted by Albert Einstein in 1916 as a consequence of the field equations of his general theory of relativity. These 10 coupled nonlinear equations formulate the universe as the dynamic interplay between mass–energy and space–time. As the physicist John Wheeler put it, “Matter tells space how to curve, and space tells matter how to move.” One prediction of the general theory is that when massive bodies move around, they cause the fabric of space–time to warp, generating ripples that propagate outwards at the speed of light. These ripples are known as gravitational waves, but they are not the familiar sinusoids found in electromagnetism. Instead, they stretch space in one direction perpendicular to the line of travel, while simultaneously compressing it in the other – a bit like lips puckering up and down for a kiss.
No prediction made by Einstein’s equations has ever been proved wrong, and in the 1970s observations of a binary pulsar – a rapidly rotating neutron star in orbit around another neutron star – strongly suggested that gravitational waves do indeed exist (see “Pulsar detectives” below). However, nobody has ever detected such waves directly, despite decades of trying.
LIGO was built to change that. From 2002 until 2010, laser beams travelled from the lab’s hub down its two long, perpendicular arms, where they reflected off huge, hanging masses and were recombined back near their origin. The idea was that a passing gravitational wave would cause the masses to move enough for the length of the arms to change and produce a detectable phase shift in the interference pattern of the recombined laser beams. On a handful of occasions during LIGO’s first period of experimental operations, researchers thought they had spotted such a shift – only for the supposed signal to be revealed as noise or, in one case, a deliberate fake generated by researchers within the collaboration, as a test of their internal data-checking procedures.
Now, however, the LIGO facility near Hanford, Washington – along with its twin in Livingston, Louisiana – is entering a new era. In March contractors completed a $221m upgrade of the dual facilities that improved their ability to detect the feeble waves of gravity by a factor of 10. Thanks to this upgrade – known as Advanced LIGO, or aLIGO – researchers should be able to detect gravitational waves that originate anywhere within a sphere of about 420 million light-years in radius, centred on the Earth. That is still only a small fraction of the total universe, but it’s a thousand-fold increase (by volume) on what was possible before the upgrade. As the upgraded system kicks into high gear and achieves its designed sensitivity in 2016 or 2017, the scientists at LIGO are quietly confident that they will see something real.
Only an attometre
The recently completed aLIGO upgrades all have the same goal: reducing noise. Noise poses challenges for many physics experiments, of course, but for the dual LIGO machines and their interferometric kin, the problem is particularly acute. Although gravitational waves come from some of the most massive and energetic systems in the universe (such as a pair of black holes or neutron stars orbiting one another), their amplitudes are exceedingly small by the time they reach Earth. In fact, a passing gravitational wave is expected to change the length of LIGO’s 4 km-long arms by only a few attometres (10⁻¹⁸ m) – around 1000 times less than the diameter of a proton (see “How LIGO works” below).
To ensure that the observatory can detect such a tiny change, almost every aspect of LIGO has been upgraded. For starters, a new isolation system has been installed to keep seismic noise (caused, for example, by passing trucks or tiny earthquakes) negligible across the range of frequencies that interest would-be wave observers. The dual US facilities are vital here, since noise seen in one facility but not the other can be ruled a local hiccup, not a passing gravitational wave.
Quiet room Putting the new seismic isolation system in place. (Courtesy: LIGO Laboratory)
At higher frequencies, though, the performance of the LIGO detector is limited by shot noise, which arises from the quantum nature of light. Basically, the number of photons produced by the laser fluctuates with time, creating a degree of uncertainty in the amplitude and phase of the beam. Increasing the laser’s power mitigates this problem somewhat, because the signal produced by a passing gravitational wave varies in proportion to the power, while the shot noise is proportional to the square root of the power. Accordingly, aLIGO boosts the power of the facility’s laser by more than an order of magnitude, from an initial input of 10 W to about 200 W.
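As a rough illustration of this scaling (a back-of-the-envelope sketch using only the power figures quoted above, not LIGO’s detailed noise budget), the shot-noise-limited signal-to-noise ratio improves as the square root of the power increase:

```python
import math

# Signal from a passing gravitational wave scales linearly with laser power P,
# while shot noise scales as sqrt(P), so the shot-noise-limited signal-to-noise
# ratio improves as sqrt(P). Figures quoted in the article:
P_initial = 10.0    # W, initial LIGO input power
P_advanced = 200.0  # W, aLIGO input power

snr_gain = math.sqrt(P_advanced / P_initial)
print(f"Shot-noise-limited SNR improves by a factor of ~{snr_gain:.1f}")  # ~4.5
```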
That, however, creates a new problem. Each of those laser photons packs a tiny momentum punch and, collectively, they generate enough radiation pressure to make the masses at the end of each arm twitch ever so slightly. To counteract this, the masses have been beefed up: the aLIGO masses are both larger in diameter (34 cm instead of 25 cm) and more massive (40 kg instead of 11 kg) than before. That reduces their radiation-pressure-induced motion down to a level comparable to the thermal noise in the wire that suspends them – noise that has itself been reduced by replacing the old steel wire with fused silica fibres.
Putting the squeeze on
To really get a handle on shot noise, though, you need ingenuity as well as more powerful lasers and bigger masses. This is where the expertise of physicists like Sheila Dwyer comes in. A postdoctoral researcher at LIGO’s Hanford site, Dwyer began working at the lab in 2010, when she was a PhD student at the Massachusetts Institute of Technology. Her graduate work (conducted with the astrophysicist and gravitational-wave detection specialist Nergis Mavalvala) spanned quantum optics, quantum measurement theory and gravitational-wave detection, and it prepared her to play a central role in one of the most important aLIGO upgrades: the switch to a laser that emits light in an exotic form called a “squeezed state”.
Like all particles, the photons in squeezed light obey the uncertainty principle: the product of the uncertainties in any two complementary properties (such as amplitude and phase) always equals or exceeds ℏ/2. What makes squeezed light special is that the uncertainty in one of these variables has been “squeezed” down, while the uncertainty in the other is correspondingly allowed to balloon upwards. At LIGO, the fluctuations in phase are squeezed, so that the phase shift of the recombined beam can be measured more precisely. Of course, this means that the fluctuations in the beam’s amplitude become relatively large – making it all the more important that the mirrored masses used in aLIGO be as heavy as is reasonably possible.
Dwyer explains that LIGO is not the first gravitational-wave interferometer to use squeezed light. That honour belongs to the GEO600 experiment in Sarstedt, Germany, which began working with squeezed light in 2011 and now uses squeezing in its normal operations. Physicists there found that at certain frequencies (around 3 kHz), squeezing reduced quantum noise by a third, increasing the machine’s detection rate of gravitational waves in that frequency band by a factor of (3/2)³ (about 3.4). For LIGO, Dwyer showed in her doctoral thesis that using squeezed light would increase detector sensitivity by 80%, leading to an improved detection rate of almost a factor of six.
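The detection-rate arithmetic follows from a simple geometric argument: a gain of a factor x in amplitude sensitivity extends the detector’s reach by x, and hence the surveyed volume – and the expected event rate – by x³. A minimal sketch using only the figures quoted above:

```python
# Detection rate scales as the cube of the gain in amplitude sensitivity,
# because the volume of space surveyed grows as (range)^3.
def rate_gain(sensitivity_gain):
    return sensitivity_gain ** 3

# GEO600: quantum noise cut by a third, i.e. amplitude sensitivity up by 3/2
print(f"GEO600 with squeezing: {rate_gain(3/2):.2f}x more detections")  # ~3.4

# LIGO (Dwyer's thesis): sensitivity up by 80%, i.e. a gain of 1.8
print(f"LIGO with squeezing: {rate_gain(1.8):.2f}x more detections")    # ~5.8
```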
Knowing what to look for
While experimentalists have focused on upgrading LIGO’s physical components, theorists have been improving their understanding of what a gravitational-wave signal might look like. To help identify a passing wave, numerical relativists have calculated the waveforms that several of the most likely sources of gravitational waves are expected to generate. Thanks to their efforts, any signal registered at LIGO can be compared with about 10,000 expected waveforms for gravitational waves created by binary neutron stars, around 100,000 created by binary black holes and on the order of a million waveforms from neutron-star–black-hole binaries. This last number is larger than the others because the black hole’s angular momentum can couple to the orbital angular momentum of the binary pair, producing a much more complicated waveform (see “Relativity’s new revolution“).
In addition to building the database of expected waveforms, some researchers have been developing algorithms that look for gravitational-wave signals from other, unimagined sources. This type of search (known as a “burst” search) does not make any assumptions about the waveforms it is looking for, explains Duncan Brown, a gravitational-wave astronomer at Syracuse University in New York. “That way, when the universe goes bump in the night, LIGO will feel it,” he adds.
Hanging out Testing the suspension of one of the inner optical cavity mirrors. (Courtesy: LIGO Laboratory)
Astronomers have also worked out how many events aLIGO might expect to see as it approaches its design sensitivity. Binary neutron stars are considered the most promising source of gravitational waves, and (based on observations made with conventional telescopes) researchers estimate that aLIGO could see up to three binary-star coalescences – events in which two stars merge to form a single body – in its first year of operations. In its second year, as further technical improvements expand the fraction of the universe being observed, it might see a further 20 coalescences, and perhaps as many as 200 after four years. But it could also see none at all. Although that would be a surprise, it is possible that noise could, despite the upgrades, drown out a gravitational-wave signal; that binary-star coalescences could occur less often than astronomers think they do; or even that the strong-field, nonlinear gravitational regime, where coalescences finish, lies beyond general relativity.
The collapse of a star’s core as it becomes a supernova would also generate gravitational waves, unless the collapse is spherically symmetric. However, the maximum amount of gravitational-wave energy expected in such an event is much smaller than from binary coalescences (by a factor of at least 10⁷). This means that gravitational waves from a core collapse may be seen only if the collapse happens in our cosmic backyard: within the Milky Way or its smaller, satellite galaxies – the Large and Small Magellanic Clouds.
The era of ‘multimessenger astronomy’
Whatever its cosmic source, the first direct detection of a gravitational wave will be big news. It will confirm a prediction from general relativity, but more importantly it will also give astronomers, astrophysicists and gravity theorists entirely new information about the objects they study. Indeed, astrophysicists hope that gravitational-wave observatories will someday operate as routinely as optical telescopes do today. If that happens, gravitational waves could fundamentally alter our picture of the universe, just as radio-wave and X-ray astronomy altered it from the placid, silent galaxies Edwin Hubble observed at visible wavelengths to the raucous universe we know today, full of quasars and pulsars, black holes and neutron stars. At some point, it may even be possible to observe cosmic events such as supernovae with light-based telescopes, neutrino detectors and gravitational-wave observatories – a new type of science dubbed “multi-messenger astronomy”.
More prosaically, a detection at LIGO or another ground-based observatory could also pave the way for a more ambitious successor facility. Dwyer, the LIGO postdoc, is part of a small group working on plans for the next generation of detectors. An underground observatory with arms 40 km long could bring another factor of 10 in sensitivity, she says, and it could, in theory, detect gravitational waves generated a mere billion years after the Big Bang – corresponding to a region of space–time far bigger than aLIGO’s current sphere of sensitivity.
Still more ambitious are plans to put a gravitational-wave observatory in space. Designs for the European Space Agency’s eLISA project, for example, call for three satellites arranged in an “L” shape so that each “arm” – consisting of empty space – is 1 million km long. The separation distances would be monitored to detect gravitational waves at frequencies between 0.03 mHz – below which the spacecraft are buffeted by fluctuations in solar radiation pressure, solar wind and cosmic rays – and 100 mHz. Within this range, expected gravitational-wave sources include galactic short-period binary stars and supermassive black-hole binaries. Once gravitational-wave detection is routine on Earth, new ideas, even beyond eLISA, will surely abound. The sky – no, the universe – is the limit.
Pulsar detectives
Pulsar pair Artistic illustration of two rotating neutron stars beaming out radiation. (Courtesy: Michael Kramer, Jodrell Bank)
The existence of gravitational waves was implied in a beautiful series of observations made more than 30 years ago by Joseph Taylor Jr and Joel Weisberg, utilizing a new type of pulsar discovered by Russell Hulse and Taylor in 1974. The pulsar PSR B1913+16 is a rapidly rotating neutron star that emits electromagnetic radiation, in orbit around a neutron star that was not seen to pulse. The general theory of relativity predicts that such a system will radiate energy, E, at a rate of
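$$P = \frac{\mathrm{d}E}{\mathrm{d}t} = -\frac{32}{5}\,\frac{G^4}{c^5}\,\frac{(m_1 m_2)^2\,(m_1 + m_2)}{r^5}$$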
where m₁ and m₂ are the masses of the two bodies orbiting one another in circular orbits a distance r apart. (The calculation and resulting expression for an elliptical orbit are a bit more complicated.) Note that the radiated power, P, is negative because the system is losing energy as the two masses spiral in towards one another.
For the Earth–Sun system, this energy loss rate comes to a feeble 200 W (less than used by a toaster), and a related calculation shows that the distance between them changes by only 400 fm (10⁻¹⁵ m) per year. But for a binary pulsar system such as PSR B1913+16, the rate of energy loss is almost 10²⁵ W – equivalent to about 2% of our Sun’s output of electromagnetic radiation. By monitoring the system over several years, Taylor and Weisberg found that the stars’ separation was shrinking rapidly, by about 2 cm per day. More importantly, their observations showed that the cumulative shift of the stars’ orbital periastron (the point where the stars are closest together) decreased in a way that almost exactly followed the predictions made by general relativity. Subsequent work found an even tighter agreement.
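As a quick plausibility check on the Earth–Sun figure (a rough sketch: the masses and orbital radius below are standard values, not numbers given in the article), plugging them into the formula above does indeed give roughly 200 W:

```python
# Gravitational-wave power radiated by two masses in a circular orbit,
# using the quadrupole-formula expression quoted above.
G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
m_sun = 1.989e30    # kg
m_earth = 5.972e24  # kg
r = 1.496e11        # m, mean Earth-Sun distance

P = (32 / 5) * G**4 / c**5 * (m_sun * m_earth)**2 * (m_sun + m_earth) / r**5
print(f"Earth-Sun gravitational-wave power: {P:.0f} W")  # roughly 200 W
```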
Hulse and Taylor were jointly awarded the 1993 Nobel Prize for Physics for their discovery of the first binary pulsar. The Taylor and Weisberg result, more than anything, convinced physicists that gravitational waves exist.
How LIGO works
Inside LIGO Schematic of LIGO Laboratory. (Courtesy: IOP Publishing)
LIGO and other interferometric gravitational-wave observatories (such as VIRGO in Italy, GEO600 in Germany and KAGRA in Japan) consist of two long arms built at a right angle to each other. At the end of each arm hangs a highly polished “test mass” that acts as a mirror for a laser beam that is split at its source, with separate beams reflected down each arm. If a gravitational wave from a distant source washes through the detector, it will change the distance between space–time points in the detector ever so slightly, producing an alteration in the length of the interferometer’s arms.
The magnitude of this length change ΔL will be the arm length (4 km in LIGO’s case; 3 km for VIRGO and KAGRA; 600 m for GEO600) multiplied by a dimensionless strain factor h ∼ (GM/Dc²)(v²/c²), where M is the mass of the system generating the gravitational wave, v the characteristic velocity of the system’s components (such as two black holes orbiting one another) and D its distance from the detector. The value of h varies inversely with the distance of the source, but with the most likely sources – coalescences of binary stars in nearby galaxies and superclusters – h is expected to be of the order of 10⁻²¹. Hence, to detect a gravitational wave, LIGO needs to be able to measure a change in the length of its arms of about 4 × 10⁻¹⁸ m.
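The arm-length figure is simply the strain multiplied by the arm length; a one-line sketch with the numbers quoted above:

```python
h = 1e-21       # dimensionless strain from a typical source
L_arm = 4000.0  # m, length of each LIGO arm

delta_L = h * L_arm
print(f"Arm-length change: {delta_L:.0e} m")  # 4e-18 m, i.e. about 4 attometres
```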
To accomplish this remarkable feat, gravitational-wave astronomers rely on some sophisticated optics. In each of LIGO’s two arms, the beam is reflected up to 400 times in a Fabry–Perot cavity, travelling a total distance many times the facility’s arm length. The two beams are then recombined at a photodetector, which measures the phase difference between them. The change in the light travel time for each beam will be Δt = 2B(ΔL/c) = 2hBL/c, where B is the number of bounces, creating a phase shift ΔΦ = (2π)f Δt = 4πhBL/λ – about 10⁻⁹ radians, where f is the laser frequency and λ its wavelength.
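For a feel for the size of this phase shift, here is a sketch using the formula above; the 1064 nm laser wavelength and the choice of strain (the aLIGO design figure of 10⁻²² quoted in the next paragraph) are illustrative assumptions rather than numbers taken from this passage:

```python
import math

h = 1e-22       # strain, of the order of aLIGO's design sensitivity (assumed)
B = 400         # number of bounces in the Fabry-Perot cavity
L = 4000.0      # m, arm length
lam = 1.064e-6  # m, laser wavelength (1064 nm, assumed)

delta_phi = 4 * math.pi * h * B * L / lam
print(f"Phase shift: {delta_phi:.1e} rad")  # ~2e-9 rad, of order 10^-9 as quoted
```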
In its initial phase, LIGO was designed to detect gravitational waves with frequencies from about 40 Hz to the highest gravitational-wave frequencies expected, 10,000 Hz, with its best sensitivity (the lowest detectable strain) at about 100 Hz. The aLIGO upgrades improve on this by shifting the observatory’s lowest detectable frequency down to 10 Hz, and boosting its strain sensitivity by a factor of 10, to values of h below 10⁻²². (Below about 1 Hz, even small seismic vibrations and inhomogeneities in the Earth’s gravitational field from atmospheric fluctuations create insurmountable noise.) As LIGO scientist Rick Savage puts it, “We’re far beyond splitting hairs here.”
An international research team has used carbon nanotubes to enhance the efficiency of laser acceleration, bringing table-top sources for carbon-ion therapy a step closer to reality. Therapeutic ion beams are currently delivered using large, expensive particle accelerators. Laser-driven ion acceleration may one day provide a compact, cost-effective alternative – but current techniques cannot match the energy and quality of beams created by conventional accelerators.
Laser-driven ion acceleration typically works by firing high-intensity laser pulses at ultrathin diamond-like carbon foils. The light pulses strip electrons from atoms in the foil, generating a negatively charged electron plasma. This plasma creates an electric field that then accelerates positively charged carbon ions stripped from the foil.
Now, a team led by Jörg Schreiber at LMU Munich has calculated that the energy of the resulting carbon ions could be boosted by using laser pulses with a steep (few-femtosecond) rising edge. Such pulses would allow an efficient process called radiation pressure acceleration (RPA). Generating such pulses experimentally, however, is a formidable challenge. “RPA is the most efficient way to accelerate ions,” says Schreiber. “In particular, RPA promotes substantially more ions to high energies as compared to other schemes, and eventually even allows for non-exponential energy distributions.”
Pulse shaping
To create optimally shaped laser pulses, Schreiber and colleagues coated one side of a 10 nm-thick diamond-like carbon foil with a foam of carbon nanotubes. When the laser irradiates the nanotube foam, a near-critical-density plasma is formed, which acts like a lens and focuses the traversing laser pulses. “The CNT foam provides a plasma that acts as a nonlinear medium to shape the laser, both temporally and spatially, to become better suited for RPA,” explains Schreiber.
To test their approach, the researchers carried out experiments using femtosecond pulses from the Gemini laser at the Rutherford Appleton Laboratory. Comparing the temporal shapes of an incident laser pulse and a pulse transmitted through the carbon nanotube layer revealed significant pulse steepening, with the pulse rise time reduced to about 4 fs by the nanotubes. The laser intensity is also increased, reaching peak values of more than 10 times the peak vacuum-focused intensity. This extremely steep rising edge, combined with the much higher peak intensity, provides ideal conditions for RPA to occur.
The researchers recorded the ion spectra generated upon firing circularly polarized laser pulses onto diamond-like carbon films with different carbon-nanotube foam thicknesses. They found that ion energies increased with increasing carbon-nanotube foam thickness. The best performance was observed for the thickest layer (5 µm), which increased the maximum energy of accelerated carbon ions by approximately a factor of three over an uncoated diamond-like carbon foil – from 80 to almost 240 MeV.
Boosted output
This maximum energy (20 MeV per nucleon) is significantly higher than that previously attained by laser-driven ion acceleration, and makes experiments on cells with beams of carbon ions feasible for the first time. However, energies of at least 1 GeV will be required for clinical applications – about five times higher than that attained in this work.
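The per-nucleon figure and the remaining gap to clinical energies follow directly from these numbers (a simple sketch of the arithmetic behind the figures quoted above):

```python
e_max_mev = 240        # MeV, maximum carbon-ion energy achieved here
nucleons = 12          # nucleons in a carbon-12 ion
print(f"{e_max_mev / nucleons:.0f} MeV per nucleon")  # 20 MeV per nucleon

e_clinical_mev = 1000  # MeV, the 'at least 1 GeV' clinical threshold quoted above
ratio = e_clinical_mev / e_max_mev
print(f"Energy gap to clinical use: ~{ratio:.1f}x")   # roughly four to five times
```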
According to the researchers, boosting ion energies to this level is not impossible. Future experiments will exploit the 3 PW ATLAS-3000 ultrashort pulsed laser, which will be located at a new Centre for Advanced Laser Applications (CALA) being built in Garching. Combined with the energy enhancement from the nanotube-coated foils, this system could help make laser ion acceleration a more viable tool.
The team also plans to advance from proof-of-principle experiments demonstrating the creation of 20 MeV/atomic-mass-unit carbon ions in a few shots, towards experiments with ion bunches. This will include cell experiments and, in the near future, also small-animal studies.
“In parallel, we are scattering the globe to advertise the state-of-the-art of laser acceleration, to raise awareness among potential applicants in various fields of science,” adds Schreiber. As for whether laser-driven ion acceleration will ultimately enable a low-cost particle-therapy system, Schreiber says that this is a tough call to make. “The challenge is not simply to make a cheaper accelerator – laser acceleration should provide some new quality that is not or hardly accessible by other means,” he says. “This feature is certainly the bunched nature and the synchronicity to other laser-driven sources of radiation. Even the multi-ion species available in one shot could turn out to be a benefit. The next years will be exciting as we approach medically relevant energies and exploit the first applications that utilize these special features.”
Cocoa conch: a chocolate’s distinctive flavour and texture come from “conching”. (Courtesy: iStock/deyangeorgiev)
By Michael Banks, Tushna Commissariat and Matin Durrani
Chocolate, the food of the gods, is more popular now as a sweet treat than ever before. And while more and more people know their 70% cocoa from their truffles, “lecithin” still isn’t a word that pops up often. Yet it is an ingredient that plays a key role in chocolate-making and in many other foods. This fatty substance has long confounded food scientists and confectioners alike – we don’t know how it works at the molecular level, and confectioners have had to rely on observation and trial-and-error methods to perfect their recipes.
Now, though, chocolatiers have had help from an unexpected field – that of molecular biology – to figure out chocolate “conching”, the part of the chocolate-making process where aromatic sensation, texture and “mouthfeel” are developed. In a special issue on “The Physics of Food” published in the Journal of Physics D: Applied Physics, Heiko Briesen and colleagues at Technische Universität München, Germany, use molecular dynamics to model and simulate how lecithin molecules derived from different sources attach to the sugar surface in cocoa butter. “I’m quite confident molecular dynamics will strongly support food science in the future,” says Briesen.
The “wonder material” graphene has another significant quality to add to its impressive list of electrical and mechanical properties: superconductivity. Physicists in Canada and Germany have shown that graphene turns into a superconductor when doped with lithium atoms – a result that could lead to a new generation of superconducting nanoscale devices.
Graphene exhibits a range of remarkable properties, thanks to its special structure – a one-atom-thick hexagonal lattice of carbon atoms. It is far stronger than steel while also flexible, and is an excellent conductor of both electricity and heat. In its pristine form, however, it is not a superconductor.
Coupling Cooper pairs
Neither is pure graphite, but in 2005 physicists showed that graphite could be made to superconduct when chemically treated, so as to create bulk materials consisting of graphene alternated with one-atom-thick layers of another element. The best-performing material thus created, calcium-intercalated graphite (CaC₆), has a superconducting transition temperature of 11.5 K. Theorists identified the underlying mechanism for that superconductivity as electron–phonon coupling. Phonons are vibrations in a material’s crystal lattice that bind electrons together into “Cooper pairs” that can travel through the lattice without resistance – one of the hallmarks of superconductivity. It was then realized that such electron–phonon coupling might be induced not just in bulk graphite compounds but also in single layers of graphene decorated with atoms of a suitable element.
In 2012 Gianni Profeta of the University of L’Aquila in Italy and colleagues used computer modelling to predict that lithium ought to be a particularly good candidate for such doping. This came as a surprise, given that bulk LiC₆ had not been shown to superconduct, but the researchers nevertheless found that the monolayer structure should promote superconductivity in two ways. The additional lattice vibrations generated by the lithium atoms should yield a high density of phonons, they said, while lithium’s donation of electrons to the graphene should strengthen overall electron–phonon coupling.
Lithium decorations
That prediction has been borne out by the latest work, which has been carried out by Andrea Damascelli at the University of British Columbia in Vancouver, together with colleagues in Europe. Damascelli and co-workers prepared their samples by growing layers of graphene on silicon-carbide substrates, and then very precisely depositing lithium atoms onto the graphene – a process known as “decorating” – in a vacuum at 8 K.
The team then studied the properties of the samples using angle-resolved photoemission spectroscopy, which exploits the photoelectric effect to measure the momentum and kinetic energy of electrons in a solid. The researchers found that the electrons were being slowed down as they travelled through the lattice, an effect that they attributed to enhanced electron–phonon coupling. Crucially, they also showed that this greater coupling leads to superconductivity by identifying an energy gap between the material’s conducting and non-conducting electrons – which is the energy needed to break Cooper pairs. At 0.9 meV, the measured value of this gap implies a transition temperature of about 5.9 K – as compared with Profeta and colleagues’ prediction of up to about 8 K.
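The quoted transition temperature can be recovered from the measured gap using the weak-coupling BCS relation 2Δ ≈ 3.53kBTc – an illustrative assumption on our part; the researchers’ own analysis may use a more detailed model:

```python
# Estimate Tc from the superconducting gap via the weak-coupling BCS relation
# 2*Delta = 3.53 * kB * Tc (an illustrative assumption, not the paper's method).
k_B = 8.617e-5  # eV/K, Boltzmann constant
delta = 0.9e-3  # eV, measured gap of 0.9 meV

T_c = 2 * delta / (3.53 * k_B)
print(f"Estimated transition temperature: {T_c:.1f} K")  # about 5.9 K
```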
Further checks
According to Damascelli, this result enhances graphene’s utility as a model system for studying quantum phenomena, as well as showing how a wide range of electronic devices could be connected to one another via a single substrate. Indeed, Patrick Kirchmann and Shuolong Yang of the SLAC National Accelerator Laboratory in California, who were part of a group that last year demonstrated the phonon basis of CaC₆ superconductivity, believe the work might eventually lead to the production of nanometre-sized superconducting quantum interference devices and single-electron superconducting quantum dots, for example. They add, however, that the result must first be confirmed, via the observation of two additional effects: graphene’s complete loss of electrical resistance and its expulsion of external magnetic fields – the Meissner effect – when cooled below the transition temperature. These measurements, says Kirchmann, “are needed to confirm superconductivity and pin down the transition temperature”.
Damascelli says that carrying out these measurements will require a new way of preparing the decorated graphene – one that allows the material to remain stable at ambient conditions while exhibiting macroscopic superconductivity. “We are looking at different elements,” he says, “and at different substrate–graphene combined systems that could aid the retention of the decorating atoms.”
A separate group of researchers – including Hyoyoung Lee, Tuson Park and colleagues at Sungkyunkwan University in South Korea – has observed superconductivity in samples consisting of several layers of graphene doped with lithium. The group reports a transition temperature of 7.4 K, obtained via observation of the Meissner effect.
“The next milestone would be to demonstrate this signature of superconductivity in a single layer of graphene,” says Kirchmann.
As this article went to press, a preprint of a paper on “Superconductivity in Ca-doped graphene” – based on work carried out by Andre Geim, who in 2010 won the Nobel Prize for Physics together with Konstantin Novoselov for their work with graphene, and colleagues – appeared on arXiv.
An international team of researchers has created a variety of ultrathin nanomechanical diffraction gratings fashioned from the “wonder material” graphene. The team wanted to reduce the thickness of such gratings to the ultimate physical limit – that of a single atom. The researchers say that their graphene gratings are 10 times thinner than previous beam splitters for atoms, molecules or clusters, and are even four orders of magnitude thinner than the width of a typical laser grating.
As quantum mechanics allows a particle to simultaneously have a wave-like and a particle-like nature, matter-wave interferometry is an essential way of studying the fundamental nature of such particles and of making precision measurements. There are two basic types of beam splitters used in interferometry: amplitude beam splitters, which are based on photon recoil and are independent of the particle’s position, and wave-front beam splitters – actual mechanical or optical gratings that essentially slice a wavefront.
Slice and dice
Mechanical gratings, which are typically 100–200 nm thick, have been in use since the late 1980s, and these nanomechanical gratings are “universal” in that they can diffract everything from an electron and a neutron to larger molecules and clusters. An optical grating, on the other hand, would require very intense laser power at 200 nm, which would be difficult to produce continuously. So while mechanical gratings seem like the superior choice, they do have one fundamental stumbling block that crops up for more complex particles, such as large biomolecules, which are highly polarizable. Such particles are affected by perturbations that arise from van der Waals forces between the particles and the grating walls. These interactions would “dephase” the interferometry, lowering the resolution of the interference pattern or preventing it from forming at all.
As a way of getting around this, Christian Brand, Markus Arndt and colleagues at the University of Vienna were keen to reduce the thickness of the grating slits as much as possible. “Graphene is the most natural candidate for that, and even though membranes of that material have existed for quite a while by now, no one had sculpted free-standing structures as we needed them,” says Arndt. The Vienna group teamed up with Ori Cheshnovsky and colleagues at Tel Aviv University to fabricate a variety of free-standing 2D gratings.
The team made single-layer graphene, bilayer graphene, single-layer graphene suspended in a silicon-nitride grating and a carbonaceous biphenyl membrane. While such membranes have become increasingly accessible, Brand says that “it had been largely unexplored how to sculpt them on the nanoscale, such that they form stable free-standing masks.” Once the membranes were fabricated, Arndt and colleagues used them as gratings to diffract phthalocyanine molecules – a commonly used blue-green dye that fluoresces when illuminated with a laser – and found that they could obtain fairly high-resolution interference patterns.
Accidental scrolls
The team, however, was in for a surprise when it studied its single-layer graphene gratings using high-resolution microscopy – it found that the membranes had spontaneously self-organized to form nanoscrolls. “The graphene nanoribbons had rolled up by themselves to look a bit like an array of papyrus rolls or hollow strings,” says Arndt, who adds that such nanoscrolls are interesting nanostructures, independent of their matter-wave experiments. “The discovery of the nanoscroll gratings was an accident, but we like it since we were surprised how high a quantum interference contrast we can get with them and how stable gratings they form,” he says. The researchers did manage to produce single-layer membranes that remained flat, by using nanoribbons that were no longer than 250 nm.
The van der Waals interaction is an electrodynamic attraction between neutral particles and neutral grating walls, and is caused by spontaneous quantum fluctuations of the charge distributions. This force attracts the molecules to the grating walls in the researchers’ experiments. While the effectively conservative force does not normally induce decoherence in matter-wave interferometry – because it does not store information about the particle’s path through the grating – it effectively narrows the slit. This has both positive and negative outcomes. The narrowing of the slit means that the particles passing through are diffracted into very high orders – a boon.
Uncertain interactions
However, the van der Waals force also disperses the particles over all these orders, meaning that the signal per diffraction order drops by a similar factor, and this is bad. This low signal is especially a problem when it comes to large and highly polarizable particles, because it brings into play the Heisenberg uncertainty principle. This is because the interaction with the slit can, in theory, make the slit recoil, thereby inadvertently revealing which path the particle took and immediately destroying its quantum nature. For this not to happen, the grating needs to be sufficiently well defined in position space that its momentum uncertainty is larger than any recoil imparted by the diffracted molecule. “What our experiment shows, however, is that high-contrast interference is still possible,” says Arndt. Indeed, this very same scenario was one of the topics that Bohr and Einstein discussed in their famous debates, and the researchers’ findings agree with Bohr’s reasoning.
The researchers say that these experiments are definitive: they do not see how anyone could make thinner masks, and it will be fundamentally difficult to make gratings with a period smaller than 50 nm. Current mechanical techniques do not have the resolution to go lower, Arndt says, and the van der Waals interactions “will become a very serious challenge, in particular for more polar particles, even for single layer graphene”, he says. “So the next goal must be to work on slow and cold and yet brilliant molecular beam sources, such that the de Broglie wavelength gets bigger. This is a very demanding challenge and we are investing substantial efforts into that project.”
Discovered in 1932, neutrons have become a powerful tool to probe the structure of materials. Being electrically neutral, these unassuming subatomic particles are non-destructive and can penetrate much deeper into matter than charged particles such as electrons or electromagnetic waves such as X-rays. As such, neutrons provide complementary information to other analytical probes, and are particularly useful for detecting light atoms and for distinguishing neighbouring elements in the periodic table.
ISIS is located at the Rutherford Appleton Laboratory in the UK and is a world-leading neutron source for research in the physical and life sciences. The facility also produces muons, which provide a complementary probe to neutrons for studies of magnetism, superconductivity and charge transport. Using a suite of dedicated instruments, ISIS produces precisely tailored beams of neutrons and muons that tell researchers where atoms in a sample are and what they are doing, allowing new materials and even integrated devices to be designed. In terms of scientific output, ISIS underpins more than 400 publications in peer-reviewed journals every year, spanning topics from alternative fuels for transportation to new pharmaceutical compounds.
Growing demands
Since ISIS produced its first neutrons 30 years ago, the facility has grown and evolved. In 2008 a second target station (TS2) was added to meet the growing demands of the bioscience and advanced-materials communities, for example. Most recently, ISIS has emerged from a six-month shutdown during which key parts of the vacuum system were refurbished to keep pace with user demand.
Unlike reactor-based neutron sources, such as the Institut Laue-Langevin in France, ISIS is an accelerator-based source – one of the first of its type. Neutron production begins with an ion source that sends negatively charged hydrogen ions to a radio-frequency (RF) quadrupole accelerator, where they are focused, grouped into bunches and accelerated. The ions are then passed through a 50 m-long linear accelerator (linac), which boosts their energy to 70 MeV. The negative hydrogen ions are then stripped of their electrons by passing through an alumina foil, leaving protons, which are accelerated in bunches to 800 MeV in a 163 m-circumference synchrotron. Finally, neutrons are created via spallation: the high-energy protons are fired into a tungsten target, producing a wide angular spread of neutrons. These are then channelled to various instruments surrounding the target, where different experiments are carried out by more than 1000 individual users each year. Muons are produced by passing the protons through an intermediate graphite target before they reach the tungsten target.
Vacuum technology is critical for ISIS, both for minimizing beam-scattering effects in the linac and synchrotron, and for ensuring that neutrons and muons that have bombarded a sample do not undergo any further scattering before they reach the detectors. There are 25 turbo pumps in the linac, with pumping speeds ranging from 300 to 2000 l/s. The synchrotron is pumped by 54 ion pumps, which are 400 l/s triodes. Special RF-screened and all-metal gate valves are used to minimize beam disturbance and to isolate the different parts of the vacuum system. Residual gas analysers monitor vacuum systems in the linac and also keep the gas composition in each of the two target stations in check. All this equipment is operated and monitored using control systems designed and maintained by in-house support staff. While vacuum pumps and gauges with on-board electronics are becoming increasingly popular, such equipment cannot be used on parts of ISIS because of the effects of radiation.
Both the linac and synchrotron operate in the high-vacuum region, typically 5 × 10⁻⁶ to 10⁻⁷ mbar. In addition to minimizing beam-scattering effects, this level of vacuum allows high voltages to be applied without the danger of electrical breakdown. Indeed, were the vacuum to fail to reach the required operational levels, then ISIS would not be capable of producing any neutrons or muons at all. Stainless-steel and ceramic vacuum vessels are used in the linac and synchrotron, with aluminium or indium seals used to join the vessels together. Quick-release clamps are used to minimize radiation exposure to personnel during maintenance.
A dedicated staff of just five is needed for the running and maintenance of all ISIS vacuum equipment used in the linac, synchrotron and target stations, as well as on more than 25 instruments that are operational on the two target stations. The same team is also responsible for the manufacture of extremely delicate stripping foils used in the synchrotron and for carrying out rigorous tests of materials and equipment for vacuum compatibility prior to installation.
On target Installation of new magnets and vacuum pipework on ISIS Target Station 1. (Courtesy: STFC)
The ISIS vacuum system runs 24 hours a day, seven days a week, so regular maintenance is essential for ensuring the long-term reliability of the system. To achieve this high standard, the entire facility undergoes a scheduled six-month maintenance shutdown approximately every four years. Shorter 10–15 day shutdowns also occur every six to eight weeks of operation. The long shutdowns are not just an opportunity to carry out general maintenance on vacuum equipment such as pumps and gauges; they are also a chance for the major upgrades necessary to keep ISIS at the forefront of neutron science. Detailed planning and consultation take place in the months prior to a long shutdown, and a full-size mock-up of part of the working area is constructed to help staff get to grips with the space constraints that they may find themselves working under.
ISIS emerged in February from its latest long shutdown, during which major changes were made to the systems that deliver the proton beam into Target Station 1 (TS1) and also to the “EC” muon instruments. Eight new quadrupole magnets, along with the vacuum beam pipe that fits in the centre of each magnet, were successfully installed. Four of these magnets lie on each side of the 7 mm-thick graphite intermediate target where muons are created. New and more reliable beam-positioning monitors were also installed to minimize beam losses that may occur during beam set-up.
Efficiency improvements
Following a detailed in-house review, the thin aluminium window that had previously separated the muon instruments from the intermediate target was also removed. This window had survived for more than 20 years on ISIS and was probably installed to protect the muon instruments from sudden pressure rises arising from either the intermediate or main targets. Its removal, in addition to further upgrades to the remaining magnets and vacuum pipework planned over the next 24 months, should result in at least a twofold increase in the overall muon flux. This will significantly reduce the time taken to acquire data from samples, allowing more experiments to be carried out in the time allocated to each user.
During the long shutdown, work also continued on the installation of two new instruments on TS2. IMAT and ZOOM were part of a suite of four new instruments funded by a £21m award from the UK government in 2011, following an initial TS2 investment of £130m. The ZOOM vacuum tank, containing a moveable detector, has a volume of almost 50 m³ and will be used for small-angle neutron-scattering experiments. IMAT, which will be used for imaging and diffraction experiments, has the longest vacuum guide section (approximately 46 m) of all instruments on ISIS. This length is needed to optimize the energy resolution of the instrument. Both instruments will undergo detailed commissioning in preparation for user experiments later in the year.
One of the biggest challenges of working on ISIS is pumping down vacuum vessels to operational levels quickly. Long pump-down times mean less beam time for users and, in the worst case, could prevent ISIS operations altogether if a problem were to occur with the vacuum systems in the linac, synchrotron or target stations. These systems were not designed to be “baked out”, which involves heating surfaces to above 100 °C to drive out water. As a result, out-gassing from water vapour can sometimes be a problem thanks to water’s tendency to stick to exposed surfaces. At the top of the ISIS vacuum group’s wish list is therefore a compact, dry, air-cooled vacuum pump that can pump at speeds of at least 100 m³/h at atmospheric pressure and can also deal effectively with large quantities of water vapour.
With more instruments to be installed on TS2 and further vacuum upgrades planned throughout the facility, planning has already started for the next major shutdown. This will ensure that ISIS continues to meet the growing demands of both academic and industrial users, and remains at the forefront of neutron and muon science for many years to come. These developments, along with the expertise gained during 30 years of ISIS operation, are also proving vital for the development of new accelerator-based neutron sources, in particular the European Spallation Source under construction in Sweden, in which the UK is a key partner.
It is remarkable to think that less than a century ago, humans had no concept of the enormity of the cosmic world around us. A few hundred years before that, we also had no concept of the minuscule scale of the microscopic world within us. Over a comparatively short period of time, therefore, the world as we understand it has grown tremendously in scale, both small and large. But how has this broader understanding reshaped our search for meaning and our perception of humanity’s role in the cosmos?
In The Copernicus Complex: the Quest for Our Cosmic (In)significance, author Caleb Scharf takes us on a thought-provoking journey through the history of human perspectives on the universe, as well as our modern understanding of our place in it. As its title implies, this book is an exploration of the Copernican principle, which states, roughly, that humans should not expect to find ourselves in a special place in the universe – we are not privileged observers. But in many ways, the book is also a rebellion against this idea. Having been knocked off our pedestal (where we’d been comfortable in our delusion of being the central beings in the universe), Scharf argues that we’ve taken the principle of mediocrity too far, to the extent that any hint that we’re special is seen as a hubristic violation of the Copernican dictum. Yet there are ways in which our Earth and our existence really are special, and Scharf encourages us to “find a way to see past our own mediocrity”.
These days, it is hard to imagine just what an enormous leap it was to declare that the Earth spins and moves through space, or what a shock it was to discover that the seemingly smooth Milky Way was made of stars. Much of Scharf’s book is spent explaining the amazing depth of knowledge we now have about the formation of the solar system, planets, stars, galaxies and even the very matter we are made from. Throughout this story, though, Scharf places scientific discoveries alongside developments in philosophy and the human side of scientific endeavour. His descriptions even explore occasions when human imagination has beaten science, and he smoothly juxtaposes discussions of fictional worlds such as Narnia and Star Wars with hard-core astrophysics.
The result is a book that (if I may borrow a phrase from Douglas Adams) speaks to the “fundamental interconnectedness of all things”. When describing how computers can calculate planetary trajectories around stars, for example, Scharf links the silicon in the computers to the reactions in the stars whose orbits the computers are calculating. Throughout the book, readers get a beautiful sense of the circularity of existence.
One of my favourite aspects of this book was the way Scharf explores all dimensions of our place in the universe. Most popular treatments of cosmology look up and say “Wow, look how big!” Scharf’s book does this, too; however, it also looks down through the microscope and says “Wow, look how small!” The book opens with the story of Antonie van Leeuwenhoek, the 17th-century Dutch scientist who looked through a primitive microscope at a drop of water and saw creatures living inside it. At the same time telescopes were revealing the scope of the cosmos, microscopes were revealing the surprising world of life on tiny scales, and in substances such as water that we had always assumed were devoid of life. For me, it provokes a question: When we find life on distant planets, will it be more surprising than discovering life through a microscope? Or less?
Scharf doesn’t stop after exploring the extremes of size. He also explores the extremes of time and even the extremes of life. Our significance, he argues, hinges not only on where we stand in the spectrum of life on this planet, but also on our place among the potential life that might exist somewhere else in the universe. But just how fertile is the universe exactly? Are we alone, or only one among many? And if other life exists, how different from us can it be before it ceases to be “like us”? An answer to this question would do more than anything else to reveal how (in)significant we really are.
Scharf’s book is an amazingly thorough, yet accessible, exposition of our knowledge of the formation of the universe and the evolution of everything in it. He doesn’t skimp on the detail, but it never feels like you’re being inundated with minutiae. Rather, you feel as if you’re being led by the hand through the forest, discovering new trees and lush vistas at every turn in a series of “wow” moments, where each step on the journey nevertheless feels like a logical consequence of the one before.
As I neared the end of the book, I worried that I would be presented with some wishy-washy conclusions or rampant extrapolations. But my concerns were unfounded. Instead, the punchline of Scharf’s exploration of our place in the cosmos reminded me of an anonymous quotation that has haunted me ever since I read it when I was a teenager: “You are absolutely unique, just like everybody else.” Or, as Scharf puts it, we are “special but not significant, unique but not exceptional”. With these phrases Scharf succinctly summarizes the intrinsic conflict between the fact that some of our circumstances are indeed special (in the sense that, had they been otherwise, life as we know it could never have existed) and the fact that, according to the Copernican principle, we should expect to be generic.
Crucially, Scharf also tackles the important question of not only what we know, but what is knowable. If our species had developed under an atmosphere clogged with opaque gas, he notes, we would never have seen any stars, and it would have been much harder (though not impossible) for us to discover the nature of the universe around us. Indeed, if we had developed at another time and place in the evolution of the universe, we might have had still more fundamental limitations on our knowledge. In the distant future, the universe will have expanded so much that our descendants, if we have any, will no longer be able to see any other galaxies, and the afterglow from the Big Bang will have faded into nothingness. At that point, it will be pretty much impossible for an intelligent being to learn that it exists in an expanding universe that originated in a Big Bang. All of which makes one wonder: what questions are we neglecting to ask because our circumstances have never prompted them? This may be the ultimate limit to discovering our cosmic (in)significance.
The European Geosciences Union (EGU) is the professional body for, erm, European geoscientists, so naturally its blog network is home to a bunch of blogs about geoscience. The network began in 2012 with just three blogs: GeoSphere (general geosciences), Green Tea and Velociraptors (palaeontology) and Geology for Global Development (social and policy issues relating to geology and natural risks). Since then, it has added five others, including several with close links to physics.
Who is behind it?
Most of the bloggers in the EGU network are early-career researchers or PhD students. The author of GeoSphere, for example, is Matt Herod, a PhD candidate in isotope geochemistry at the University of Ottawa, Canada. One of the newer blogs on the network, Polluting the Internet, is written by an atmospheric scientist, Will Morgan, who is now a postdoc at the University of Manchester. Two other EGU blogs, Geology Jenga (interdisciplinary topics) and Between a Rock and a Hard Place (planetary and earth sciences), have multiple authors, all of whom are (or were until recently) PhD students in the geosciences. The exception to the rule is An Atom’s-Eye View of the Planet, which focuses on how atomic-scale behaviour helps determine the Earth’s physical and chemical properties. Its author is Simon Redfern, a professor of mineral physics at the University of Cambridge.
How often are these blogs updated?
Individually, not that often, which is why we’ve grouped them together rather than writing about each of them separately. Collectively, though, the EGU authors usually produce one or two posts a week, and the main network page pulls in the most recent posts from all eight blogs. Hence, if there are lots of areas of geoscience that tickle your fancy (or if you don’t mind scrolling past the ones that don’t), the network page is the one to add to your bookmarks. And remember, quantity isn’t everything: the network’s least-active blog, Four Degrees, has been updated less than once a month since its 2013 founding, but each post is a long, richly illustrated and copiously cited essay on an important topic in environmental science, energy or policy.
Can you give me a sample quote?
From a December 2014 post on GeoSphere about “a very near miss by the Italian justice system” regarding a group of geochemists from the University of Siena who carried out an environmental study of two military firing ranges: “One of the goals…was to determine if DU [depleted uranium munitions] had been used. On the face of it the task seems simple enough: analyse soil, plants and water for uranium and its isotopic ratio and other potential contaminants from the munitions range (of which are there many). However, the complicating factor in all of this is the fact that adjacent to the firing range is an abandoned mine site called Baccu Locci. So the real question then becomes, which is it? Mine waste or DU or other military contaminants? Their findings were that there was no contamination from DU in the region. These results met with extreme opposition from the local prosecutor who acted on the advice of a nuclear physicist from the University of Brescia who felt that geochemistry was not the proper way to investigate this problem and that the University of Siena scientists were hiding something. The geochemists were charged with two crimes in connection with their results.”