High-energy neutrinos detected by the IceCube experiment in Antarctica are equally distributed among the three possible neutrino flavours, according to two independent teams of physicists. Their analyses overturn a preliminary study of the data, which suggested that the majority of the particles detected were electron neutrinos. The latest result is in line with our current understanding of neutrinos, and appears to dash hopes that early IceCube data point to “exotic physics” beyond the Standard Model.
Located at the Amundsen–Scott South Pole Station, the IceCube Neutrino Observatory is a large array of photodetectors buried in ice. In late 2013 IceCube revealed that it had captured the first signals from neutrinos with extremely high energies, which suggests that the particles came from outside of our galaxy. While neutrinos generated inside the Sun and by cosmic rays colliding with the Earth’s atmosphere have been detected for many years, neutrinos from much farther away had remained elusive. As a result, the discovery was named the Physics World Breakthrough of the Year in 2013.
Neutrinos come in three different types or “flavours” – electron, muon and tau – and change or “oscillate” from one type to another as they travel across long distances. For neutrinos that have travelled arbitrarily large distances, we expect to see nearly equal numbers of each flavour when they reach Earth – that is, an electron:muon:tau ratio of about 1:1:1. Depending on how the neutrinos were produced, there will be small deviations from this equal-flavour distribution, and these deviations should give us information on how and where the neutrinos were produced.
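To see why 1:1:1 is the generic expectation, note that over astronomical baselines the oscillation terms average out, leaving P(α→β) = Σi |Uαi|²|Uβi|², where U is the PMNS mixing matrix. Below is a minimal numerical sketch of that averaging, assuming illustrative mixing angles close to the measured values (with the CP phase set to zero for simplicity); it is a back-of-envelope check, not the collaborations’ analysis:

```python
import numpy as np

# Illustrative PMNS mixing angles (close to measured values); the exact
# numbers barely change the qualitative 1:1:1 result.
th12, th23, th13 = np.radians(33.5), np.radians(45.0), np.radians(8.5)
s12, c12 = np.sin(th12), np.cos(th12)
s23, c23 = np.sin(th23), np.cos(th23)
s13, c13 = np.sin(th13), np.cos(th13)

# PMNS matrix in the standard parametrization, CP phase set to zero
U = np.array([
    [c12*c13,                  s12*c13,                 s13    ],
    [-s12*c23 - c12*s23*s13,   c12*c23 - s12*s23*s13,   s23*c13],
    [s12*s23 - c12*c23*s13,   -c12*s23 - s12*c23*s13,   c23*c13],
])

# Distance-averaged oscillation probabilities:
# P[a, b] = sum_i |U[a, i]|^2 |U[b, i]|^2
P = (U**2) @ (U**2).T

# Pion-decay sources emit e:mu:tau in the ratio 1:2:0
source = np.array([1.0, 2.0, 0.0]) / 3.0
print(P.T @ source)   # ~[0.35, 0.33, 0.32] at Earth - close to 1:1:1
```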
Abundance of electron neutrinos
In 2014 Olga Mena, Sergio Palomares-Ruiz and Aaron Vincent of the University of Valencia in Spain did an independent analysis of IceCube data from 2010 to 2012, and concluded that the best-fit flavour ratio was 1:0:0 – an abundance of electron neutrinos with no muon or tau neutrinos present. If true, this unexpected result would mean that rare neutrino decays were taking place or that the detected particles were mixing with a fourth and very hypothetical “sterile” neutrino. In both cases, the discovery could have pointed to exotic physics beyond our current understanding.
IceCube detects neutrinos of all three flavours when they produce small showers of particles as they interact within the detector. However, muon neutrinos and a small fraction of tau neutrinos also produce a highly energetic muon that is visible as a track as it travels across the entire detector. “So if we observe such a track, we can tell an event was either a muon or tau neutrino, but not which,” says Gary Binder, who is a physicist at the University of California, Berkeley, and part of the IceCube collaboration. He explains, “We can’t tell for certain what flavour produced a given event, but we can do a statistical analysis on the distribution of showers and tracks to estimate the abundance of each flavour.”
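The following toy calculation illustrates that statistical argument; it is not IceCube’s actual likelihood analysis. The per-flavour track probabilities and the track/shower split of the events are invented for illustration, and the output shows the electron/tau degeneracy Binder describes:

```python
import numpy as np

# Assumed, illustrative probabilities that a neutrino of each flavour
# produces a visible muon track rather than a shower (e, mu, tau):
p_track = np.array([0.0, 0.8, 0.2])

# Hypothetical split of 137 observed events into tracks and showers
n_tracks, n_showers = 30, 107
n_total = n_tracks + n_showers
pt_obs = n_tracks / n_total

# Keep every flavour composition whose expected track fraction matches
# the observed one to within ~2 sigma (simple binomial error).
sigma = np.sqrt(pt_obs * (1 - pt_obs) / n_total)
consistent = []
for fe in np.linspace(0, 1, 51):
    for fmu in np.linspace(0, 1 - fe, 51):
        f = np.array([fe, fmu, 1 - fe - fmu])
        if abs(f @ p_track - pt_obs) < 2 * sigma:
            consistent.append(f)

print(len(consistent), "compositions fit the track fraction, e.g.")
print(consistent[0], "...", consistent[-1])
# Tracks alone cannot separate electron from tau neutrinos, which is
# why the real analysis also folds in energy and topology information.
```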
Now, however, Binder and IceCube colleagues have analysed a much larger set of IceCube’s data, collected across 974 days from May 2010 to May 2013. They have identified 137 neutrinos with energies above 35 TeV and found that the neutrinos are equally distributed among the three flavours. “No matter what parameters we adjust, there’s just no way to get all electron neutrinos, so if we did observe that, it would surely indicate new physics affecting cosmic neutrinos as they travel over long distances,” says Binder.
A similar analysis was performed by an independent group in Italy, led by Francesco Vissani and Andrea Palladino at Gran Sasso Science Institute in L’Aquila and the Gran Sasso Laboratories in Assergi. This group focused on neutrinos with energies above 60 TeV, and it also found the data to be consistent with conventional astrophysical models.
Unravelling mysteries
Binder told physicsworld.com, “So far, the flavour ratio is consistent with the equal-flavour assumption, and we don’t have enough precision to measure the small deviations that could tell us about the astronomical objects producing cosmic neutrinos, but we expect that to change very soon.” He adds that they “anticipate the information we gather from flavour studies will help us to resolve the mystery of how these neutrinos are produced and where they come from”.
Binder also points out that there are a number of exotic ideas that predict much larger deviations from the equal-flavour expectation. “Measuring the flavour ratio could give us clues to physics beyond the Standard Model, but so far we haven’t seen evidence for anything exotic yet,” he says.
Vissani adds that IceCube also measures the energy distribution of the neutrinos and this provides yet another important clue about their sources. He points out that “future IceCube analyses can show beyond any doubt that these neutrinos come from cosmic sources, by observing for the first time a tau neutrino”.
The cosmos hides its origins well. For the first 380,000 years of its existence, the entire universe was an opaque stew of hot plasma, full of light and matter. The opacity of this “primordial soup” prevents us from directly observing many important moments in cosmic history, including the first instants after the Big Bang, when the universe may – or may not – have expanded rapidly, increasing its size many-fold in a hypothetical period known as inflation. Yet all is not lost. Observing the very early universe is akin to watching a shadow theatre: while the true action takes place behind a screen, we can nevertheless infer what’s going on by the images cast.
According to many theories, the CMB should contain signs of the universe’s inflationary infancy
In this metaphor, the images are the light known as the cosmic microwave background (CMB). When the cosmos became a transparent gas of atoms, the light from the primordial soup was freed to travel unimpeded all the way to us in the present, providing a remarkably clean view of the early years of cosmic history. Better still, imprinted in the CMB are hints of what came before. Fluctuations in the CMB tell us, for example, the geometry of the universe and how much ordinary and dark matter it contains. And according to many theories, the CMB should also contain signs of the universe’s inflationary infancy.
In 2014 scientists working on the BICEP2 (Background Imaging of Cosmic Extragalactic Polarization) telescope announced they had seen evidence of inflation in the CMB. This claim was met first with enthusiasm, then with scepticism as data from the space-based Planck observatory demonstrated that the signal interpreted as inflation could, in fact, be entirely accounted for by dust grains in the Milky Way. It’s as though the shadow-theatre screen were dirty, making it hard to tell what’s really going on behind it.
But while it’s wrong to say that BICEP2 discovered evidence for inflation, it’s just as wrong to say that Planck ruled it out. Dust never settles for long on cosmology, and now that researchers have a much better idea of what the observational challenges are, they are hard at work addressing them. “Sometimes people express more pessimism than I think that they should, because there are many experiments going forward,” says Renée Hložek, a cosmologist at Princeton University. The experiments Hložek mentions include an upgrade to the South Pole-based BICEP2 (called, a bit unimaginatively, BICEP3); the Atacama Cosmology Telescope polarization project (ACTPol) in Chile; and a balloon-borne telescope called Spider, which flew high in the Antarctic sky in January.
These experiments are designed to compensate for the weaknesses of the Planck and BICEP2 observations. Between them, they should tell us within the next few years whether there really is any sign of inflation hiding in the CMB, as their observations are combined and compared to give us our clearest view yet of the shadow-play that is the early cosmos.
Inflation’s ‘smoking gun’
Despite their different characteristics, all of these next-generation experiments are looking for the same thing. According to inflationary theory, the extremely rapid expansion during the first tiny split second after the Big Bang – during which the universe grew perhaps by as much as a factor of 10⁶⁰ – should have created primordial gravitational waves, which are turbulent churnings of the structure of space–time itself. Those waves stretched and compressed the cosmic plasma, affecting the way it interacted with the light infusing it. The result: a subtle but distinctive curl in the polarization of CMB light. Because the form of the polarization looks like the twisting of a magnetic field, or B-field, the gravitational-wave polarization is known as a “B-mode”.
Generally speaking, though, polarization is just the orientation of the electric field in the electromagnetic wave that is light, and many different things can affect that orientation. And therein lies the rub. Light from the CMB took 13.8 billion years to travel to us, and it encountered a lot of things on its journey: galaxies, clusters of galaxies and finally dust inside our own Milky Way. As researchers with the Planck observatory discovered to the dismay of many, light emitted from spinning dust grains in our galaxy can mimic the polarization patterns from primordial gravitational waves. If that wasn’t bad enough, the joint analysis of Planck and BICEP2 data showed that dust could possibly account for the entire reported inflationary signal.
But could is the operative word. “It’s now known how much dust is there, so we know what we have to deal with,” says Arizona State University cosmologist Sean Bryan. That knowledge helps, because galactic dust glows more brightly in some frequencies than others. And whereas BICEP2 operated primarily at a single frequency of microwave light, the next wave of observatories scans the sky at multiple frequencies. By comparing polarization data taken at frequencies where the dust signal is strongest to data taken where it’s least important, researchers can effectively subtract its effects. That leaves – maybe – the telltale sign of primordial gravitational waves.
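The cleaning step can be illustrated with a toy two-band example. In thermodynamic temperature units the CMB contributes equally in every band, while dust scales with frequency; if that scaling is known, a linear combination of two maps cancels the dust exactly. The power-law scaling, the noise-free mock maps and the band choices below are illustrative assumptions, not a fitted Planck dust model:

```python
import numpy as np

rng = np.random.default_rng(1)
npix = 1000
cmb = rng.normal(0.0, 0.3, npix)    # mock CMB polarization signal (uK)
dust = rng.normal(0.0, 1.0, npix)   # mock dust template (uK at 353 GHz)

def dust_scale(nu_ghz, beta=1.6, nu0=353.0):
    """Assumed illustrative power-law frequency scaling of dust."""
    return (nu_ghz / nu0) ** beta

# Mock sky maps at two observing bands: same CMB, differently scaled dust
s150 = cmb + dust_scale(150.0) * dust
s220 = cmb + dust_scale(220.0) * dust

# Linear combination chosen to cancel the dust term exactly
f150, f220 = dust_scale(150.0), dust_scale(220.0)
cmb_clean = (f220 * s150 - f150 * s220) / (f220 - f150)

print("rms dust residual after cleaning:", np.std(cmb_clean - cmb))  # ~0
```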
Hot and cold The Atacama Cosmology Telescope (left) collects data from the Chilean desert. Its detector must be chilled to a fraction above absolute zero using a cryostat (right) to see light from the cosmic microwave background. (Courtesy: ACT/Princeton)
Many eyes in the sky
Bryan has reasons to be excited by that prospect. As a graduate student at Case Western Reserve University, he helped develop the instruments for Spider, the airborne experiment that launched near Antarctica’s McMurdo Station on New Year’s Day this year. (“Spider” was originally an elaborate acronym of the type favoured by cosmologists, but it became the experiment’s official name as the mission evolved.) Suspended beneath its building-sized balloon, Spider travelled through the upper atmosphere for 16 days, scanning the sky with a telescope 30 cm across. After the researchers retrieved the data from the ice, they got to work analysing it, and they hope to announce their findings by this autumn.
Spider uses two sets of detectors that operate under the same principle as many 3D movie glasses: one set is sensitive to vertically polarized light, while the other picks up horizontal polarization. But instead of combining two slightly different images to create the illusion of depth, researchers with Spider use images from each detector to reconstruct the variations in polarization across the sky.
BICEP3, for its part, operates on very similar principles. As Bryan points out, the successor to BICEP2 is nearly the same instrument as Spider: the telescopes are the same, they scan the same frequencies (90, 150 and 220 GHz) and they were built with identical detectors. In both instruments, these detectors are chilled to a fraction of a degree above absolute zero; because the light from the CMB corresponds to a temperature of 2.7 K, the detectors in Spider and BICEP3 have to be colder than that to see anything at all, and the colder they are, the more accurate their results will be.
Flying high The balloon-borne Spider telescope collected data from high above Antarctica. Its onboard GoPro camera caught this scene of Earth and space. (Courtesy: The Spider team)
The main difference lies in where they are designed to operate. Spider flew approximately 34 km above the ground, where the air pressure is 0.1% of what it is at sea level. While it did not travel to space per se, photos from Spider’s onboard GoPro camera show a space-black sky above, even as the ground below is in full Antarctic summer sunlight. BICEP3, meanwhile, is located on the ground at the South Pole. As Bryan points out, ground-based telescopes must account for the small amount of light coming from the atmosphere itself and subtract it (much as visible-light telescopes must do to compensate for stars “twinkling”); Spider flew sufficiently high that it didn’t need to perform that subtraction.
The Atacama Cosmology Telescope, on the other hand, is an entirely different beast. A 6 m-diameter instrument, it operates at five frequencies (30, 40, 90, 150 and 230 GHz, overlapping the BICEP3 and Spider range) and Hložek explains that its mission is quite broad. As well as looking for signs of inflation in the CMB, it will also investigate topics such as what the first stars were like and how the universe shifted from the neutral, opaque primordial soup to the thinner, mostly ionized environment we observe today.
But while the search for primordial gravitational waves makes up only part of the ACT’s mission, it is nevertheless in a prime position to contribute to the hunt. The ACT points at a fixed angle relative to the ground, but it can rotate from side to side, and since different parts of the sky come into its field of vision over the course of a year, it will eventually map a much broader region than either BICEP3 or Spider in microwave light. The ACT’s location in the high, dry Atacama Desert also gives it a different swath of the sky to observe compared with the Antarctica-based Spider and BICEP3, but crucially, it also overlaps, in part, with the relatively small BICEP3 field of view. “It’s actually really important to have multiple experiments looking for slightly overlapping multiple ranges of the sky, because that allows us to independently check stuff,” says Hložek. The cosmic-dust component of BICEP2’s signal, she notes, was first identified by comparing Planck and BICEP measurements of the same patch of sky.
Best and worst cases
This suite of well-designed experiments makes Hložek optimistic that we’ll find the polarization footprint of inflation, if it’s there to be found. University College London cosmologist Hiranya Peiris is among those eagerly awaiting the results. Peiris is deeply involved in both the theoretical and observational sides of the quest to understand the very early cosmos, and she is particularly interested in inflation, which she describes as less a theory than a whole set of mathematical models. While inflation predicts that there will be primordial gravitational waves, she explains, different models of inflation predict different amplitudes. That means it’s important to have specific predictions that can be tested against real data from the observatories, Peiris says – but it also means that the polarization signal from inflation could be too weak for even the next generation of experiments to detect it.
In many ways, that would be the worst-case scenario for inflation researchers, because the lack of a clear signal wouldn’t rule out inflation entirely – it would just keep us in doubt for longer about what happened in the moments after the Big Bang. Even in that case, though, Peiris still sees a benefit. “Just to rule out that class of models that gives you the high-scale inflation is a very significant achievement in itself,” she says. And in the best-case scenario, she adds, studies of cosmic inflation would give us a direct view into the kind of “very very high-energy physics that we can’t probe in the laboratory”. With that prize in mind, Peiris says that if problems such as cosmic dust limit the progress that can be made from the ground, “there is a call here to think about the next-generation space-based experiment”.
When faced with the challenges of observation, Bryan keeps in mind how marvellous it is that we’re studying the early universe at all. “Stepping back, it’s really exciting to make a measurement that even relates to this,” he says. Our ultimate cosmic origins may be hiding now, but the cleverness of cosmologists may yet pull aside the shadow-theatre screen to reveal inflation – or something unexpected.
The first images of thunder, produced by visualizing the sound waves generated by artificially triggered lightning, have been taken by an international team of researchers. The novel experimental approach provides an entirely new way of investigating lightning, which may help to answer outstanding questions about the physics that underlies this intense natural phenomenon.
Although bolts of lightning strike the Earth more than four million times each day, much of the specific physics behind this process remains a mystery. “While we understand the general mechanics of thunder generation, it’s not particularly clear which physical processes of the lightning discharge contribute to the thunder we hear,” says group leader Maher Dayeh, of the Southwest Research Institute in the US, who developed the new method together with colleagues in Australia and the US. Dayeh explains that some outstanding questions include how lightning is initiated, what controls its movement through the atmosphere, and how it strikes objects near the ground.
Forked leaders
Lightning strikes begin with the build-up of electrostatic charges in storm clouds – these form channels of negatively charged, ionized air, or “leaders”, which fork downwards. When they reach the ground, these leaders create a bridge of low resistance, through which discharge can occur – with positive charge racing up the channel in a series of nearly instantaneous return strokes. “When lightning does strike something, very large currents flow, heating up the channel to about 27,760 °C. The hot channel rapidly expands, making the thunder that we hear and can measure,” explains Joseph Dwyer, of the University of New Hampshire, who is part of the team. He adds that because “the thunder is created close to the time that all this is happening, it provides a window into what’s going on when lightning strikes”.
As lightning’s unpredictable nature makes it difficult to study in the field, the researchers conducted their experiments on artificially triggered strikes instead. To generate lightning, the team launched small rockets into storm clouds as they passed overhead. The rockets had long, trailing copper wires attached – these provided a conductive channel through which lightning would predictably strike – onto which the team could focus their instruments.
Lightning rockets
The rockets were launched from the International Center for Lightning Research, which is based in Florida in the US. The centre’s location takes advantage of the state’s record high frequency of lightning strikes, with the geography of the Florida peninsula promoting the formation of warm, humid updrafts, which generate very active thunderclouds at high altitudes.
To record the acoustic signature of thunder, Dayeh and colleagues designed a large array of 16 microphones spaced one metre apart. These were lined up 95 metres from the launch pad where the lightning would hit. Following each strike, post-processing and directional-amplification techniques were used to convert the recordings into a vertical acoustic profile of the lightning bolt. Because sound waves from higher up in the atmosphere take longer to reach the receivers, each return-stroke signal has a characteristically curved appearance. In their study, the researchers imaged strikes with at least nine separate return strokes and found that the loudest part of the thunderclap comes from where the bolt meets the ground.
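The curved signature follows from simple travel-time geometry: sound emitted at height h on the channel, a horizontal distance D from the array, arrives after t = √(D² + h²)/c, so later arrivals map to greater heights. A minimal sketch of that mapping, using the 95 m stand-off from the study and otherwise illustrative numbers (this is the geometry only, not the team’s directional-amplification processing):

```python
import numpy as np

C_SOUND = 343.0   # speed of sound in air (m/s), typical near-surface value
D = 95.0          # horizontal distance from the strike to the array (m)

def arrival_time(h):
    """Travel time from a source at channel height h (m) to a microphone."""
    return np.sqrt(D**2 + h**2) / C_SOUND

def height_from_time(t):
    """Invert the geometry: source height implied by arrival time t (s)."""
    r = C_SOUND * t
    return np.sqrt(max(r**2 - D**2, 0.0))

for h in (0.0, 200.0, 500.0, 1000.0):
    t = arrival_time(h)
    print(f"h = {h:6.0f} m  ->  t = {t:5.3f} s  ->  "
          f"recovered h = {height_from_time(t):6.1f} m")
```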
Craig Rodger – a physicist at the University of Otago in New Zealand who was not involved in this study – commends the work for offering a new way to investigate lightning, which he expects will lead to interesting new knowledge. “Even now, we are learning totally new things about lightning and the processes that take place during a discharge,” he says.
With their proof-of-concept study complete, Dayeh’s team is now working on refining its technique to better explore the outstanding mysteries around lightning generation. One potential avenue of investigation, for example, would be to trigger lightning with a more natural, zigzag appearance – rather than the straight form created by the wires in the current set-up. From this, the researchers might distinguish between the different components of lightning – current pulses, discharge-channel zigzags and step-leader branches – and analyse their acoustic signals independently.
The research was presented earlier this week at the 2015 Joint Assembly, and is due to be published in Geophysical Research Letters.
Peter Woit is lauded by some for having the courage to speak the truth to the physics establishment, while others see him as an enemy of science. Woit writes the Not Even Wrong blog, which has the same title as a controversial book he once wrote about the merits of string theory. In an article in the latest issue of Nautilus, Bob Henderson profiles Woit and his three decades of doubt over various incarnations of the theory that culminated about 10 years ago in the “string wars”. Henderson’s article is called “The Admiral of the String Theory Wars” and provides a fascinating insight into how the rise of string theory caused Woit to switch from physics to mathematics and his relationships with string theorists – some of whom work in the same building as Woit at Columbia University.
Biocompatible silicon nanoneedles, which can efficiently deliver nucleic acids and nanoparticles into biological cells without damaging them, have been developed by an international team of researchers. The porous needles are capable of delivering these therapeutics into live cells that are normally difficult to penetrate; the technique could help damaged organs and nerves to repair themselves, and the needles could also act as intracellular pH sensors.
The researchers, based at Imperial College London and the Houston Methodist Research Institute in Texas, made their nanoneedles using photolithography techniques. The structures can be patterned onto standard silicon chips in different ways, and the length and width of the needles can also be adjusted. Because the needles are porous, they can take up significantly more nucleic acid, nanoparticles and other therapeutics than solid needles could. Importantly, the porous silicon from which they are made is biocompatible – unlike ordinary silicon – and it clears the body in about two days, without leaving behind any toxic residue.
The plasma membrane and “endo-lysosomal compartment” of a cell are major biological barriers that limit the therapeutic efficiency of many drug-delivery vehicles by preventing nanostructures from entering the cells. According to team member Ennio Tasciotti from the Department of Nanomedicine at the Houston Methodist Research Institute, the new nanoneedles can “successfully deliver nucleic acids into cells, bypassing their plasma membrane and endo-lysosomal compartments without damaging the cell”.
New vessels
The researchers, co-led by Molly Stevens of Imperial College, found that their nanoneedles could be used to deliver DNA and quantum dots into live human cells in the laboratory. They also found that they could deliver nucleic acid into the back muscles of mice. After just a week, they noticed that new blood vessels had grown in the animals’ muscles, and that these vessels continued to form over a further two weeks. The technique did not cause inflammation or any other harmful side effects.
Nucleic acids are the building blocks of all living organisms – they encode, transmit and express genetic information. If delivered into live cells, using the nanoneedles, for example, they could re-programme cells to make them carry out various functions. Such genetic programming could allow for personalized medical treatments for patients in the future.
Delivering quantum dots
The nanoneedles can deliver nucleic acids into cells – a process that is normally difficult. At present, gene reprogramming and neuronal gene transfer are usually done with retroviral vectors, a technique that is complicated and expensive. Quantum dots – tiny specks of semiconductor material only a few molecules in size – can be used to monitor microscopic processes, including those occurring inside biological cells. They are easy to track inside a cell because they fluoresce brightly, but getting them into a cell in the first place is not easy. The new nanoneedles could help to overcome this problem.
‘Flexible bandages’
In the future, the team hopes that its nanoneedles could be used to treat damaged nerves and promote nerve reconstruction. Stevens says that she and her colleagues are now hoping to combine their nanoneedles with various biomaterials to make “flexible bandages” that could be applied to different parts of the body, either internally to the tissue of interest or externally onto the skin. These bandages would deliver the nucleic acids needed to repair and reset cell programming. Although still a long way off, such bandages could ultimately help to repair damaged tissue. They might also be doped with metals to become conductive, making implantable, resorbable electronics possible.
Bright spark: Giana Phelan of OLEDWorks shows off some of the company’s wares.
By Robert P Crease in New York
“One well-lit place” is the best way to describe the exhibition hall at Javits Center in New York when it opened on Tuesday morning. I fully expected to be bedazzled at every turn because the venue is hosting LIGHTFAIR, the world’s largest lighting technology trade fair, and so the hall is packed with more than 600 booths designed to highlight, so to speak, the world’s lighting revolution.
Walkers on the Cleveland Way footpath in the north-east of England get to enjoy not only the heather-covered North York Moors but also some stunning coastal scenery. Nowhere are the views more dramatic than at Boulby Cliff – the highest point on the east coast of England at a shade over 200 m above sea level. What most walkers won’t realize, however – as they wander the cliff-tops and breathe in the fresh North Sea air – is that more than a kilometre beneath their feet exists a hive of human activity. In the cavernous excavated tunnels far below, by the light of their headlamps, hundreds of people go about their day-to-day business.
Boulby Mine was established in the late 1960s to take advantage of rich seams of rock salt and potash – a soluble fertilizer containing potassium. The mine has been productive ever since, and now consists of a network of roadways and caverns extending out under the sea. More than 1000 km of tunnels have been excavated since operations began.
But since the early 1990s, miners have shared their workplace, as well as their commute – a 1.1 km vertical journey in a rattling lift cage – with physicists. That’s because, as Israel Chemicals Ltd UK (ICL-UK), the company operating the mine, proudly states on a sign up at ground level, the site is home to the “Boulby Underground Laboratory for Dark Matter Research – searching for the missing mass of the universe”. This facility is funded by the UK’s Science and Technology Facilities Council (STFC) and operated by a small onsite STFC team. The underground lab infrastructure has evolved over the years. A series of buildings has been constructed in specially excavated rock-salt caverns in the mine, culminating in the most recent building, the Palmer Laboratory – a 750 m² fully outfitted cleanroom underground science facility.
Astroparticle physics research at the Boulby lab is thriving, and has focused since the turn of the millennium on searches for dark-matter particles. Boulby, like other underground labs in the world, is an ideal venue for looking for these elusive dark-matter particles as experiments can be operated almost entirely free from cosmic-ray-particle interference – a perpetual source of unwanted particle noise on the Earth’s surface. Early studies at Boulby included the ZEPLIN dark-matter detector, which pioneered a detection system that uses liquid xenon as the dark-matter “target”. This technology is now one of the most important in the world for research into dark matter, one of the leading candidates for which is weakly interacting massive particles (WIMPs). The most recent dark-matter experiments under way at Boulby include DRIFT-II, which aims to detect not only the energy of dark-matter particles but also their direction – the so-called “WIMP wind”. Another is DM-Ice, a dark-matter detector due to be installed at the South Pole, which aims to detect annual variation in WIMP signals caused by the motion of the Earth around the Sun, confirming (or refuting) an earlier positive result from a detector operated in the deep-underground Gran Sasso National Laboratory in Italy. (For more on direct dark-matter searches, see “Deep down for dark matter” below.)
Recently though, the range of studies under way at Boulby – and in other deep labs around the world – has been evolving and expanding. Many groups beyond particle physics have realized that these environments would benefit their research too, which has led to an explosion of funding proposals, followed by the diversification of these labs. Projects currently under way at Boulby include astrobiology, testing instrumentation for a new generation of robotic rovers and developing techniques to monitor buried carbon-dioxide (CO2) gas in future carbon capture and storage (CCS) schemes.
Each deep-underground lab has a unique offering depending on its location and geology, and laboratories are seeing a growth in the science they host beyond the usual astroparticle physics. At the Canfranc Underground Laboratory in Spain, for example, scientists are exploring the link between seismic activity and river discharge, and recently found that a certain portion of seismic noise measured there is indeed linked to the discharge of a local alpine stream, the River Aragon. At Gran Sasso, meanwhile, physicists are dating ice cores to high precision using low-background germanium detectors, which can detect faint gamma-radiation signatures corresponding to atmospheric nuclear tests and nuclear-reactor accidents.
Extraterrestrial aims
Boulby is at the forefront of this relatively new interest in diversifying the science undertaken by underground labs, with a host of exciting projects already under way or planned. The similarity between the underground environment at Boulby and the extraterrestrial subsurface environment of, for example, Mars, makes Boulby an ideal location for testing a new generation of troglodytic rovers. Such rovers, which are designed to navigate and explore remote and alien environments, could one day beam back science data from deep below the Martian surface.
A new European space-exploration programme called MASE (Mars Analogues for Space Exploration) is associated with this research. Scientists from MASE as well as NASA are studying life deep underground and are testing a range of technologies to look for it. This work is carried out in rock-salt caverns at Boulby, some of which are located many kilometres away from the Boulby mine shafts, remote and deep under the North Sea. “If we want to successfully explore Mars,” says Charles Cockell, director of the UK Centre for Astrobiology, “we need to go to Mars-like places on Earth. The deep, dark environment of Boulby Mine is the ideal place to understand underground life and test space technologies for the exploration of Mars.”
Going underground The Boulby Mine in Yorkshire, UK, houses a great variety of scientific activity, including Mars rover instrumentation research (left) and ultralow-background gamma spectroscopy and materials screening (right). (Courtesy: UK Centre for Astrobiology/MINAR; Boulby Underground Science Facility)
New instruments being developed for these troglodytic rovers include rock-breaking tools for cracking open the secrets of Martian geology, and miniaturized gas-analysis instruments designed to sniff out gases such as methane, which are the chemical signatures of life. This research could also be applied closer to home, in working mines, for example by revealing the presence of dangerous gases, or in the robotic exploration of collapsed mine tunnels deemed too dangerous for humans to enter. Indeed, one of the key goals of the Boulby International Subsurface Astrobiology Laboratory (BISAL) is to provide a platform for knowledge transfer between the space exploration and mining communities.
A related arm of research at Boulby is the rapidly expanding field of astrobiology, which has been motivated for the past two decades by the study of exoplanets as well as planetary bodies in our own solar system. We currently have a catalogue of around 1900 known planets, a handful of which appear to be within the so-called “habitable zone”, where the temperature and pressure allow liquid water to exist on the planet’s surface, including (obviously) Earth, and (perhaps less obviously) Mars. Exoplanets in the habitable zone may be habitable, but that’s not to say they are hospitable, and there is growing interest in studying terrestrial life here on Earth that has eked out an existence in extreme environments that may mirror those found on other planets.
Extreme environments include those at the edge of the habitable zone’s temperature–pressure envelope, as well as low-background-radiation environments and very salty environments. An example of the latter is found at Boulby, where the mine tunnels are carved into a layer of 250-million-year-old evaporite rock known as the Zechstein Supergroup. This unit of sedimentary rock, which includes minerals formed by the evaporation of a saline solution, contains similar minerals to those recently detected on Mars.
This is where the extremophiles being studied can be found – hardy microscopic biota, with a slow metabolism, living on the salty rock surfaces. Astrobiologists take samples from the rock faces and study them in situ in the BISAL clean room, making Boulby perhaps the only place in the world where such extremophiles can be studied with such minimal risk of contamination.
Monitoring carbon capture
Understanding our own planet is also high on the research agenda at Boulby. One process we are increasingly using here on Earth, but do not yet fully understand, is CCS. This is the practice of capturing CO2 produced by burning fossil fuels and then injecting it underground so that it doesn’t add to greenhouse-gas levels. CCS has huge potential to ameliorate anthropogenic climate change, provided that the injected CO2 remains locked away rather than leaking back into the atmosphere.
Project Deep Carbon at Boulby is developing muon detectors to monitor CO2 stored in deep saline aquifers – one of several types of storage sites being used for CCS, in which the gas displaces salt water in a layer of permeable rock. The muon-monitoring technique being used is analogous to medical CT scans: just as X-rays are used in CT scans to make non-invasive 3D images of a patient’s insides, cosmic-ray muons can be used to non-invasively image anything between their point of origin in the upper atmosphere and a detector on or under the ground. The ultimate goal is a compact, rugged muon detector that can be inserted into a borehole beneath an aquifer, where it would then monitor the muon flux it receives. Such measurements could provide a much-needed means of tracing the movement of CO2 in the CCS process since, for instance, when CO2 is injected into the rock pore space it will displace brine, which has a higher density than CO2 and is better at blocking muons. At the moment such data can only be acquired at high cost and episodically through techniques such as seismic surveys. Jon Gluyas, project leader and professor of geoenergy carbon capture and storage at the University of Durham, UK, says: “Muon tomography offers an opportunity for an important additional passive 24/7 monitoring system that could also significantly cut costs.”
The location of Boulby Mine provides a very good facsimile for where a muon detector will be placed in practice – in a deep borehole below lots of rock, liquid and gas. Indeed, in the tunnels in Boulby Mine that extend beneath the sea, a detector would have above it plenty of rock, as well as water.
Project Deep Carbon is currently in its “proof of principle” phase, with two studies under way: a borehole-positioned detector undergoing in situ performance tests in a rock wall near the underground laboratory; and the Muon Tides detector, soon to be installed in a remote cavern 764 m below sea level, where it will demonstrate the sensitivity of the technique by measuring the tiny change in muon flux caused by the ebb and flow of the tide. There the instrument will monitor the 50 m of seawater above it and, by integrating data over time, the researchers hope to detect the twice-daily 3 m change in tide.
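A back-of-envelope model shows why such small effects are measurable in principle. Deep underground, the muon flux falls steeply with the overburden, conventionally quoted in metres of water equivalent (m w.e.); the exponential attenuation model and every number below are illustrative assumptions rather than values from Project Deep Carbon:

```python
import numpy as np

LAMBDA = 1000.0   # assumed effective attenuation length (m w.e.)

def relative_flux(x_mwe):
    """Muon flux relative to zero overburden in this toy exponential model."""
    return np.exp(-x_mwe / LAMBDA)

base = 2500.0     # assumed total overburden of the detector (m w.e.)

# 1) Tide: 3 m of extra seawater on top of the existing overburden
tide = relative_flux(base + 3.0) / relative_flux(base) - 1.0
print(f"3 m tide      -> flux change of {tide:+.2%}")

# 2) CCS: supercritical CO2 (~0.7 t/m^3, assumed) displacing brine
# (~1.1 t/m^3, assumed) in a 30 m thick storage layer changes that
# layer's water-equivalent thickness
layer_change = (0.7 - 1.1) * 30.0   # change in m w.e.
ccs = relative_flux(base + layer_change) / relative_flux(base) - 1.0
print(f"CO2 injection -> flux change of {ccs:+.2%}")
```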
Materials dating and screening
Another unique opportunity created by the tiny cosmic background levels in underground labs is the ability to measure, using gamma-ray spectroscopy and other techniques, ultralow levels of radioactivity emitted by test materials or samples, with greater sensitivity and precision than would be possible on the surface.
One use of gamma spectroscopy in which a low gamma-ray background is needed is dating environmental samples. Many readers will be familiar with the technique of radiocarbon dating, in which the ratio of unstable ¹⁴C atoms to those of stable ¹²C is used to date samples up to 50,000 years old. Perhaps less well known is the fact that similar techniques applied to the radioisotopes ²¹⁰Pb and ³²Si provide a means for more precise short- and mid-range radiometric chronometry. Using gamma-ray spectroscopy and other techniques to measure the decay of these isotopes, researchers can apply radio-dating techniques over different time periods. Applications include epidemiology (linking risk factors with certain diseases), sediment dating and the dynamics of various environmental processes.
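All of these chronometers rest on the same decay-law arithmetic: N(t) = N₀·2^(−t/t½), so a measured fraction of the parent isotope remaining gives an age t = t½·log₂(N₀/N). A minimal worked example, with invented ratios:

```python
import numpy as np

def age(remaining_fraction, t_half):
    """Age implied by the fraction of the parent isotope remaining."""
    return t_half * np.log2(1.0 / remaining_fraction)

# Radiocarbon: a sample retaining 25% of its original 14C
# (half-life 5730 years) is two half-lives old
print(f"14C:   {age(0.25, 5730):.0f} years")   # ~11460

# 210Pb (half-life 22.3 years) covers the recent past instead
print(f"210Pb: {age(0.5, 22.3):.1f} years")    # one half-life
```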
Tunnel vision Strain-measuring component of the GEODYN facility at the Canfranc Underground Laboratory in Spain – exploring the link between seismic activity and river discharge. (Courtesy: Canfranc Underground Laboratory)
Another use of ultralow-background gamma spectroscopy is to screen materials to be used in “rare-event physics” experiments. Many of the rare-event searches that have traditionally dominated the science programmes of underground labs are still under way, and many of the questions they set out to answer (Does neutrinoless double-beta decay occur in nature? Does the proton decay?) remain open and important to this day. The latest incarnations of these experiments are collecting data as you read this, and are continuing to push down the limits on the probabilities – known as cross-sections – of the rare particle-interaction processes that researchers hope to observe.
These cross-sections are now incredibly tiny, and this means that even a very low rate of background events from, for example, radiological impurities in the detector materials, could severely hamper experimental sensitivity. A rigorous materials-screening programme is therefore implemented by all such experiments. The lab at Boulby, with its 1.1 km of muon shielding, low-activity halite cavern, and extremely low levels of the radioactive gas radon, is an ideal place to do this.
Scientists at Boulby are currently checking materials for several underground-lab experiments, including the LUX–ZEPLIN (LZ) dark-matter search experiment: the seven-tonne successor to the LUX experiment, due to enter its three-year construction period at the Sanford Underground Research Facility in the US this year. “The core of LZ will be the most radiologically quiet place on Earth at these energies,” says Chamkaur Ghag from the LZ group at University College London in the UK.
Other experiments for which Boulby is screening materials include the SuperNEMO neutrinoless-double-beta-decay project, which is to be constructed at the Modane laboratory on the Italian–French border. The facilities are being used by firms as well, to screen materials or devices – such as low-activity metals and photomultipliers – that they intend to supply to current and future rare-event projects.
Time to expand
The future looks bright for the world’s underground labs, with already diverse science programmes set to expand in the coming years as more research groups realize the possibilities offered by these uniquely quiet corners of the universe.
At Boulby, the science programme is expanding and so are the facilities. The STFC has recently granted £1.8m to build a brand new underground lab adjacent to the existing one, to host science at Boulby for the next decade. As well as providing a site for multidisciplinary studies like those described in this article, the new lab will also host and support the UK’s efforts in the world’s next-phase dark-matter-search experiments.
The Palmer Laboratory at Boulby was built to fit into an existing tunnel. It is now nearly 15 years old and shows wear and tear from gradual rock movement caused by its proximity to a nearby geological fault. The cavern in which the new lab will be located is distant from this fault – and the lab is being custom built to be significantly taller and wider than the Palmer Laboratory. ICL-UK has already completed excavation of the new lab’s cavern – more than 4000 m³ in volume – and assisted in the initial outfitting, set to be complete by the end of this year. The fact that the firm is applying its own effort to the project is testament to the close relationship enjoyed by the science and mining operations at Boulby, a rare but positive example of a symbiotic marriage of pure (and now applied) science and industry.
The deep labs of today are a far cry from the dusty caverns of the first underground rare-event experiments of the 1970s. The science portfolio of these laboratories is evolving too, and Boulby, along with its international counterparts, is undertaking a growing range of multidisciplinary underground science studies – ushering in a new era of discovery deep beneath our feet.
Deep down for dark matter
Lab low-down The Davis Laboratory at Sanford Underground Research Facility houses LUX, currently the most sensitive dark-matter detector in the world. (Courtesy: Lawrence Berkeley National Laboratory)
In the early days of deep underground labs, researchers typically used sensitive detectors to search for rare astroparticle-physics events such as neutrino scattering, neutrinoless double-beta decay and the decay of the proton. However, at around the turn of the millennium, a new programme of rare-event searches took off in the form of direct searches for dark matter. In a direct search, an experiment looks for direct interactions of the particles themselves, whereas in an indirect search, one looks for the gravitational effects these particles have on other, visible objects, or for what the particles produce when they interact in distant regions of the galaxy.
Thought to make up 85% of the universe’s mass, the particle nature of dark matter still eludes us. Most scientists are pinning their hopes on a class of particles called WIMPs (weakly interacting massive particles), which are expected to leave behind a feeble energy signature on the rare occasions that they interact in whatever sensitive target medium is used in the detectors designed and built to observe them. For this reason, all of the world’s direct dark-matter search experiments are sited in underground labs, where a thick layer of bedrock shields them from unwanted particle interference from cosmic rays, which are ever-present at the Earth’s surface.
The Boulby Underground Laboratory in the UK currently hosts DRIFT-II, which is a 1 m³ chamber filled with low-pressure gas, designed to detect not only the energy but also the direction of dark-matter particles: the so-called “WIMP wind”. Boulby also hosts the emerging DM-Ice experiment – a sodium-iodide scintillator array designed to detect the annual change in WIMP signal rate caused by the motion of the Earth around the Sun and its subsequent change in speed relative to the Milky Way’s dark-matter halo.
In other deep labs, some notable experiments use a technique pioneered by the ZEPLIN collaboration, in which a detector is filled with a noble element such as xenon, which exists both in liquid form at the bottom of the chamber and as gas at the top. The idea behind such two-phase noble-liquid detectors is that dark-matter particles interact in the liquid, releasing scintillation light, and charge, which drifts to the gas phase, causing another scintillation flash. Recording these flashes with high-sensitivity, low-background photomultiplier tubes allows the location of the interaction in the detector to be determined. Comparing the size of the two light pulses also gives a means of identifying which type of particle has been detected – a WIMP, or an earthly background-radiation particle.
Such detectors are currently being used in the XENON experiment at Gran Sasso National Laboratory in Italy and the LUX experiment at the newly renovated Sanford Underground Research Facility in the US, which occupies the very same experimental hall in which the future Nobel-prize-winning physicist Ray Davis Jr first discovered solar neutrinos. At the time of writing, LUX is the most sensitive dark-matter detector in the world.
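A toy sketch of the pulse-comparison step described above, with the two flashes given their conventional names S1 (the prompt scintillation in the liquid) and S2 (the charge-induced flash in the gas): nuclear recoils, the WIMP-like events, typically show a lower S2/S1 ratio than the electron recoils produced by most backgrounds. The band positions, widths and cut value below are invented for illustration; real experiments calibrate them with dedicated source runs:

```python
import numpy as np

rng = np.random.default_rng(7)

def mock_events(n, mean_log_ratio, spread):
    """Simulate log10(S2/S1) values for one population of events."""
    return rng.normal(mean_log_ratio, spread, n)

# Invented band positions and widths for the two populations
electron_recoils = mock_events(100000, 2.0, 0.15)   # background-like
nuclear_recoils = mock_events(100000, 1.5, 0.15)    # WIMP-like

cut = 1.75   # assumed discrimination line in log10(S2/S1)
print(f"background leakage: {np.mean(electron_recoils < cut):.2%}")
print(f"signal acceptance:  {np.mean(nuclear_recoils < cut):.2%}")
```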
A new type of metallic state of matter has been discovered by an international team of researchers studying a superconductor made from carbon-60 molecules or “buckyballs”. The team found the new state after changing the distance between neighbouring buckyballs by doping the material with rubidium. The study reveals that the material has a rich combination of insulating, magnetic, metallic and superconducting phases – including the hitherto unknown state, which the researchers have dubbed a “Jahn–Teller metal”.
Led by Kosmas Prassides of Tohoku University in Japan, the study provides important clues about how the interplay between the electronic structure of the molecules and their spacing within the lattice can strengthen interactions between electrons that cause superconductivity. As well as providing further insights into superconductivity, the research could result in the development of new molecular materials that are superconductors at even higher temperatures.
Superconductors are a large and diverse group of materials that offer zero resistance to electrical currents when cooled below a critical temperature (Tc). While superconductivity involves conduction electrons forming pairs, the mechanism by which this occurs is not fully understood in all types of superconductors – especially in high-temperature materials.
Adjusting molecules
Superconducting lattices of fullerides – C60 plus three alkali-metal atoms – have been studied for more than two decades, and provide an interesting test bed. This is because the distance between fulleride molecules – and hence the electronic properties of the material – can be adjusted by applying pressure to the material or doping it with different kinds of atoms.
This latest work involves caesium fulleride (Cs3C60), in which the C60 molecules sit at the sites of a face-centred-cubic lattice. The material becomes superconducting under pressure and below its critical temperature – which rises to 35 K at 7 kbar before falling at higher pressures. By substituting some of the caesium atoms with rubidium atoms, the researchers were able to change the distances between molecules – effectively pulling the molecules closer together in the lattice and so mimicking the effect of applying pressure.
At low pressures the material is an insulator, in which the electronic state of the molecule is distorted by the Jahn–Teller effect. C60 normally has an icosahedral shape that resembles a football, but the presence of the three electrons donated by the caesium makes the molecule look more like a rugby ball.
Rising pressure
As pressure is applied by adding rubidium, the electronic states of the molecules begin to overlap, and the material undergoes a “Mott transition” to become a simple metal. This is a crucial point for understanding superconductivity because it is the metallic phase that becomes a superconductor below Tc.
The surprising thing about this metal–insulator transition is that it involves an intermediate state never seen before. The researchers have dubbed this a “Jahn–Teller metal” because when the material is studied using infrared spectroscopy, the fulleride molecules clearly show rugby-ball distortions, which were only known to occur in insulators. However, nuclear magnetic resonance measurements clearly show that electrons are able to “hop” from one molecule to the next – which is the signature of a conducting metal.
“An interesting question is how the material can have both Jahn–Teller distortions and be a metal?” says Matthew Rosseinsky of the University of Liverpool, UK, who was involved in the research.
Unconventional pairs
The team found that when the simple metal is cooled, it becomes a conventional “BCS” superconductor in which the electron-pairing mechanism is well understood. However, when the Jahn–Teller metal is cooled, it becomes an “unconventional” superconductor with an as-yet-unknown pairing mechanism.
The material with the highest Tc in the study (about 35 K) was in the region of the transition between the Jahn–Teller metal and the simple metal. The mechanism that causes the electrons to pair is strongest where Tc is greatest, and therefore it appears to involve an interplay between the tendency for electrons to remain on the molecules and the tendency for electrons to move through the material.
Rosseinsky points out that there is an “interesting comparison” between this molecular superconductor and the cuprates – high-temperature superconductors discovered nearly 30 years ago that have proven devilishly difficult for physicists to explain. He says that the copper ions in some cuprates are “Jahn–Teller active species”, and studies of molecular materials – in which the Jahn–Teller effect can be fine-tuned – could give us further insights into high-Tc materials.
Elisabeth Nicol of the University of Guelph in Canada agrees, saying that the cuprates were originally investigated as potential superconductors because of their Jahn–Teller properties. Nicol, who was not involved in the new research, adds that “understanding the mechanisms at play and how they can be manipulated to change the Tc surely will inspire the development of new [superconducting] materials”.
Is the fine-structure constant different in different parts of the universe? The answer to this intriguing question could be one step closer, thanks to a new way of locating the frequencies of electronic transitions in highly charged ions. The new technique was created by an international team of physicists, and could also be used to identify candidate ions to make new and more precise atomic clocks.
The fine-structure constant (α) defines the strength of the electromagnetic interaction, and observations of the light from distant quasars suggest that it may vary throughout the universe. If these variations are real, α should also change in the laboratory by about one part in 10¹⁹ per year, as the Earth travels through the cosmic microwave background. In principle, this variation could be measured in an atomic clock based on an atomic transition that is very sensitive to tiny changes in the fine-structure constant. The problem with this technique is that transitions in neutral atoms or singly charged ions are adversely affected by stray electromagnetic fields and black-body radiation – and to make matters worse, these transitions are not sensitive enough to changes in the fine-structure constant for the measurement to achieve the desired level of precision.
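To get a feel for the numbers: if a transition frequency ν shifts with α according to δν/ν = K·δα/α, where K is a sensitivity factor, then a drift of one part in 10¹⁹ per year produces only a sub-millihertz shift even for an optical transition. The values of K and ν below are illustrative assumptions; highly charged ions are attractive precisely because K can be large:

```python
K = 10.0                   # assumed sensitivity enhancement factor
dalpha_over_alpha = 1e-19  # assumed annual fractional drift of alpha
nu = 1.0e15                # illustrative optical transition frequency (Hz)

dnu = K * dalpha_over_alpha * nu
print(f"expected frequency drift: {dnu:.1e} Hz per year")   # ~1e-3 Hz
```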
Electron stripping
In 2012 physicists in Australia and the US suggested that atomic clocks accurate to one part in 10¹⁹ could be made using electronic transitions in ions that have been stripped of many electrons. This is because removing electrons from an atom leaves the remaining electrons more strongly bound to the nucleus. This, in turn, makes atomic transitions in the ion less sensitive to noise and more sensitive to the fine-structure constant.
The researchers focused on “level-crossing” transitions, where two energy levels swap places as the atomic number increases. At the crossing points, the energy of the transitions is very small and can be excited with an optical laser. Such transitions are very narrow, which makes them perfect for producing an accurate clock. However, it is extremely difficult to calculate the exact frequency at which such a transition occurs – and not knowing the exact frequency combined with the narrowness of the transition makes it extraordinarily difficult to locate in a practical laboratory experiment.
In this latest work, a team of physicists including some of those involved in the 2012 calculations focused on transitions in the ion Ir17+, which are of particular interest because they should be very sensitive to changes in the fine-structure constant as small as 10⁻²⁰ per year. The team also looked at similar transitions in several other highly charged ions that neighbour Ir in the sixth row of the periodic table. In all cases, they were able to calculate and then observe the frequencies of the transitions of interest.
Painstaking preparation
“These ions that we were studying had never been observed,” says José Ramón Crespo López-Urrutia, who leads a research group at the Max Planck Institute for Nuclear Physics in Heidelberg, Germany, and was part of the team. The researchers therefore had to painstakingly prepare individual samples of these ions before trapping them and exciting them with an electron beam.
The team looked at spectral lines in the light emitted by each set of excited ions, and used a computer algorithm to look for lines in different samples that varied in the expected way, showing that they had all come from the same transition. Using this technique, the researchers assigned almost all of the transitions between the various energy sub-levels. Despite their success, however, they were still left with two possibilities for which spectral line corresponds to the best transition for measuring the fine-structure constant. Using the scaling laws they had developed, the researchers went on to make accurate predictions for the energies of key transitions in two other highly charged ions – Hf12+ and W14+ – that are good candidates for atomic clocks because they are less sensitive to noise.
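A toy version of that matching step, purely illustrative: a given transition’s energy should vary smoothly from one neighbouring ion to the next, so an algorithm can select one line from each ion’s list such that the chosen energies best follow the expected trend. The line lists and the simple linear trend model below are invented; the real analysis relied on the team’s calculated scaling laws:

```python
import numpy as np
from itertools import product

# Invented candidate line energies (eV) in three neighbouring ions
line_lists = [
    [1.95, 2.31, 2.80],   # ion 1
    [2.05, 2.52, 2.95],   # ion 2
    [2.18, 2.73, 3.11],   # ion 3
]

best_rms, best_combo = np.inf, None
x = np.arange(len(line_lists))
for combo in product(*line_lists):    # one candidate line per ion
    y = np.array(combo)
    # deviation of the chosen energies from a straight-line trend
    resid = y - np.polyval(np.polyfit(x, y, 1), x)
    rms = np.sqrt(np.mean(resid**2))
    if rms < best_rms:
        best_rms, best_combo = rms, combo

print("lines assigned to the same transition:", best_combo)
print(f"rms deviation from the linear trend: {best_rms:.4f} eV")
```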
The researchers now have to cool their highly excited ions to millikelvin temperatures before they can perform precision laser spectroscopy to determine the exact frequencies of the transitions. They are hopeful of success because earlier this month, researchers in López-Urrutia’s group demonstrated such cooling of Ar13+ ions.
Laser locking challenge
Wolfgang Quint, of the Helmholtz Institute in Jena, Germany, is impressed with the work. However, Quint, who was not involved with the research, says that even with the millihertz precision achieved by the researchers, much more work needs to be done before they will be able to lock a laser onto a transition and measure α. Mikhail Kozlov, of the Petersburg Nuclear Physics Institute in Russia, adds, “If there is a clock using highly charged ions, I’m not sure that it will be made from one of these ions.” But this method should enable further observation of other transitions in other ions before a final decision about the most appropriate one is made, he says.