
Hunting gravitational waves using pulsars

“Help us with our science. Please turn off all phones and electronic devices.”

That is the firm command that greets visitors when they arrive at the iconic Jodrell Bank Observatory in Cheshire, UK. But you don’t need a notice warning you of the potential problems from your phone’s radio signal to know that you have reached Jodrell Bank. Even before getting to the site itself, it would be hard to miss the 76 m diameter Lovell Radio Telescope – a giant white dish towering above the flat Cheshire Plain.

Radio telescopes have been used at Jodrell Bank for nearly 60 years to study celestial objects, as well as to track rockets, satellites and space probes. But astronomers at the observatory now have a new goal in mind. Using the Lovell Telescope – and others like it around the world – they are hoping to make the first ever direct detection of gravitational waves.

Gravity – the longest-ranged of the four fundamental forces – has been shaping our universe since the first atoms were created. It is responsible for everything from determining the large-scale structure of galaxies to the formation and movements of the planets and the stars. Gravity is also what sends us flying down ski slopes and, occasionally, falling flat on our faces.

But despite our familiarity with the gravitational force, one of its key predicted behaviours has never been experimentally confirmed. According to Albert Einstein’s general theory of relativity, accelerating masses should generate gravitational waves – ripples in space–time that propagate outwards at the speed of light. However, none have yet been directly detected.

Elusive though gravitational waves may be, there are currently major efforts aimed at directly detecting them. The most familiar method is to use giant L-shaped laser interferometers such as the Laser Interferometer Gravitational-wave Observatory (LIGO) in the US and VIRGO in Italy. These experiments are designed to detect tiny changes in the interference patterns created by laser beams sent down pairs of kilometres-long pipes positioned at right angles to each other. These changes would occur in the presence of a gravitational wave in which space would alternately expand and contract, causing the path lengths of the laser beams to change. But despite the LIGO and VIRGO collaborations joining forces in June to publish a combined assessment of five years’ worth of data from 2005 to 2010 in a paper authored by more than 900 physicists, no sign of a gravitational wave was reported (Phys. Rev. Lett. 113 011102).

Black-and-white photo of a huge radio telescope dish supported on a complex metal structure, next to the crescent Moon in the black night sky

However, an alternative type of experiment – being conducted by a much smaller group of researchers – is also in the running in the hunt for gravitational waves. First dreamt up in the 1970s, it involves pointing radio telescopes at distant objects known as pulsars. For many years this technique was not a viable option because the necessary technology was not yet available. But recent developments – in particular the increased processing power of computers – now place the method as a contender to make a direct detection of gravitational waves. And with a Nobel prize potentially up for grabs to whoever spots one first, the heat is now definitely on.

Stellar timekeepers

Pulsars are spinning neutron stars, created when a star explodes as a supernova to leave behind the second-most compact objects in our universe after black holes – indeed, a teaspoon of neutron-star matter weighs a staggering 100 million tonnes. Rotating at up to hundreds of times per second, a pulsar emits a beam of particles and light – including strong radio waves – out of each of its magnetic poles. Because the magnetic axis is usually misaligned with the rotation axis, each beam sweeps out a conical path in the sky as the star spins. If a pulsar is orientated such that the solar system lies on this conical path, we can use radio telescopes to detect a “blip” of signal each time its beam comes our way. In fact, this regular signal, which we see at a number of different wavelengths, is our only evidence that pulsars exist.

Some of the fastest pulsars that rotate once every few milliseconds are particularly useful tools because the arrival times of the pulses at our telescopes are so reliably regular that they can be used as extremely precise clocks, even rivalling some atomic clocks. “People put two and two together and said, hey, these precise clocks could in theory be used to try and detect gravitational waves,” explains Ben Stappers, a pulsar astronomer at Jodrell Bank.

The idea is that if a gravitational wave passes between the pulsar and us, it would alternately stretch and compress the distance that a pulse of light from the pulsar has to travel before it reaches our telescopes. As light moves at a constant speed, the pulses would take a longer or shorter time, respectively, to reach us – the blips would arrive later or earlier than they would with no gravitational wave present. The tiny, correlated changes in the arrival times of pulses from several pulsars – monitored together as a “pulsar timing array” – would therefore reveal the presence of a gravitational wave. Or so the thinking goes.
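
As a rough illustration of this effect (a toy model, not the collaborations’ actual analysis), the arrival-time “residuals” induced by a single passing wave can be sketched in a few lines. The observing cadence, wave period and delay amplitude below are arbitrary illustrative values:

```python
import numpy as np

YEAR = 3.156e7  # seconds in a year

def timing_residuals(obs_times, gw_period, gw_delay_amp):
    """Toy model: a passing wave alternately stretches and compresses the
    pulsar-Earth path, so pulses arrive late, then early, by up to
    gw_delay_amp seconds relative to the no-wave prediction."""
    return gw_delay_amp * np.sin(2 * np.pi * obs_times / gw_period)

# Monthly observing sessions over a decade, a wave with a three-year period
# and a (greatly exaggerated) 100 ns peak delay
obs = np.arange(0, 10 * YEAR, YEAR / 12)
res = timing_residuals(obs, gw_period=3 * YEAR, gw_delay_amp=1e-7)
```

Real residuals are at the level of tens of nanoseconds or less, which is one reason why years of data from many pulsars are needed.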

Leaping ahead

As is often the case in physics, looking for gravitational waves is easier the more data you have, which is why researchers from five radio telescopes in Europe have joined forces to share their data in a collaboration called the European Pulsar Timing Array (EPTA). The joint effort involves the Lovell Telescope at Jodrell Bank, as well as radio telescopes in France, Germany, Italy and the Netherlands, which together look at 40 pulsars visible from the Northern hemisphere.

Even more beneficial than this data sharing is a sub-project of the EPTA called the Large European Array for Pulsars (LEAP), led by astrophysicist Michael Kramer of the Max Planck Institute for Radio Astronomy in Germany. While the EPTA involves the five telescopes taking their own data at completely different times, in LEAP the same telescopes take simultaneous measurements of the 22 best-quality pulsars in the EPTA’s repertoire. While making observations at the same time might seem like an obvious thing to do, it is easier said than done. “Observation time is expensive, and in order to combine and orchestrate such large telescopes at such big distances, you need to have a very serious reason to do so,” says Sotirios Sanidas, a postdoc working on the LEAP project at Jodrell Bank. But because gravitational-wave hunting is such a worthy goal, the LEAP team has managed to secure a simultaneous 24-hour slot on all five telescopes once a month.

With five telescopes rather than one, the main benefit of LEAP is that it simulates a much bigger telescope with a diameter of about 200 m, which is equivalent to the largest radio telescopes currently on Earth. But the team has not engineered this large effective diameter in order to take a better-resolved picture – in fact, it is impossible to image pulsars because they are so small and distant. Instead, pulsar astronomers are interested solely in how much light they can capture.
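
The gain can be estimated with some back-of-the-envelope arithmetic: collecting area adds when the signals are combined, so the equivalent single dish has a diameter equal to the square root of the sum of the squared individual diameters. The dish sizes below are approximate figures assumed purely for illustration (with Nançay, a non-circular design, treated as a roughly 94 m-equivalent dish and Westerbork as fourteen 25 m dishes):

```python
import math

# Rough effective dish diameters (metres), assumed for illustration:
# Lovell (76), Effelsberg (100), a ~94 m equivalent for Nancay,
# the Sardinia Radio Telescope (64) and fourteen 25 m Westerbork dishes
dishes_m = [76, 100, 94, 64] + [25] * 14

# Collecting area adds when the signals are combined, so the equivalent
# single dish has the same total area
total_area = sum(math.pi * (d / 2) ** 2 for d in dishes_m)
equivalent_diameter = 2 * math.sqrt(total_area / math.pi)

print(round(equivalent_diameter))  # close to 200 m
```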

If an increased collection area were the only motivation for combining five telescopes, you might wonder why radio astronomers do not just build their telescopes next to each other in a single field. The reason is that radio telescopes are prone to picking up interference from Earth-based sources, such as radar, which is best mitigated if the telescopes are so far apart that they are unaffected by the same sources. So, if one of the telescopes is affected by some terrestrial source, the noise from this signal would not “correlate” with the signals from the other four telescopes, identifying it as a local anomaly that can be removed.
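
A toy sketch of this idea, assuming simple Gaussian noise: a signal common to every site survives cross-correlation between telescopes, whereas strong interference injected at a single site merely dilutes that site’s correlation with the others, flagging it as local:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# A signal seen by every site (the pulsar), plus independent receiver noise
common = rng.standard_normal(n)
telescopes = [common + 0.5 * rng.standard_normal(n) for _ in range(5)]

# Strong terrestrial interference picked up at site 0 only
telescopes[0] += 2.0 * rng.standard_normal(n)

def corr(a, b):
    """Pearson correlation coefficient between two time series."""
    return float(np.corrcoef(a, b)[0, 1])

clean_pair = corr(telescopes[1], telescopes[2])  # sites sharing only the sky signal
rfi_pair = corr(telescopes[0], telescopes[1])    # the contaminated site stands out
```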

On the spectrum

Just like electromagnetic waves, gravitational waves sit on a very broad spectrum (figure 1). Their wavelengths range from thousands of kilometres at their smallest, right up to, incredibly, a single wavelength spanning our entire cosmos – with a wave period of the age of the universe. Different types of experiment are hunting for waves in specific parts of this spectrum, from the laser interferometers at the small-wavelength end, through precision timing of pulsars at intermediate scales, to experiments that measure the cosmic microwave background in large areas of the sky (see figure 1). So while there is competition to make the first direct detection of a gravitational wave, each technique has its own territory within the spectrum. “These methods probe different physical environments, so they’re actually highly complementary to each other,” says Stappers.

Figure 1: the gravitational-wave spectrum. Wave periods run from the age of the universe at one extreme, through years, hours and seconds, down to milliseconds – corresponding to frequencies from 10^(-16) Hz up to 10^(2) Hz – and the sources range from phase transitions in the early universe to binary stars in the galaxy. At 10^(-16) Hz the detectors are Planck, WMAP, BICEP2 and SPT, which all look at the polarization of the cosmic microwave background; at around 10^(-8) Hz are the EPTA, NANOGrav and Parkes, which rely on precision timing of millisecond pulsars; LISA (10^(-4) to 10^(-2) Hz) and the Big Bang Observer (around 1 Hz) are proposed laser interferometers in space; and at around 10^(2) Hz sit LIGO, VIRGO and GEO-600, all laser interferometers on Earth.

Pulsar astronomers are looking for gravitational waves with a wavelength longer than a light-year. In other words, a full period of the wave takes at least a year to pass a point in space; for half of that year the wave stretches space along a particular direction, and for the other half it compresses it along that same direction. Wave periods of exactly a year are avoided because, if a signal repeating once a year were detected, it would be hard to rule out some unknown effect related to the Earth’s annual orbit around the Sun.
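
A quick unit check, using standard values for the speed of light and the length of a year: a wavelength of one light-year corresponds to a wave period of one year, or a frequency of roughly 3 × 10^(-8) Hz – the nanohertz band occupied by the pulsar timing arrays in figure 1.

```python
c = 299_792_458.0          # speed of light, m/s
year = 365.25 * 24 * 3600  # seconds in a Julian year

wavelength = c * year      # one light-year, in metres
frequency = 1 / year       # the corresponding wave frequency, in Hz

print(f"{wavelength:.2e} m, {frequency:.2e} Hz")  # 9.46e+15 m, 3.17e-08 Hz
```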

For millisecond pulsars, which rotate a few hundred times a second, a 15 minute observation yields about half a million pulses. When these data are summed, or “folded”, at the rotation period, they form an average pulse profile for that pulsar. It is best to measure rapidly rotating objects – i.e. those with narrow pulse profiles – that are also bright radio sources, so that a high signal-to-noise ratio can be achieved. Then, any shift in the profile’s position – corresponding to pulses arriving earlier or later than expected, possibly owing to the presence of a gravitational wave – can be measured to high precision.
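
A minimal sketch of folding, assuming a known rotation period and simple Gaussian noise (the real pipelines are far more sophisticated): averaging N pulses shrinks the noise by a factor of √N, so a bump invisible in any single rotation emerges clearly in the folded profile.

```python
import numpy as np

rng = np.random.default_rng(0)
bins = 64            # phase bins across one rotation
n_pulses = 500_000   # roughly 15 minutes of a millisecond pulsar

# Toy pulse profile: a narrow bump at phase bin 20, with amplitude far
# below the unit-variance noise present in any single rotation
template = np.zeros(bins)
template[20] = 0.05
single_pulse = template + rng.standard_normal(bins)

# Folding: averaging n_pulses rotations shrinks the noise by sqrt(n_pulses),
# modelled here as a single noise draw divided by that factor
folded = template + rng.standard_normal(bins) / np.sqrt(n_pulses)

print(int(np.argmax(folded)))  # the pulse emerges cleanly at bin 20
```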

The wide range of wavelengths of gravitational waves comes from the fact that they are created via very different phenomena. The prime targets for VIRGO and LIGO are “burst sources”, which arise from short-lived events, such as when two neutron stars or black holes merge. But when several waves from different sources meet and overlap, something called a “stochastic” gravitational-wave background is created. “Stochastic just means you can’t resolve the specific frequency of an individual source of gravitational waves,” Stappers explains. “You just know that there’s lots of gravitational-wave sources, effectively adding up to what you might call noise.” It is, Stappers says, like seeing a choppy swimming pool in which individual waves cannot be distinguished from one another.

What pulsar astronomers expect to see in particular is the stochastic gravitational-wave background created when today’s galaxies were formed. These galaxies are thought to have grown via the “hierarchical” model of galaxy formation, in which smaller galaxies merge to form bigger ones. Every galaxy is thought to have a supermassive black hole at its centre, and once a galaxy merger starts these black holes are expected to orbit each other – emitting gravitational waves that still resound today – before joining to become one even more massive black hole.

A coherent argument

Detecting gravitational waves would not be possible by observing only one or two pulsars because some unknown effect – such as a change in the pulsar’s interior – might affect the rate at which the pulsars spin. In fact, at least three pulsars are needed to rule out the possibility that something is changing in the pulsars themselves. “With an individual pulsar you can only ever place a limit on the presence of gravitational waves,” says Stappers, “because you can’t be sure that any variations in the arrival times are due to gravitational waves.”

Figure 2: how a gravitational wave alters pulse travel times. A cube of space containing the solar system is stretched vertically by a passing gravitational wave, so light from a pulsar directly overhead has further to travel within the distorted region, while light from a pulsar off to the side has a shorter path through it.

And in terms of building up a strong signal, the more pulsars you monitor, the better. But even if you observe as many as, say, 40 pulsars, quantity alone would not suffice. If some of the pulses arrived early, some late and some as expected, how could you translate that into anything meaningful about gravitational waves?

Key to the hoped-for detection is an idea developed by Ronald Hellings and George Downs at NASA’s Jet Propulsion Laboratory in 1983, later applied to millisecond pulsars by Ralph Foster and Donald Backer in 1990. The thinking is that if a gravitational wave distorts space–time in our vicinity, we would expect pulses from pulsars in certain, diametrically opposite areas of the sky to arrive slightly later than expected, and pulses from some perpendicular direction to arrive slightly earlier than expected (figure 2).

For each pulsar, radio astronomers therefore determine how much earlier or later the pulses arrive than expected. Then, for each pair of pulsars, they calculate the correlation between these “timing residuals” – in other words, how similarly the two pulsars’ arrival times deviate from their predicted values. This correlation is then plotted against the angle on the sky between the two pulsars, and if the points fall on the “Hellings–Downs curve” (figure 3), it would indicate the detection of a gravitational wave. For a confident fit to this curve, multiple pulsars are needed, spread out across the sky as much as possible. “Only gravitational waves are able to create such a correlation between the times of arrival of the pulses and the positions in the sky,” says Sanidas.
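
The expected correlation has a closed form – the Hellings–Downs curve – which a minimal implementation can reproduce; the normalization here is chosen so that the curve starts at 0.5 for coincident pulsars, matching figure 3:

```python
import numpy as np

def hellings_downs(theta_deg):
    """Expected correlation between the timing residuals of two pulsars
    separated by theta degrees on the sky, for an isotropic stochastic
    gravitational-wave background (normalized to 0.5 at zero separation)."""
    theta = np.radians(np.asarray(theta_deg, dtype=float))
    x = (1 - np.cos(theta)) / 2
    # x*log(x) -> 0 as x -> 0, so guard the coincident-pulsar case
    xlogx = np.where(x > 0, x * np.log(np.where(x > 0, x, 1)), 0.0)
    return 1.5 * xlogx - x / 4 + 0.5

angles = np.linspace(0, 180, 181)
curve = hellings_downs(angles)
# Starts at 0.5, dips to about -0.15 near 82 degrees, rises to 0.25 at 180
```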

Figure 3: the Hellings–Downs curve – the expected correlation between arrival times plotted against the angle between pulsar pairs on the sky (degrees). The curve starts at about 0.5 for coincident pulsars, dips like the trough of a sine wave to a minimum of about -0.15 near 80°, and rises again to about 0.3 at 180°.

To claim a detection, the pulsar data would have to show a statistically significant clustering around the Hellings–Downs curve. But, so far, none of the pulsar groups have seen any correlation in their data – just noise. A detection could only be claimed once this random positioning of points starts to cluster around such a curve. However, this would be a gradual process with the points moving slowly over time from noise to a good fit.

Towards detection

Getting more data from pulsars is obviously the name of the game, which is why pulsar astronomers are eagerly awaiting construction of a massive new international facility – the Square Kilometre Array (SKA). Set to be located in southern Africa and Australia, the SKA will involve thousands of radio telescopes being built with a combined collecting area of approximately 1 km², gathering much better – and much more – pulsar data. But with the first phase of the SKA not ready until 2023, the focus for now is on combining existing data to detect gravitational waves as soon as possible, which is why the EPTA has joined up with the Parkes Observatory in Australia and the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) to form the International Pulsar Timing Array (IPTA). According to Stappers, the collaboration is on the cusp of releasing its first data set, consisting of the data published over the past year and a half or so.

The key factors in speeding up a detection are the number of pulsars, the number of years over which observations take place and the precision with which the pulses are timed. “What people have been doing for the last six, seven, eight years is continually improving the observing systems and finding new ways to combine our data to try and improve our sensitivity,” says Stappers. “In Australia they did a lot of pioneering work on this and really chased this hard for the first time. They really inspired people to ‘take it on’, as it were.”

As for when a detection might happen, a paper last year by NANOGrav physicist Xavier Siemens and colleagues predicts that a detection is possible within 10 years, and could happen as early as 2016 (Class. Quantum Grav. 30 224015). In another paper last year, EPTA researcher Alberto Sesana made an improved calculation of the gravitational-wave background expected to be caused by supermassive black-hole binaries (MNRAS 433 L1). “The result is that we expect the gravitational-wave signal from supermassive black-hole binaries might be stronger than we were expecting,” says Sanidas. “So we’re really positive that within the next couple of years we will make the first detection of the stochastic gravitational-wave background for supermassive black-hole binaries.”

Indirect evidence

Photo showing two telescopes on a snowy landscape, the right-hand one of which is atop a blue building on stilts

Although gravitational waves have not yet been detected directly, indirect evidence of their existence has been around for decades. The first such evidence came following the discovery in 1974 by Russell Hulse and Joseph Taylor of the University of Massachusetts Amherst of a “binary” pulsar, consisting of a pulsar and a companion neutron star orbiting a common centre of mass. Their analysis of the pulsar’s orbit showed that it is gradually shrinking as the system emits energy in the form of gravitational waves, which won them the 1993 Nobel Prize for Physics. Last year the South Pole Telescope observed a subtle twist – a “B-mode” polarization – in the light that makes up the cosmic microwave background (CMB), the radiation left over from the Big Bang that still permeates our universe. That particular twist was imprinted by gravitational lensing, but the same kind of signature on larger angular scales would indicate gravitational waves formed when the early universe is thought to have expanded very rapidly in the period known as “inflation”.

In March this year, this finding was backed up by astronomers at the Background Imaging of Cosmic Extragalactic Polarization (BICEP2) telescope, also located at the South Pole, who announced that they had detected these primordial gravitational waves because they had seen the polarization signature these waves are expected to have left behind in the CMB. This result was published in Physical Review Letters in June (112 241101), but many in the cosmology community remained unconvinced because it did not agree completely with results from the Planck satellite – a space telescope that measures the CMB in detail. Shortly before Physics World went to press, the Planck collaboration released yet more results, which suggest that the entire BICEP2 signal is down to the team not having properly accounted for the effect of dust in our galaxy on the light they’re measuring, rather than to any signal from the early universe.

The BICEP2 results, which hit the headlines back in March, did not dampen the enthusiasm of the pulsar community, which is interested in studying gravitational waves that originate from a completely different source and sit in a different part of the gravitational-wave spectrum (see figure 1). In fact, Ben Stappers, a pulsar astronomer at the Jodrell Bank Observatory in Cheshire, UK, said that it made them even more excited. “The first evidence that gravitational waves existed came from the so-called Hulse–Taylor binary pulsar,” he says, “and here is possibly more evidence that gravitational waves are a reality, and so it just encourages us to speed up our ability to make a detection ourselves.”

Radio’s rivals

Advanced versions of the LIGO and VIRGO laser interferometers are due to come online in 2015 and 2016, respectively, so the pressure is on for the pulsar-timing community to make a detection soon, especially with a Nobel prize possibly up for grabs.

But the IPTA has other goals too, beyond just detecting gravitational waves – it is, for example, working on what’s called a pulsar-based timescale, which would involve seeing if it is possible to generate a measure of time using just pulsars. “That’s interesting because if there is anything specific about the Earth that affects how we measure time, then we’ll be able to check that,” says Stappers. But with a first detection will come another exciting possibility – that researchers can actually start doing gravitational-wave astronomy.

As Sanidas puts it, “It’s gonna be a revolution.”

US targets novel fusion research

A US government agency has launched a new $30m programme to support alternative approaches to generating energy from nuclear fusion. The initiative has been created by the Advanced Research Projects Agency – Energy (ARPA-E), which falls under the auspices of the Department of Energy (DOE). In August, the DOE invited researchers to “develop and demonstrate low-cost tools to aid in the development of fusion power”. Research teams need to outline their proposals by 14 October with three-year grants ranging from $250,000 to $10m up for grabs.

Fusion researchers have welcomed the new programme, which comes as fusion research in the US faces severe budget constraints. As one of seven partners in the €16bn ITER fusion project, the country has to provide 9% of the reactor’s components – at a cost of $3.9bn – despite a flat overall national fusion budget, which has put a squeeze on domestic fusion facilities. Next year’s budget is also far from certain after the White House recommended static spending, the House of Representatives called for an increase and the Senate even voted to kill the US contribution to ITER.

Budget casualty

One of the casualties of this ongoing budget squeeze was a DOE project called High Energy Density Plasma (HEDP), which was cancelled in 2013. This programme had supported projects lying between the low-density, long-duration approach of magnetically confined fusion – like ITER – and the very fast, very high density of inertial-confinement fusion, as carried out at the US’s National Ignition Facility. The demise of HEDP ended projects at several US national laboratories that used electrical pulses, magnetic fields, lasers and even high explosives to achieve fusion.

The new programme from ARPA-E will tap into this middle ground, focusing both on “targets” (methods for containing plasmas) and “drivers” (systems for heating and compressing plasmas). “I have long advocated that the parameter space in-between conventional [magnetic-fusion and inertial-fusion] regimes is clearly where the advantages of [both] can be combined, while eliminating some of the disadvantages,” says plasma physicist Glen Wurden of the Los Alamos National Laboratory in New Mexico, who works on magnetized plasmas.

“Members of the HEDP fusion community, especially those previously working in the area of magneto-inertial fusion before the funding was cut, were thrilled to finally see the ARPA-E funding opportunity announced,” he adds.

What is an inverse problem?

You may never have heard of an “inverse problem”, but as Roy Pike of King’s College London explains in this video, it is a way of looking at many different questions within science. An inverse problem involves estimating data that cannot be obtained through direct measurement – perhaps because they were lost during an experiment, or perhaps because they could never be measured accurately in the first place.

Pike gives the example of a pinhole camera used to produce an image of an object. A “direct problem” would be to work out how the image will appear, which can be done by using tools such as Maxwell’s equations and diffraction theory to calculate how the light will disperse. An “inverse problem”, however, would be to calculate the nature of the object based on the data contained in the image you have produced, as some data are inevitably not present, particularly at the blurred edges of the image. “The direct problem is relatively easy, the inverse one is impossible,” says Pike.

In order to make educated guesses at the missing data in these situations, researchers such as Pike have a range of mathematical tools at their disposal.
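
One such standard tool – offered here as a sketch under simplifying assumptions, not necessarily the method Pike has in mind – is Tikhonov regularization, which stabilizes an otherwise noise-amplifying inversion by penalizing wild solutions. The blurring operator and signal below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100

# Direct problem: a known 5-point moving-average blur smears the object
A = np.zeros((n, n))
for i in range(n):
    for j in range(max(0, i - 2), min(n, i + 3)):
        A[i, j] = 1 / 5

x_true = np.zeros(n)
x_true[40:60] = 1.0                              # a simple "object": a top-hat signal
y = A @ x_true + 0.01 * rng.standard_normal(n)   # blurred, noisy "image"

# Inverse problem: naive inversion of A amplifies the noise, so instead
# solve the regularized system (A^T A + lam*I) x = A^T y
lam = 1e-2
x_est = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

relative_error = np.linalg.norm(x_est - x_true) / np.linalg.norm(x_true)
```

The regularization strength `lam` trades fidelity against noise amplification; choosing it well is itself a large part of inverse-problem practice.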

You can also watch this video about how an inverse approach is being applied within speech science.

Watch more from our 100 Second Science video series.

Quantum data are compressed for the first time

A quantum analogue of data compression has been demonstrated for the first time in the lab. Physicists working in Canada and Japan have squeezed quantum information contained in three quantum bits (qubits) into two qubits. The technique could pave the way for a more effective use of quantum memories and offers a new method of testing quantum logic devices.

Compression of classical data is a simple procedure that allows a string of information to take up less space in a computer’s memory. Given an unadulterated string of, for example, 1000 binary values, a computer could simply record the frequency of the 1s and 0s, which might require just a dozen or so binary values. Recording the information about the order of those 1s and 0s would require a slightly longer string, but it would probably still be shorter than the original sequence.

Quantum data are rather different, and it is not possible to simply determine the frequencies of 1s and 0s in a string of quantum information. The problem comes down to the peculiar nature of qubits, which, unlike classical bits, can be a 1, a 0 or some “superposition” of both values. A user can indeed perform a measurement to record the “one-ness” of a qubit, but such a measurement would destroy any information about that qubit’s “zero-ness”. What is more, if a user then measures a second qubit prepared in an identical way, he or she might find a different value for its “one-ness” – because qubits do not specify unique values but only the probability of measurement outcomes. This latter trait would seem to preclude the possibility of compressing even identical qubits, because there is no way of predicting what classical values they will ultimately manifest as.

A way forward

In 2010 physicists Martin Plesch and Vladimír Bužek of the Slovak Academy of Sciences in Bratislava realized that, while it is not possible to compress quantum data to the same extent as classical data, some compression can be achieved. As long as the quantum nature of a string of identically prepared qubits is preserved, they said, it should be possible to feed them through a circuit that records only their probabilistic natures. Such a recording would require exponentially fewer qubits, and would allow a user to easily store the quantum information in a quantum memory, which is currently a limited resource. Then at some later time, the user could decide what type of measurement to perform on the data.
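
The “exponentially fewer qubits” claim follows from a standard counting argument, sketched here as an illustration rather than the experiment’s actual protocol: n identically prepared qubits occupy a symmetric subspace of dimension n + 1, so about log2(n + 1) qubits suffice to hold the state – and for n = 3 that is exactly the three-into-two compression described below.

```python
import math

def qubits_needed(n_identical):
    """The symmetric subspace of n identically prepared qubits has
    dimension n + 1, so ceil(log2(n + 1)) qubits suffice to hold it."""
    return math.ceil(math.log2(n_identical + 1))

print(qubits_needed(3))     # 3 identical qubits fit into 2
print(qubits_needed(1000))  # a thousand fit into just 10
```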

“This way you can store the qubits until you know what question you’re interested in,” says Aephraim Steinberg of the University of Toronto. “Then you can measure x if you want to know x; and if you want to know z, you can measure z – whereas if you don’t store the qubits, you have to choose which measurements you want to do right now.”

Now, Steinberg and his colleagues have demonstrated working quantum compression for the first time with photon qubits. Because photon qubits are currently very difficult to process in quantum logic gates, Steinberg’s group resorted to a technique known as measurement-based quantum computing, in which the outcomes of a logic gate are “built in” to qubits that are prepared and entangled at the same source. The details are complex, but the researchers managed to transfer the probabilistic nature of three qubits into two qubits.

A nice trick

Plesch says that this is the first time that compression of quantum data has been realized, and believes Steinberg and colleagues have come up with a “nice trick” to make it work. “This approach is, however, hard to scale to a larger number of qubits,” Plesch adds. “Having said that, I consider the presented work as a very nice proof-of-concept for the future.”

Steinberg thinks that larger-scale quantum compression might be possible with different types of qubits, such as trapped ions, which have so far proved easier to manage in large ensembles. A practical use for the process would be in testing quantum devices via “quantum tomography”, in which many identically prepared qubits are sent through a quantum device to check that it is functioning properly. With quantum compression, says Steinberg, one could perform the tomography experiment and decide later which aspect of the device to test.

But in the meantime, says Steinberg, the demonstration provides another perspective on the strangeness of the quantum world. “If you had a book filled just with ones, you could simply tell your friend that it’s a book filled with ones,” he says. “But quantum mechanically, that’s already not true. Even if I gave you a billion identically prepared photons, you could get different information from each one. To describe their states completely would require infinite classical information.”

The research will be described in Physical Review Letters.

Relive CERN’s highlights as the lab turns 60

CERN has been celebrating its 60th anniversary all this month, but it was in fact six decades ago today – on Wednesday 29 September 1954 – that the lab’s convention was ratified by its first 12 member states: Belgium, Denmark, France, Germany, Greece, Italy, the Netherlands, Norway, Sweden, Switzerland, the UK and Yugoslavia.

Physics World has played its own small part in marking the anniversary, including a careers feature on what skills you need as CERN director-general, a day-in-the-life blog written by current CERN boss Rolf-Dieter Heuer, and an appearance at the lab’s TEDx event last week by our columnist Robert P Crease.

This blog entry rounds off our coverage of CERN at 60 with a few links to classic material from our archives.


A look back at how the dust fell on BICEP2

It’s been five days since the metaphorical dust settled on the apparent “discovery” of the B-mode polarization of the cosmic microwave background that was reported in March. The claim came from the team behind the Background Imaging of Cosmic Extragalactic Polarization (BICEP2) telescope at the South Pole, and much has been said since about what was then hailed as one of the biggest scientific discoveries of the decade.

The pin-up of particle physics, an octopus-inspired robot and Witten versus Horgan redux

One of my favourite radio programmes is The Life Scientific, in which the physicist Jim Al-Khalili talks to leading scientists about their lives and work. Al-Khalili introduces this week’s guest as “the pin-up of particle physics”, whose remarkable career has taken him from playing keyboards in pop bands, to winning a Royal Society University Research Fellowship to do particle physics, to hosting one of the BBC’s most popular science programmes.

Japan seeks to splurge on big-science facilities

Physics in Japan is set for a major boost after the education ministry asked for a massive 18% increase for its 2015 science and technology budget to take it to $11.1bn. Support for major facilities – including the SPring-8 synchrotron and the SACLA X-ray free-electron laser, both in Hyōgo Prefecture, and the Japan Proton Accelerator Research Complex (J-PARC) in Tokaimura – would rise 15.6% to $960m. The finance ministry, however, is likely to squeeze the requested amounts before the budget, which takes effect from next April, goes before the legislature in December.

The money for SACLA and SPring-8 would mean the facilities could run for an additional 750 and 1000 hours, respectively, and also fund an upgrade at SACLA. At J-PARC, the cash would go on overall operations plus maintenance and safety upgrades. The ministry’s request also includes $11m to finish the Large-Scale Cryogenic Gravitational Wave Telescope (also known as KAGRA).

Built in the Ikenoyama Mountain in Kamioka, KAGRA features two 3 km-long arms forming an “L” for the detector plus two access tunnels. Some 7.7 km of tunnels that will be used for the experiment were completed earlier this year. “[The budget allocation] would allow us to complete equipment development and installation,” says KAGRA project director Takaaki Kajita, who is based at the University of Tokyo’s Institute for Cosmic Ray Research in Kashiwa. The facility is expected to be complete by the end of next year and start operations in 2017.

For ongoing international projects, the ministry is seeking $54m for Japan’s contribution to the Thirty Meter Telescope being built on Mauna Kea in Hawaii, as well as $260m for ITER, the experimental fusion reactor currently under construction in Cadarache, France.

The ministry also aims to spend $1m to continue studies for the proposed International Linear Collider (ILC), which Japan has expressed an interest in hosting. This year the government set up a committee to investigate the scientific case for the facility, with sub-committees looking at technical issues and cost. Satoru Yamashita, a physicist at the University of Tokyo who chairs Japan’s ILC Strategy Council, says the country took a step towards international support for the $10bn project with initial political-level discussions with the US in July. “There is still a lot to do,” adds Yamashita.

The possibility of a fourth type of neutrino

The list of unsolved problems in physics is long. Still, if you surveyed a number of people in the discipline, they would probably agree on a few choices. There would be the question of why neutrinos have a small but significant mass, contradicting the zero mass specified by the current Standard Model of particle physics. Their lists would also include the nature of dark matter, that elusive substance that is thought to make up more than four-fifths of all matter in the universe. And there would probably also be a mention of why the universe is here at all – that is, why all the matter we see today was not annihilated by an equal amount of antimatter shortly after the Big Bang.

What if just one type of particle could solve all these problems? The idea may sound too simple to be true, but it is exactly the possibility raised by the sterile neutrino. A hypothetical particle that does not interact via any of nature’s four known forces except gravity, the sterile neutrino would be the universe’s most ghostly entity. Yet its effects would be very real: it could solve three of the biggest mysteries in physics, and maybe others besides.

Sterile neutrinos are not a new proposition – their theoretical history dates back to the 1970s – but until recently they have been only of niche interest. That is because for a long time the main driving force of particle physics has been pushing the “energy frontier” – the obvious example being the Large Hadron Collider (LHC) at the European lab CERN, which has been trying to crack open nature’s secrets with ever-stronger collisions. Once the LHC started to collect data in 2010, many physicists were expecting to be flooded with a torrent of new particles, particularly those predicted by supersymmetry – a popular theory that aims to solve many problems in physics by partnering the currently known elementary particles with a host of meatier “sparticles”. But when the LHC’s floodgates were opened, the river was dry: no new physics was found.

With hopes for evidence of supersymmetry waning, lesser-studied topics such as sterile neutrinos are beginning to garner more attention. But disappointments at the high-energy frontier have not been the only prompt for a change in fashion. In the past few years, strong new evidence for sterile neutrinos has been found in nuclear reactors, bolstering the existing evidence from particle accelerators and radioactive sources. And, earlier this year, astrophysicists examining data from X-ray telescopes uncovered the first tentative evidence of sterile-neutrino dark matter in the distant cosmos. Emboldened by such results, many researchers are beginning to think the long-awaited breakthrough in particle physics may come not in a slew of different types of particle, but just one.

The long-awaited breakthrough in particle physics may come not in a slew of different particles but just one

“If we found a sterile neutrino, it would be the first time we were totally outside the Standard Model,” says experimental physicist Roxanne Guenette at the University of Oxford in the UK. “It would be a small extension, but with implications as important as discovering supersymmetry.”

Back to normality

The theoretical motivation for sterile neutrinos owes a lot to studies of normal neutrinos, which are sometimes called “active neutrinos” by comparison. These particles are themselves rather ghostly, interacting via only gravity and the weak force, and not via nature’s other two known forces, the strong and the electromagnetic. They were first proposed in the 1930s by the Austrian theorist Wolfgang Pauli to account for missing energy in nuclear decay, and were discovered about two decades later by experimental physicists including the Americans Clyde Cowan and Frederick Reines, the latter of whom won the 1995 Nobel Prize for Physics for the work. We now know that active neutrinos come in three types or “flavours”, one for each charged lepton: the electron neutrino, the muon neutrino and the tau neutrino (each with an associated antiparticle).

Nobel physicist Ray Davis

According to the original Standard Model, these three neutrinos were supposed to be massless, but that assumption soon began to crumble. In 1968 a team led by Ray Davis at the Brookhaven National Laboratory in the US found that it could detect only about a third of the electron neutrinos predicted to be arriving at its detector from fusion processes in the Sun, a result that was confirmed 20 years later by the Kamiokande experiment in Japan. Then, in 1998, a larger version of Kamiokande called SuperKamiokande, or SuperK, confirmed another strange result, this time about neutrinos generated in the atmosphere by cosmic rays. Theoretical models had predicted that there ought to be twice as many muon neutrinos generated by cosmic rays as electron neutrinos – but the researchers found roughly equal numbers of each.

Together, these solar and atmospheric anomalies – which led to Nobel prizes in 2002 for Davis and for Masatoshi Koshiba of SuperK – demonstrated that neutrinos must change flavour or “oscillate” as they travel. In Davis’s experiment the detector was sensitive only to electron neutrinos, and had therefore been oblivious to those that had oscillated into muon or tau neutrinos en route from the Sun. Meanwhile, oscillations taking place between the atmosphere and ground level had skewed the precise ratio of muon-to-electron neutrinos that the SuperK researchers had expected.

The fact that neutrinos oscillate suggested that they had mass, for if they didn’t, they would travel at the speed of light, and would not experience any time in which to oscillate. More specifically, though, the oscillations suggested that the three flavour states of neutrinos – electron, muon and tau – were actually mixtures of three distinct mass states. A neutrino in a pure flavour state contains a certain ratio of these three mass states, but during propagation these get out of step with one another. After a distance, the ratio of the mass states can become so distorted that, upon detection, the neutrino manifests as a different flavour altogether: what was once a muon neutrino might instead appear as an electron neutrino, and so on.

The SuperKamiokande experiment in Japan

Although oscillations imply mass, particle physicists struggle to directly measure the individual masses of active neutrinos because the masses are so small; instead, they have access only to the difference between squared masses – specifically, the difference between the first and second squared-mass states, Δm₁₂², and the difference between the second and third squared-mass states, Δm₂₃². These parameters can be calculated by studying neutrinos that have been generated in particle accelerators or nuclear reactors and then have travelled great distances, typically hundreds or thousands of kilometres. The energy of neutrinos, E, and the distance over which they oscillate, L, are the two most important parameters for calculating the squared-mass differences, because the probability of oscillation is a function of both Δm² and L/E.
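In the commonly used two-flavour approximation (a standard textbook simplification, not spelled out in the article), that dependence on Δm² and L/E takes a simple closed form, sketched here in Python. The maximal-mixing default is an illustrative assumption:

```python
import math

def oscillation_probability(delta_m2, L, E, sin2_2theta=1.0):
    """Two-flavour appearance probability:
    P = sin^2(2*theta) * sin^2(1.27 * delta_m2 * L / E)

    delta_m2    -- mass-squared difference in eV^2
    L           -- baseline in km
    E           -- neutrino energy in GeV
    sin2_2theta -- mixing amplitude sin^2(2*theta); the default of 1.0
                   (maximal mixing) is an illustrative assumption
    """
    phase = 1.27 * delta_m2 * L / E
    return sin2_2theta * math.sin(phase) ** 2

# Atmospheric-scale example: delta_m2 ~ 2.3e-3 eV^2, L = 500 km, E = 1 GeV
p = oscillation_probability(2.3e-3, 500.0, 1.0)
```

The numerical factor 1.27 absorbs the unit conversions. At L = 0 the probability vanishes, and it grows with L/E, which is why experiments with very different baselines and energies probe very different values of Δm².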

Currently Δm₁₂² looks to be about 7 × 10⁻⁵ eV², while Δm₂₃² looks to be about 2.3 × 10⁻³ eV², making active neutrinos more than a million times lighter than the next lightest particle, the electron. In 1996, however, physicists working on the Liquid Scintillator Neutrino Detector (LSND) experiment at the Los Alamos National Laboratory in the US found a significant excess of electron antineutrinos in a beam of muon antineutrinos generated in an accelerator just 30 m away. The neutrino energy and oscillation distance suggested the existence of a mass-squared difference of around 1 eV² – far greater than either Δm₁₂² or Δm₂₃², which alone are sufficient to define the three known neutrinos.
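The LSND scale can be checked on the back of an envelope. In the usual two-flavour form, the oscillation phase is 1.27 Δm² L/E (with Δm² in eV², L in km and E in GeV), so asking for a phase of order unity at LSND's short baseline fixes the mass-squared difference it probes. The beam energy used below is an assumption for illustration; LSND antineutrino energies spanned tens of MeV:

```python
# Setting the two-flavour oscillation phase 1.27 * dm2 * L / E to ~1
# and solving for dm2 gives the mass-squared scale a baseline probes.
E = 0.04   # GeV, roughly 40 MeV (assumed representative LSND energy)
L = 0.03   # km, the 30 m source-to-detector distance

dm2 = E / (1.27 * L)   # in eV^2; comes out close to 1
```

This is only an order-of-magnitude argument, but it shows why a 30 m baseline with tens-of-MeV antineutrinos is sensitive to Δm² near 1 eV² rather than the far smaller solar and atmospheric values.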

If the mass-squared difference given by the LSND was real, it suggested the existence of a fourth type of neutrino. But if there were a fourth neutrino, it could not be ordinary: experiments at CERN had already shown that there could be only three neutrinos coupling to the weak force in this mass range. In other words, the fourth neutrino, if it exists, must be largely immune, or “sterile”, to the weak force: it could interact only via gravity.

Breaking the model

If neutrino oscillations were troublesome enough for the Standard Model, the existence of a sterile neutrino would be its downfall. Since the Standard Model’s formulation in the late 1960s, no new particles have been found outside it; the Higgs boson, discovered in 2012 at the LHC, was considered to be the final piece of the Standard Model jigsaw.

If neutrino oscillations were troublesome enough for the Standard Model, the existence of a sterile neutrino would be its downfall

Still, the LSND’s result was not accepted outright, and other physicists set out to check it. At the beginning of this century, researchers at the Mini Booster Neutrino Experiment (MiniBooNE) – a detector at Fermilab in the US consisting of a tank of 720 tonnes of mineral oil lined with more than a thousand photomultiplier tubes – examined a beam of muon neutrinos arriving from a source 500 m away. The result, announced in 2007, was null: unlike the LSND result, no oscillations were found for the 1 eV² mass-squared difference. But many physicists believed that could be because MiniBooNE was using neutrinos, not antineutrinos, and was therefore not a proper comparison. Three years later, the MiniBooNE team repeated the experiment using antineutrinos and obtained a new result: a spike in electron antineutrinos – and support for the LSND’s finding.

“When one experiment gives an extremely unexpected result, people are very sceptical,” says Guenette, who is working on the successor to the MiniBooNE experiment. “People thought [the LSND result] was more likely a problem with the detector. So when the MiniBooNE result came out, everybody said ‘Hmm’. It was unlikely to be a detector problem, because both detectors were different.”

Physicist Georgia Karagiorgi

In the year after the MiniBooNE confirmation, support for the sterile neutrino was bolstered from a very different set of sources: nuclear reactors. Inside reactors, nuclear fission generates various neutron-rich nuclei that subsequently beta-decay into lighter nuclei. Beta decay always involves the emission of an electron or antielectron (positron), and almost always involves the emission of an electron neutrino or antineutrino.

In 2011 David Lhuillier at the Alternative Energies and Atomic Energy Commission in Saclay, France, and colleagues re-evaluated the number of electron antineutrinos that nuclear reactors ought to have been emitting over the past 30 years, and found that, between 10 and 100 metres from the reactors, they were coming up about 6% short. At this proximity, and with the energies involved, the electron antineutrinos were unlikely to be oscillating into any of the known active neutrinos, so the most obvious explanation was that some of them were oscillating into a fourth, more massive neutrino.

Perhaps the reactor evidence for sterile neutrinos should not have been surprising. Beginning in 1995, the solar neutrino experiments GALLEX at the Gran Sasso National Laboratory in Italy and SAGE at the Baksan Neutrino Observatory in Russia used known radioactive sources – chromium-51 and argon-37, both of which undergo “inverse” beta decay – for detector calibration. Again, the experimentalists had found a deficit in the expected count rate of electron neutrinos, this time of 5–20%. Nonetheless, it has been Lhuillier and colleagues’ more recent analysis of nuclear reactors that has really made people take notice of sterile neutrinos.

“Our work on the prediction of neutrino flux was initially completely disconnected from this topic,” says Lhuillier. “Today we still don’t know if sterile neutrinos exist or not, but our work triggered new interest.”

In February this year, evidence for sterile neutrinos went extraterrestrial. Searching through data from the European Space Agency’s XMM-Newton space telescope and NASA’s Chandra X-ray telescope, two independent groups – Esra Bulbul at the Harvard-Smithsonian Center for Astrophysics in the US and colleagues, and Alexey Boyarsky at Leiden University in the Netherlands and colleagues – found an excess of X-rays at about 3.5 keV. The researchers are cautious in drawing firm conclusions, but again there is an obvious explanation: the decay of dark matter in the distant cosmos. Being invisible, yet still interacting with gravity, sterile neutrinos would be an ideal candidate for dark matter, and 3.5 keV is about the energy of the X-rays into which they are expected to decay.

Front-page news

Solving the mystery of dark matter would be a major breakthrough in physics – one that would certainly make the front page of newspapers worldwide. But sterile neutrinos could solve several other mysteries, too. One of these is why the universe today is composed largely of matter and not antimatter: the Big Bang ought to have generated equal amounts of each, so the fact that they did not annihilate each other entirely – and that the universe as we know it exists at all – suggests that matter somehow managed to win out.

Many particle physicists believe the dominance of matter is a result of a phenomenon known as charge–parity (CP) violation. Preservation of CP is a technical way of saying that antiparticles interact in exactly the same way as their particle counterparts, albeit in mirror-reverse. In the Standard Model, CP is enforced by a symmetry in the theory that underpins particle interactions. But no such symmetry exists in the blueprint for sterile neutrinos, which means that when they decay they could more readily produce matter than antimatter. Perhaps, in the early universe, a decay of sterile neutrinos en masse laid the foundations for the matter-based planets, stars and galaxies we see today.

Then there are the neutrino masses themselves. These cannot be explained easily by the Higgs, which gives the masses of many other particles in the Standard Model, because the neutrinos would have to couple to it in an oddly weak manner. However, their masses could be explained with the so-called seesaw mechanism. The details of this are complex, but the idea is that heavy sterile neutrinos would “mix” with the known active neutrinos, lifting their masses slightly above zero. In fact, the seesaw mechanism, which was developed by the Swiss theoretical physicist Peter Minkowski and others in the 1970s, provided the first theoretical basis for sterile neutrinos.
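The seesaw idea can be illustrated with the usual one-line estimate m_light ≈ m_D²/M, in which m_D is a Dirac mass of around the electroweak scale and M is the heavy sterile-neutrino mass. Both values below are illustrative assumptions, not figures from the article:

```python
# Back-of-envelope seesaw estimate: the heavier the sterile state,
# the lighter the active neutrino it mixes with -- hence "seesaw".
m_D = 100.0   # GeV, Dirac mass near the electroweak scale (assumed)
M = 1e15      # GeV, heavy sterile-neutrino mass (assumed, near the
              # grand-unification scale)

m_light = m_D**2 / M          # active-neutrino mass, in GeV
m_light_eV = m_light * 1e9    # the same mass, converted to eV
```

With these numbers the active-neutrino mass lands near 0.01 eV, the same order as the square roots of the measured mass-squared differences, which is part of the seesaw mechanism's appeal.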

Dark matter, CP violation, neutrino masses – at a glance you might wonder why the sterile neutrino has not always been a target for experimental particle physics. The reason probably lies in the nature of the sterile neutrino itself. While the results from terrestrial accelerator, reactor and radioactive-source experiments are mostly compatible with a mass-squared difference of about 1 eV², dark matter would need a mass-squared difference of the order of 1 keV², and CP violation would need a mass-squared difference of 100 GeV² or more. Somewhat frustratingly, the original solar and atmospheric active-neutrino oscillations give no hint of what the mass of the sterile neutrino should be. “Their mass can be anything – from 0.05 eV to 10¹⁵ GeV,” says Oleg Ruchayskiy, a particle physicist at the Swiss Federal Institute of Technology in Lausanne. “Really anything.”

One might think the simplest solution to three mysteries would be the existence of three sterile neutrinos – one to account for the oscillations seen in the LSND and other terrestrial experiments (with a mass of ~1 eV), one to account for dark matter (~1 keV) and one to account for the dominance of matter via CP violation (~100 GeV). To be sure, the existence of three sterile neutrinos would neatly mirror the known existence of three active neutrinos. But it turns out that even this scenario is problematic, because CP violation alone actually requires the existence of two sterile neutrinos with masses of around 100 GeV – meaning that if sterile neutrinos are to solve all the mysteries, there must be more than three of them. Partly for that reason, cosmologists often ignore the oscillations seen in the LSND and elsewhere and see sterile neutrinos only as a solution to dark matter and CP violation; for this scenario, they turn to a model known as the neutrino minimal standard model, which contains the three sterile neutrinos necessary for that purpose. If theorists do want to clear up the terrestrial oscillation results as well, they will have to turn to a heftier Grand Unified Theory, which can contain four – or indeed many more – sterile neutrinos.

Sterile neutrinos may not be as simple a solution to the biggest mysteries as they might at first seem

So, sterile neutrinos may not be as simple a solution to the biggest mysteries as they might at first seem. Still, there appears to be a growing desire among particle physicists to find out, once and for all, whether they exist.

The Perseus Cluster

For sterile neutrinos at dark-matter masses (those at kilo-electronvolt scales), X-ray telescopes such as XMM-Newton, Chandra and the Japan Aerospace Exploration Agency’s Suzaku could provide more data that will settle the question. Although these cannot provide direct evidence for sterile neutrinos, a signal that varies correctly in proportion to the source – that is, more X-rays emanating from galaxy clusters than from emptier regions of space – would be strong evidence in favour of sterile-neutrino dark matter.

Meanwhile, studies of the cosmic microwave background (CMB) – the oldest light in the universe – can provide constraints on how light a sterile neutrino could be. Any particle with a very small mass can travel at relativistic speeds – that is, close to that of light – enabling it to transport energy quickly from one region of space to another. In the early universe, this process was crucial for the formation of the first cosmic structures, and it turns out that measurements of the CMB can place limits on the summed masses of all the relativistic particle species. According to ESA’s Planck satellite, this figure is about 0.2 eV, which goes against the existence of a 1 eV sterile neutrino.

But there is another cosmological parameter that might yet go in favour of a new particle. In conjunction with certain other data, measurements of the CMB can provide an estimate of the total number of relativistic neutrino species in the early universe, neff. A few years ago this parameter was calculated to be about 4; after the latest analysis from Planck, neff came down to about 3.3. Given its uncertainty of ±0.3, the result is now compatible with three light neutrinos, but optimists see room for hope. “It seems to want to be a value greater than three,” says Jon Link, an experimental neutrino physicist at Virginia Tech in the US.

Well grounded

The most concerted effort to find sterile neutrinos, however, is back on Earth. A white paper authored by sterile-neutrino specialists in 2012 lists more than 20 proposed experiments to search for the particles. These range from accelerator to reactor and radioactive-source experiments; from experiments that search for electron-neutrino disappearance to those that search for muon-neutrino disappearance or muon-to-electron neutrino transitions. But, “realistically, only five or so of these will be pursued, and maybe only two funded”, says Guenette.

Guenette is working on one of those that is being funded – a successor to MiniBooNE, called MicroBooNE. A 150 tonne tank of liquid argon, MicroBooNE ought to be able to rule out the main concern about MiniBooNE’s 2010 result: that the detector mistook photons – a very normal feature of background noise – for electron antineutrinos. That is because argon is less sensitive to a photon background than the mineral oil used in MiniBooNE.

Commissioning for MicroBooNE begins this autumn, and Guenette expects the data analysis to take three years. True, more evidence for a 1 eV sterile neutrino will not necessarily solve any cosmological mysteries. But Guenette believes a discovery would make the concept of heavier sterile neutrinos more palatable.

Joachim Kopp, a theorist at the Max Planck Institute for Nuclear Physics in Heidelberg, Germany, agrees. “There is no scientific argument for why a sterile neutrino at electronvolt scales would imply the existence of others,” he says. “But I would say it would make theorists more comfortable about the idea.”

How to give a great TEDx talk

Bob Crease at TEDx, CERN, 24 September 2014

By Robert P Crease in CERN, Geneva

It’s great to go first.

Then you can actually listen to the other performances without fretting about your own. Somewhere near the middle of my TEDxCERN talk yesterday (Wednesday 24 September) I stopped being aware of the timer at my feet, began to have fun and left the stage at the end without even noticing whether I had exceeded my time limit. I made a brief stop backstage to lose my “Madonna” – a microphone that’s not on a neck clip or attached to a headset but extends out from an ear brace – then retook my seat in the front row.

Copyright © 2026 by IOP Publishing Ltd and individual contributors