
Relativity’s new revolution

Einstein’s equations of general relativity are like the Himalayan mountains – beautiful and majestic when viewed from a distance, but slippery and full of crevasses when explored up close. Of those who venture into them, not everyone comes back alive. A set of 10 independent, nonlinear partial differential equations, Einstein’s equations relate the energy and matter in a region of space to its geometry. Astonishingly simple when expressed in the geometric, coordinate-independent language of tensors that Einstein ultimately hit upon, the equations – when applied to real situations – unfortunately become coupled beasts unlike anything physicists had tamed since the days of Newton.
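
For readers who want to see them, the field equations can be written compactly (a standard textbook form, not reproduced from the article itself) as

    $$R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} = \frac{8\pi G}{c^4}\,T_{\mu\nu},$$

where the left-hand side encodes the curvature of space–time through the metric $g_{\mu\nu}$ and its derivatives, and the stress–energy tensor $T_{\mu\nu}$ on the right describes the matter and energy content. The symmetry of the tensors is what reduces the 16 components to 10 independent equations.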

Einstein’s equations of general relativity can be solved exactly only in a handful of cases – with one of the first such solutions, and perhaps its most famous, being that derived by the German astronomer Karl Schwarzschild in 1916 for the simple case of a static, spherical, uncharged mass in a vacuum. Schwarzschild’s assumptions, and his mathematical wizardry, reduced the Einstein equations to a single, ordinary differential equation that he was able to readily solve, though even the master himself was surprised at the possibility of an exact solution. The “Schwarzschild solution” leads naturally to the concept of a black hole (see “Black holes: the inside story”), although Schwarzschild himself never grasped the significance of the singularity in his solution, dying four months later on the Russian Front during the First World War. Even Einstein thought the Schwarzschild singularity – the radius where the solution is invalid because of division by zero – was physically meaningless, and it was only decades later that the depths of the Schwarzschild solution were plumbed in general relativity’s first golden age, which ran from about 1960 to 1975, by Roger Penrose, Kip Thorne, Stephen Hawking and many others besides.
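
In standard Schwarzschild coordinates the solution takes the textbook form (again a standard statement, not quoted from the article)

    $$ds^2 = -\left(1 - \frac{r_s}{r}\right)c^2\,dt^2 + \left(1 - \frac{r_s}{r}\right)^{-1}dr^2 + r^2\,d\Omega^2, \qquad r_s = \frac{2GM}{c^2},$$

which makes the troublesome radius explicit: at $r = r_s$ the coefficient of $dr^2$ divides by zero. It took decades to establish that this is merely a failure of the coordinates at the event horizon, and that the only true curvature singularity sits at $r = 0$.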

As theories go, general relativity has been a great success. Most famously, its early approximate solutions accounted for a well-known discrepancy in the orbit of the planet Mercury that could not be explained using classical Newtonian physics, yielding a value for the difference that agreed spot-on with astronomical measurements. Einstein’s equations have also predicted that light bends in a gravitational field and even that radar signals are delayed when bounced off one of our solar system’s inner planets. However, these successes are all based on the “post-Newtonian” approximation of the full Einstein equations, where speeds are small compared with that of light and gravitational fields are weak. Einstein’s general relativity has never been tested in the vastly different “strong field” regime.

Thanks, however, to fast and powerful supercomputers, physicists can now crunch by brute force through Einstein’s equations using advanced computational algorithms. Using what is known as “numerical relativity”, we can explore physical regimes where space–time is far from the simple, flat, 4D world of special relativity, obtaining accurate solutions even where gravity is strong and so space and time are stretched and twisted. Indeed, theorists have already made some major breakthroughs in solving Einstein’s equations with computers, leading to specific predictions that astronomers can now test.

With analysis and observation converging, new insights have been gained into some of the most energetic and spectacular phenomena in the universe that are, in turn, pushing the numerical relativists to study even more complex systems, in realms into which physics has never delved before. These new methods have uncovered the possibility of “rogue” black holes, kicked from their galactic lairs to rush silently through intergalactic space. They have even become a tool for understanding the dynamics of black-hole pairs, for probing the equation of state of neutron stars, and for helping us to design future space-borne detectors for hunting gravitational waves – tiny oscillations in the fabric of space–time itself. It is, it has been said, general relativity’s new golden age.

Subtle and malicious

Relativists had been trying since the 1960s to numerically solve Einstein’s equations, but extracting the physics from even simple cases proved exceedingly difficult. Early on, theorists formulated clever ways to package the problem for a computer by dividing 4D space–time into a stack of 3D surfaces labelled by a time parameter. But those using such approaches found that their computer programs crashed after becoming unstable or suffering large numerical errors – even in simple cases such as two black holes colliding head-on. It seemed that Einstein might have been wrong after all: the Lord was both subtle and malicious.
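
Schematically, this “3+1” splitting rewrites the space–time interval in the so-called ADM form (a standard construction that the article alludes to rather than spells out):

    $$ds^2 = -\alpha^2\,dt^2 + \gamma_{ij}\,(dx^i + \beta^i\,dt)(dx^j + \beta^j\,dt),$$

where $\gamma_{ij}$ is the metric of each 3D surface, the lapse $\alpha$ fixes how far apart neighbouring surfaces sit in time and the shift $\beta^i$ fixes how spatial coordinates are carried from one surface to the next. Evolving $\gamma_{ij}$ surface by surface is what the early codes attempted – and what kept going unstable.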

The problem gained added urgency in the 1990s as the US began planning the Laser Interferometer Gravitational-Wave Observatory (LIGO) – two giant interferometers in Washington and Louisiana that eventually began taking data in 2002 in the still-ongoing quest to detect gravitational waves. In order to extract the tiny gravitational-wave signals from background noise, designers needed to know the exact form of gravitational waves that hopefully would wash over the apparatus – in particular their amplitudes and frequencies – because this would determine precisely by how much, and how fast, the arms of the interferometer would change in length. But at the time, theorists investigating the astrophysical phenomena that were expected to generate such waves, especially two black holes merging, could only help LIGO’s designers in general terms. “In the 1990s the Einstein equations for two black holes colliding became the holy grail of general relativity,” recalls Laura Cadonati, a gravitational phenomenologist at the University of Massachusetts at Amherst, who applies numerical results to astrophysical systems.

The problem was not simple. In addition to producing instabilities, the programs ultimately needed to span a time period long enough to cover the final few orbits of an inspiralling black-hole pair, their merger and the subsequent settling (called the “ringdown”) of the final black hole. The relativists were stuck: their computers, and especially their methods, could address various parts of the problem – in two spatial dimensions, or just up until the merger – but not an entire event as it would occur in the real universe.

Then, in 2005, a postdoc at the California Institute of Technology, working largely on his own, stunned the relativity community with a stable numerical simulation of two equal-mass, initially non-spinning black holes from their final orbit through to the ringdown (figure 1). Frans Pretorius formulated the Einstein equations in a different way from how others were doing it, leaving him with fewer and slightly simpler equations to solve. His trick was to use coordinates that made the partial differential equations describing the changes in space–time identical in form to the standard wave equation that physicists knew and loved so well.
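
The idea, in outline (a sketch of the standard generalized-harmonic trick, not Pretorius’s code itself), is to choose coordinates that themselves satisfy a wave equation, $\Box x^\mu = H^\mu$, whereupon the Einstein equations reduce to

    $$g^{\alpha\beta}\,\partial_\alpha\partial_\beta\,g_{\mu\nu} = S_{\mu\nu}(g, \partial g),$$

a set of nonlinear wave equations for the metric components whose principal part is exactly the familiar wave operator, so that decades of numerical experience with hyperbolic equations could be brought to bear.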

“Several things came together,” says Pretorius, recalling his triumph, adding that “there was luck involved, too”. Pretorius eventually spent two years on the problem, which he says involved helpful insights from colleagues including David Garfinkle and Carsten Gundlach, lots of coding elbow grease and a supercomputer program that ran off-and-on for two months. It was, for him, “pure agony”.

Pretorius, who is now at Princeton University, found that the merger yielded a single spinning black hole weighing 1.90 times the mass of one of the initial black holes. It had an angular momentum of about 0.70 times the square of the final black-hole mass, and roughly 5% of the total initial mass was radiated away as gravitational waves – figures that no-one had calculated before. Pretorius also computed the detailed waveform of the emissions in terms of a scalar function that classifies space–time, which can be related to the time-varying amplitude of a gravitational wave and, in turn, the minute fractional changes in the length of the arms of a gravitational-wave detector. As his program kept going without crashing, Pretorius thought “Oh God, this can work” until he experienced what he says was “instant gratification with an endorphin rush” when it was finally complete. Pretorius’s approach, now known as the generalized harmonic formulation, broke the field’s logjam.
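
That scalar is the Newman–Penrose quantity $\Psi_4$, one of the complex Weyl scalars used in the Petrov classification of space–times. Far from the source it relates to the two gravitational-wave polarizations through (a standard identity, stated here for context)

    $$\Psi_4 = \ddot{h}_+ - i\,\ddot{h}_\times,$$

so integrating it twice in time recovers the strain $h$ that a detector would actually measure.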

Later that year researchers at the University of Texas at Brownsville and NASA’s Goddard Space Flight Center independently developed another technique for black-hole numerical solutions, called the “moving puncture” method, that was promptly adopted by much of the community because it was more accurate, albeit at the expense of greater computational complexity. A rough 2D analogue is a model of space–time where two parallel sheets of cloth, each with a disc removed at a black hole’s event horizon, are sewn together around the disc perimeters. These punctures – the interiors of the black holes removed from the computational domain – then move around the lattice grid that represents space–time as the computation progresses, revealing the motion through time of the black holes’ horizons.

“Very quickly everyone got it, along both approaches,” says Luis Lehner of the Perimeter Institute for Theoretical Physics in Waterloo and the University of Guelph, both in Canada. The challenge now for Lehner and others was to find out “how fast can we get the answers out, and where can we go looking for the unexpected to further our understanding and raise further questions”. Researchers at Goddard soon computed the merger of unequal-mass black holes for the first time, studying in the process the accompanying recoil of the final black hole. The result was found to depend only on the ratio of the masses of the merging black holes, not their individual values, making the calculated gravitational waveform applicable to a range of astrophysical situations. The overall energy released in the process – and the time taken for the two holes to merge – is proportional to the total mass, meaning that the merger could briefly outshine all the stars in the universe combined.

These first simulations were of black holes that were not spinning before they collided, and it was not long before a research group at the University of Texas at Brownsville carried out the first investigation of the merger of spinning black holes – both with their spin axes aligned and misaligned. Indeed, continuing advances in technique and computing power allowed researchers to calculate what happens as these spinning black holes collide over a range of different orbits. Theorists and experimentalists began to mix, not quite as cats and dogs but, as Cadonati politely puts it, “to improve the potential of gravitational-wave science and how that matches with astrophysics” (figure 2). The former slaved away plugging actual numbers into their elegant equations, while the experimentalists fished out their postgrad notes on tensor analysis.

Black holes get a kicking

In 2007 numerical relativists found a surprise emerging from their simulations. Straightforward considerations of the mechanics of unequal-mass, inspiralling black holes suggested that, in order to conserve angular momentum, the gravitational radiation they produce will not be emitted equally in all directions. The implication was that the final black hole produced when the two bodies collide ought to have some linear momentum relative to the centre of mass: they will in effect be given a “kick”. But full simulations by Manuela Campanelli and colleagues at the Rochester Institute of Technology in New York, and then José González and co-workers at the University of Jena in Germany, showed that this momentum was far from small: the final black hole could have a speed of up to 4000 km s⁻¹ for holes spinning in opposite directions. (Stars near our own Sun, in comparison, move at barely a few tens of kilometres per second.)

Recently, even higher speeds, or “superkicks”, of up to 15,000 km s⁻¹ have been found, with some theorists suggesting that speeds three times higher still – or 15% of the speed of light – might be possible. Because such kicks would be greater than the escape velocity of any galaxy, the finding opens up the possibility of black holes living in galactic halos far away from their galactic nucleus, or perhaps even single, rogue black holes cannonballing through the universe. These holes would be largely invisible until they roamed through, say, the Oort cloud of comets that lies within about a light-year of the Sun, when it might be possible to detect them through tiny, mysterious shifts in the movements of comets or minor planets. In the unlikely event that a rogue black hole should barrel through our solar system, we would quickly be relieved of all our earthly worries.

Less catastrophically, superkicks have implications for those searching for gravitational waves. Black holes ejected from globular clusters – collections of stars that orbit galactic cores as satellites – would lower the subsequent merger rate for black holes remaining in the cluster, and so would reduce the number of gravitational waves expected at detectors. Large recoils would also have ejected high-velocity black holes from their early hosts, and could constrain how early in the universe small seed black holes could have merged into larger black holes.

A spectroscopy of the heavens

Numerical relativity has played a key role as well in the search for gravitational waves, even if the added complication of rogue black holes is probably the last thing that those involved need, given that detecting these tiny ripples is hard enough as it is. The problem is that although sources such as binary stars radiate enormous amounts of energy as gravitational waves – at rates of 10²⁸ W or more – by the time those waves reach Earth, their deviation from flat space will alter the length of an interferometer’s arm by a fraction of only 10⁻¹⁸, or even less. Gravitational-wave interferometers, such as LIGO in the US, VIRGO in Italy, TAMA in Japan and GEO600 in Germany, must therefore detect these tiny length differences when a gravitational wave washes over them.
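
Schematically (a standard relation, not from the article itself), an optimally oriented wave of strain amplitude $h$ stretches one arm of length $L$ while squeezing the other, with

    $$\Delta L = \tfrac{1}{2}\,h\,L,$$

which is why kilometre-scale arms and extraordinary displacement sensitivity are both essential: even a multi-kilometre arm changes length by about the diameter of a proton or less.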

Gravitational-wave hunters are especially interested in stellar-mass and intermediate-mass black holes because they produce waves at frequencies of 10–10,000 Hz when they merge – exactly the range that ground-based detectors such as LIGO are most sensitive to. But because the Earth is a shaky place, the number crunchers need some guidance as they try to distinguish the minuscule fluctuations of gravitational waves from seismic shifts and even passing trains. Knowing what waves to expect helps a great deal.

Towards this end, the Numerical Injection Analysis project (NINJA) was started in 2008, bringing together numerical-relativity groups and data-analysis teams from 30 institutions across the globe. Relativists provide waveform templates in the form of ASCII data files that specify their predictions for the time-varying weights of the waves when decomposed into spherical harmonics. These must cover the broad parameter ranges of black-hole mergers – mass ratios, spins and eccentricities – that are most likely to occur. Even the simple case of a binary black hole has 17 variables, or degrees of freedom, among the source and detector configurations.
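
The decomposition in question is the standard multipole expansion of the waveform (a textbook form, not NINJA’s file format itself): the two polarizations are packaged as a complex strain and expanded in spin-weight –2 spherical harmonics,

    $$h_+ - i\,h_\times = \sum_{\ell\ge2}\ \sum_{m=-\ell}^{\ell} h_{\ell m}(t)\,{}_{-2}Y_{\ell m}(\iota, \phi),$$

so a template file need only tabulate the time-varying mode amplitudes $h_{\ell m}(t)$, from which the wave in any direction $(\iota, \phi)$ can be reconstructed.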

But the methodology works. On 16 September 2010, for example, detector scientists were alerted to a “chirp” signal only minutes after its arrival. After analysing it, members of the LIGO and VIRGO collaborations reported the discovery of gravitational waves, seemingly from a neutron star spiralling into a black hole. They even wrote up a paper about it – only to be told that the event was a “blind injection” of fake data put in by project insiders. The researchers had been told of such a possibility beforehand, and although their paper went unpublished, their techniques were validated, as was their vigilance.

While the results from numerical relativity have gone some way towards helping gravitational-wave researchers, they could play a still bigger role in upcoming missions, notably the Advanced LIGO facility – an upgrade to LIGO that will search a volume of space 1000 times bigger than the existing facility can, and that is expected to begin science operations in 2015. The first-generation LIGO had about 10,000 expected waveforms in its database, while Advanced LIGO will have about 100,000. Comparing data to such a large number of possibilities is, needless to say, computationally intensive.

Indeed, in January the National Science Foundation awarded Syracuse University in the US almost $800,000 to build a supercomputer that will eventually have almost 500 terabytes of storage for just this purpose. “It’s the Advanced LIGO detectors that people are looking to really open up the field of gravitational-wave astronomy,” says Syracuse’s Duncan Brown, who is a member of the LIGO collaboration. Syracuse’s machine will be one of three such devices designed for the purpose, the others being at the University of Wisconsin–Milwaukee and at the Albert Einstein Institute for Gravitational Physics in Germany.

The details of gravitational waveforms depend on many factors. Relativists have studied systems that are more complex than binary black holes, such as a neutron star colliding with a black hole, or pairs of neutron stars, and recently have even moved on to inspiralling binaries with external magnetic fields and their surrounding plasmas, finding that these can lead to powerful jets that could be observed with X-ray telescopes. These interactions require the solution of the full Einstein equations coupled with hydrodynamic equations for the plasma, which in turn require an equation of state for the neutron star. Gravitational waves might therefore someday help us to distinguish between different models of neutron stars – a kind of “spectroscopy of the heavens”.

Adding yet another facet to the problem, Yuichiro Sekiguchi and other theorists from Kyoto University in Japan recently studied the behaviour of a neutron-star pair described by Einstein’s equations coupled with hydrodynamic equations, while incorporating the cooling of the final hypermassive neutron star by neutrino emission. They found both the gravitational-wave spectrum and the luminosity of neutrino emissions from the final star; the latter could be higher than even that seen in supernova explosions that already shine bright in ordinary heavenly light. Future astronomers will view all of these extreme events with three eyes: via gravitational waves, electromagnetic waves and neutrino bursts.

Scale me up

Picking out the best details will require a third generation of gravitational-wave detectors. With the existing LIGO detector, the gravitational waves of a binary neutron star are only in a detectable band for about 25 s (and about 1 s for a binary black-hole system). Advanced LIGO could follow a signal for about 1000 s, although this still only represents the last thousand seconds of a coalescence that has been billions of years in the making.

The future lies in scaling up. The proposed Laser Interferometer Space Antenna (LISA) system – three satellites that would be five million kilometres apart in a planetary-like orbit around the Sun – would see gravitational waves (in the band 0.1 mHz – 1 Hz) that could last hours, weeks or even months, out to redshifts of 5–10. Unfortunately, LISA’s realization is currently uncertain; NASA bowed out of the project this year and, although the European Space Agency said it might launch a smaller version, no decision has yet been made.

European researchers are, however, planning to build what is dubbed the Einstein Telescope – a gravitational-wave detector that would be built a few hundred metres below ground with two arms each a massive 10 km long. It would be 10 times more sensitive than even Advanced LIGO and able to access a million times the space-volume of current ground-based detectors. Although today’s best numerical simulations are good enough for the accuracy needed for such a detector, studying the entire 9D parameter space of even a black-hole binary without matter could take another decade.

Still, as with many breakthroughs, today’s new golden age of relativity is opening vast unexplored areas of physics, with many surprises surely to come. It might be almost 100 years since Einstein came up with his equations, but his gift is giving still. Today is a good time to be in the gravity business.

At a Glance: Numerical relativity

  • Einstein’s general theory of relativity describes the relationship between the energy and matter in a region of space and its geometry, and has passed all experimental tests to date
  • Unfortunately, Einstein’s equations are fiendishly complicated and can be solved exactly in just a handful of cases
  • Powerful supercomputers can, however, crunch through the equations using brute force
  • This approach, known as “numerical relativity”, has been used to study how black holes merge, showing that in some cases they might create rogue holes rushing through intergalactic space
  • Numerical relativity is also helping researchers seeking the signatures of gravitational waves

Black holes: the inside story

Amazingly, given their heft and their surly reputation, black holes are among the simplest objects in the universe and can be fully characterized by just three numbers – their mass M, charge Q and angular momentum or “spin” J. Indeed, the Indian Nobel-prize-winning astrophysicist Subrahmanyan Chandrasekhar, who first predicted that they might be created when large stars die, called black holes “the most perfect macroscopic objects there are in the universe”. Black holes come in three main varieties:
1 stellar-mass black holes, with masses about 3–30 times the mass of the Sun;
2 intermediate-mass black holes with about 100–10,000 solar masses, such as (almost all astronomers would now agree) the Hyper-Luminous X-ray source (HLX-1), which lies in a galaxy 290 million light-years from Earth;
3 supermassive black holes that lord over the centres of galaxies, with millions to billions of solar masses.

In terms of spin, at one extreme is a Schwarzschild black hole, which has zero spin, while an extreme Kerr black hole, carrying no charge, has the maximum spin allowed by general relativity of GM²/c, where G is the gravitational constant and c is the speed of light.
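
Relativists usually express this through the dimensionless spin parameter (a standard convention, noted here for context)

    $$a_* = \frac{cJ}{GM^2}, \qquad 0 \le a_* \le 1,$$

which is zero for a Schwarzschild hole and one for an extreme Kerr hole – and which makes sense of figures such as the 0.70 quoted earlier for the remnant of an equal-mass merger.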

More about: Numerical relativity

J Centrella et al. 2010 Black-hole binaries, gravitational waves, and numerical relativity Rev. Mod. Phys. 82 3069
M Hannam 2009 Status of black-hole-binary simulations for gravitational-wave detection Class. Quant. Grav. 26 114001
D Merritt and M Milosavljevic 2005 Massive black hole binary evolution Living Rev. in Relativity 8 8
F Pretorius 2009 Binary Black Hole Coalescence, in Physics of Relativistic Objects in Compact Binaries: from Birth to Coalescence ed M Colpi et al. Astrophysics and Space Science Library vol 359 (New York, Springer)

Evaluations evaluated

Earlier this year, I canvassed readers on the best way to get the measure of potential PhD students (May column). To get you thinking, I described the experiences of a Stony Brook colleague and myself in giving graduate-school applicants cleverly chosen physics problems – and then asking them not to solve each problem, but to explain the solution as if tutoring an undergraduate.

Demetris Charalambous, a former lecturer in mathematical physics at the University of Lancaster, UK, proposed one interesting challenge along these lines. Consider two identical metal spheres, one hanging from a piece of string and the other resting on a table. If you supply each with the same quantity of heat and ignore heat transfer to the table, string and so on, which sphere ends up hotter? Isn’t the student you want, he asked, the one who first spots the relevance of the centre of gravity?

Colin Pykett, now retired after having worked in several UK Ministry of Defence labs, remembered five challenges for prospective recruits. First, why is it that a mirror reverses your facial features but does not make you appear upside down? Second, why are there two tides a day? Third, if you are standing on a pavement flanked by a fence with vertical posts at regular intervals receding into the distance, what might happen if you clap your hands? Fourth, is the received power versus range law for a radar or sonar system inverse square and, if not, what is it and why? Finally, what are some differences between optical microscopy and X-ray crystallography?

Martin van Exter, from the Huygens Laboratory at the University of Leiden in the Netherlands, asks prospective students to explain Maxwell’s equations. He does not expect a treatise, just basic lines of reasoning, such as that the divergence of the electric (E) field is linked to the charge (or charge density), that the divergence of the magnetic (B) field is zero (because there can be no monopoles), that electric current acts as a generator of magnetic field and that the E and B fields are linked via time and space derivatives (thus containing the speed of light).
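
For reference, the lines of reasoning van Exter describes map directly onto the four equations in SI form:

    $$\nabla\cdot\mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla\cdot\mathbf{B} = 0, \qquad \nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t}, \qquad \nabla\times\mathbf{B} = \mu_0\mathbf{J} + \mu_0\varepsilon_0\,\frac{\partial\mathbf{E}}{\partial t},$$

with the speed of light entering through $c = 1/\sqrt{\mu_0\varepsilon_0}$ when the last two equations are combined into a wave equation.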

Peter Haynes, an admissions tutor at the doctoral-training centre in theory and simulation of materials at Imperial College London, says his colleagues prefer not to ambush applicants. Instead, in advance they send each candidate three straightforward problems – a 1D heat-diffusion equation, a 1D particle in a box and a cantilever bending under its own weight – to be explained at the interview. “We found that advance warning was vital to enable us to get to the stage where we could ask probing questions on the timescale of an interview,” says Haynes. He notes that some candidates treat the problems as purely algebraic exercises – struggling when asked how to interpret the final mathematical expression physically – while others write down equations they have memorized. The best students, in Haynes’ view, focus on the relevant physical principles and then express these mathematically. “They can readily sketch solutions and know how to interpret results,” he says.

Other means

Darryl Holm, also at Imperial, objected to the challenge-problem approach, noting that it was “exploitative” to treat PhD applicants as “cheap undergraduate instructors”. He has two alternative methods. One is to ask prospective PhD students what they love about science and maths. The answers, he says, make it easy to separate the passionate from the diffident. Another is to show them a rattleback – a kind of top that refuses to spin in one direction and can change its rotation to a preferred direction. “If, when they see it unexpectedly reverse spin, they start laughing, or say ‘Oh shit!’ or are otherwise energized, I know I am on the right track,” he says. The blasé or confused ones drop to the bottom of the list.

Harvey Buckmaster, an adjunct physics professor at the University of Victoria in Canada, says that in his three decades of experience with graduate students, one good marker is their extra-curricular activities. “Those with significant interest and involvement in cultural activities have done more significant research and wrote better theses,” he says, adding that creativity in the arts is “no different from that in the sciences”.

The critical point

George Hart of the Museum of Mathematics, which is set to open next year in New York City, gave me a good challenge problem in person that might be asked of maths students: “Why does a negative times a negative equal a positive?” he asked.

I had no idea.

Hart, who is a former colleague of mine at Stony Brook, said the answer can be explained simply as a consequence of the distributive law. He hunted for pencil and paper, jotted down 1 + –1 = 0, multiplied both sides by –1, applied the distributive law a(b + c) = ab + ac, rearranged terms and ended with –1 × –1 = 1. He said another way was to use scaling, and drew a number line. Multiplication, he explained, “scales” numbers from the 0 point: multiplying by 2, for instance, doubles the distance from 0 in either direction, while multiplying by a negative number flips a distance from 0 by 180°. “A negative times a negative is thus a double 180° transformation,” concluded Hart.
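
Written out (following the steps Hart describes, not his exact notation), the distributive-law argument runs:

    $$1 + (-1) = 0 \;\Rightarrow\; (-1)\cdot\bigl(1 + (-1)\bigr) = (-1)\cdot 0 \;\Rightarrow\; (-1)\cdot 1 + (-1)\cdot(-1) = 0 \;\Rightarrow\; (-1)\cdot(-1) = 1.$$

The only extra ingredients are that multiplying by 1 leaves a number unchanged and that anything times zero is zero.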

I asked if a prospective maths PhD would know this. “Sure!” he said. “Mathematics isn’t a set of arbitrary definitions. Each definition is bound up with other parts in an architecture – it has consequences. Even an undergraduate, if really interested in mathematics, will have thought about this issue enough to have some way of describing how it fits in that architecture.”

Physics, of course, has a similar structure – its different laws and definitions, too, are bound up in a fabric. That is what makes the challenge-problem approach effective. A student’s promise is not measured best by the ability to memorize formulae, but by how well they understand this fabric, and know how to fit even simple problems into it.

Quasicrystal discovery bags 2011 chemistry Nobel

 

The 2011 Nobel Prize for Chemistry has been awarded to Dan Shechtman from Technion – Israel Institute of Technology for his discovery of quasicrystals – materials that have ordered but not periodic structures. Shechtman’s discovery, which he made in 1982 while studying a rapidly cooled alloy of aluminium and manganese, generated huge excitement, confusion and significant opposition. The Journal of Applied Physics, for example, rejected Shechtman’s original paper detailing the discovery on the grounds that it would not interest the physicists who read the journal. Linus Pauling – a giant of 20th-century crystallography – also dismissed the findings.

Before Shechtman’s discovery, most researchers thought that long-range order in physical systems was impossible without periodicity. Atoms were believed to be packed inside crystals in symmetrical patterns that were repeated periodically over and over again – and that this repetition was required to obtain a crystal. However, Shechtman found that the atoms in his crystal were packed in a pattern that could not be repeated and yet had “10-fold” rotational symmetry.

A system is said to possess n-fold rotational symmetry if it looks the same after it has been rotated through 360/n degrees, which meant that Shechtman’s sample was unchanged after being rotated through 36 degrees. Before his discovery, a periodic system was only supposed to have either 1-, 2-, 3-, 4- or 6-fold rotational symmetry, with anything else forbidden by the laws of crystallography.

Since Shechtman’s breakthrough, however, hundreds of different quasicrystals have been found, including icosahedral quasicrystals that have 2-fold, 3-fold and 5-fold rotational symmetry. There are also octagonal (8-fold), decagonal (10-fold) and dodecagonal (12-fold) quasicrystals that exhibit “forbidden” rotational symmetries within 2D atomic layers but that are periodic in the direction perpendicular to these layers.

In awarding the prize, the Royal Swedish Academy of Sciences says in a statement that Shechtman “had to fight a fierce battle against established science” to have his finding accepted as “the configuration found in quasicrystals was considered impossible”. It adds that this year’s Nobel prize has “fundamentally altered how chemists conceive of solid matter”.

Order from disorder

Shechtman announced his controversial discovery while on sabbatical in the US at the National Bureau of Standards in Washington, DC, where he was investigating the properties of mixtures of metals that had been melted together and rapidly cooled. Opposition to his finding was fierce, with Pauling, for example, suggesting that the observed diffraction pattern was caused by five crystals rotated by 72 degrees relative to one another, rather than being caused by just one crystal with 10-fold symmetry.

But these early doubts were soon swept away by new experimental evidence, and Shechtman’s paper – which was finally published in Physical Review Letters in November 1984 – has since become one of the most-cited research articles in the scientific literature.

Indeed, quasicrystals have led to important discoveries in disciplines as diverse as nanoscience and supramolecular chemistry. Photonic “metamaterials” based on quasicrystals may one day even replace semiconductor devices to make all-optical circuits for communication and information technologies, while quasiperiodic arrays of electronic spins could reveal new aspects of magnetism for spintronics applications.

The “right decision”

Before Shechtman’s discovery, mathematicians were well aware that some functions had the property of being “almost periodic” and that the mathematical basis of this “aperiodicity” had been outlined in 1933 by Harald Bohr (the brother of Niels Bohr). Indeed, quasiperiodic functions are a subset of the family of almost-periodic functions, with the most famous quasiperiodic pattern being Penrose tiling, which was discovered by Roger Penrose of Oxford University in 1974. Penrose tiling is not periodic, since sliding an exact copy of the pattern around will never produce an exact match.
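
A one-dimensional cousin of Penrose tiling makes the idea easy to play with. The short Python sketch below (an illustration of quasiperiodic order in general, not of Shechtman’s alloy) builds the Fibonacci word by repeated substitution; the resulting string of two “tile” types is perfectly ordered yet never settles into a repeating block, and the ratio of the two tiles tends to the irrational golden ratio.

    # Build a 1D quasiperiodic sequence: the Fibonacci word.
    # Substitution rule: A -> AB, B -> A, starting from "A".
    def fibonacci_word(iterations):
        word = "A"
        for _ in range(iterations):
            word = "".join("AB" if letter == "A" else "A" for letter in word)
        return word

    word = fibonacci_word(12)
    print(word[:40])                          # ABAABABAABAAB... ordered, never periodic
    print(word.count("A") / word.count("B"))  # tends to the golden ratio, ~1.618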

Physicist Rónán McGrath from the University of Liverpool in the UK, who has studied quasicrystals for the last 12 years, says Shechtman’s award is “well deserved” and the right decision, even though the term “quasicrystal” was actually coined by the theorists Paul Steinhardt and Dov Levine at the University of Pennsylvania in the US. “Shechtman persisted in believing what he had was genuine,” says McGrath. “He managed to convince the community that he was correct all along. It is right that Shechtman alone is awarded.”

Renee Diehl from Penn State University in the US agrees that the prize is well deserved. “Shechtman was very astute to recognize that he had discovered a new form of crystalline matter,” she says. “This discovery completely changed how we think of crystalline matter and even necessitated a new definition for the term ‘crystal’.”

Born in 1941 in Tel Aviv, Shechtman graduated from the Technion in 1966 with a degree in mechanical engineering and then completed a PhD in materials engineering at the institute in 1972. After a postdoc in the US at the Aerospace Research Laboratories, Ohio, he returned to the Technion in 1975 where he has worked ever since. He was also awarded the Wolf Prize for Physics in 1999.

Physics Nobel will attract controversy

By Hamish Johnston

Assigning credit for a scientific discovery is never easy, especially when two rival, interacting teams of scientists are involved. That is exactly the problem that the Nobel committee must have grappled with before awarding this year’s physics prize to Saul Perlmutter, Adam Riess and Brian Schmidt.

Perlmutter led the Supernova Cosmology Project, while Schmidt and Riess were involved with the High-Z Supernovae programme. Both groups came to the surprising conclusion in 1998 that the rate of expansion of the universe is increasing, not decreasing as had been thought. So a shared prize seems fair enough.

Or is it? In 2007 Bob Crease wrote an extensive article about the same discovery that proved controversial – to say the least. Some members from both teams had been particularly worried about Crease’s article, which went through more than 20 drafts.

At issue was the fact that the teams were rivals using different techniques – as well as the question of who reported and published their work first. What Bob’s article reveals is how deeply scientific progress is indebted to ambition, desire, pride, rivalry, suspicion and other perfectly ordinary human passions.

You can read the article here, and I would also recommend looking at the comments that follow.

Also, let us know what you think by voting in our Facebook poll, where the question is:

Has the 2011 Nobel Prize for Physics for “the discovery of the accelerating expansion of the universe” gone to the right people?

Dark-energy pioneers scoop Nobel prize

The 2011 Nobel Prize for Physics has been awarded to Saul Perlmutter from the Lawrence Berkeley National Laboratory, US, Adam Riess at Johns Hopkins University, in Baltimore, and Brian Schmidt from the Australian National University, Weston Creek, “for the discovery of the accelerating expansion of the universe through observations of distant supernovae”.

Perlmutter has been awarded a half of the SEK10m (£934,000) prize, with Riess and Schmidt sharing the other half. In a statement, the Royal Swedish Academy of Sciences said “For almost a century, the universe has been known to be expanding as a consequence of the Big Bang about 14 billion years ago. However, the discovery that this expansion is accelerating is astounding. If the expansion will continue to speed up the universe will end in ice.”

Going against gravity

Only 25 years ago most scientists believed that the universe could be described by Albert Einstein and Willem de Sitter’s simple and elegant model from 1932 in which gravity is gradually slowing down the expansion of space.

From the mid-1980s, however, a remarkable series of observations was made that did not seem to fit the standard theory, leading some people to suggest that an old and discredited term from Einstein’s general theory of relativity – the “cosmological constant” or “lambda” – should be brought back to explain the data.

This constant had originally been introduced by Einstein in 1917 to counteract the attractive pull of gravity, because he believed the universe to be static. He considered it a property of space itself, but it can also be interpreted as a form of energy that uniformly fills all of space; if lambda is greater than zero, the uniform energy has negative pressure and creates a bizarre, repulsive form of gravity. However, Einstein grew disillusioned with the term and finally abandoned it in 1931 after Edwin Hubble and Milton Humason discovered that the universe is expanding.
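
With the term restored, the field equations read (a standard form, added here for context)

    $$R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} + \Lambda\,g_{\mu\nu} = \frac{8\pi G}{c^4}\,T_{\mu\nu},$$

and moving $\Lambda g_{\mu\nu}$ to the right-hand side recasts it as a fluid with mass density $\rho_\Lambda = \Lambda c^2/8\pi G$ and pressure $p_\Lambda = -\rho_\Lambda c^2$; it is that negative pressure which drives the repulsion.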

In 1987 physicists at the Lawrence Berkeley National Laboratory and the University of California at Berkeley initiated the Supernova Cosmology Project (SCP) to hunt for certain distant exploding stars, known as type Ia supernovae. They hoped to use these stars to calculate, among other things, the rate at which the expansion of the universe was slowing down.

Deceleration was expected because in the absence of lambda, many people thought that “Ω_M”, which is the amount of observable matter in the universe today as a fraction of the critical density, was sufficient to slow the universe’s expansion forever, if not to bring it to an eventual halt.
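
In symbols (a standard definition, included for clarity), $\Omega_M = \rho_M/\rho_c$, where the critical density

    $$\rho_c = \frac{3H_0^2}{8\pi G}$$

is set by the present-day Hubble constant $H_0$; a matter-only universe with $\Omega_M > 1$ eventually recollapses, while one with $\Omega_M \le 1$ expands forever – but in either case the expansion decelerates.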

In 1998, after years of observations, two rival groups of supernova hunters – the High-Z Supernovae Search Team led by Schmidt and Riess and the SCP led by Perlmutter – came to the conclusion that the cosmic expansion is actually accelerating and not slowing under the influence of gravity as might be expected.

The two teams came to this conclusion by studying type Ia supernovae: they found that the light from more than 50 distant supernovae was weaker than expected – a sign that the expansion of the universe was accelerating.

In order to account for the acceleration, about 75% of the mass-energy content of the universe had to be made up of some gravitationally repulsive substance that nobody had ever seen before. This substance, which would determine the fate of the universe, was dubbed dark energy.

It is now thought that dark energy constitutes around 75% of the current universe, with around 21% being dark matter and the rest ordinary matter and energy making up the Earth, planets and stars.

“The findings of the 2011 Nobel Laureates in Physics have helped to unveil a universe that to a large extent is unknown to science,” stated the Academy. “And everything is possible again.”

“My involvement in the discovery of the accelerating universe and its implications for the presence of dark energy has been an incredibly exciting adventure,” says Riess. “I have also been fortunate to work with tremendous colleagues and powerful facilities. I am deeply honored that this work has been recognized.”

New problems

Cosmologist Michael Turner from the University of Chicago says that the award to Perlmutter, Riess and Schmidt is “well deserved”. “The two competing teams is a wonderful story in science – the physicists vs the astronomers,” says Turner. “The biggest surprise to both teams was that the other team got the same answer. Each team believed the other didn’t know what they were doing.”

Turner adds that before the discovery, cosmology was in some disarray with astronomers having a model of the universe based on cold dark matter and inflation, but with not enough matter to make the universe flat – a key prediction of inflation.

“Dark energy and cosmic acceleration was the missing piece of the puzzle,” says Turner. “Moreover, in solving one problem, it gave us a new problem – what is dark energy? I think that is the most profound mystery in all of science.”

Robert Kirshner from Harvard University, who supervised both Schmidt and Riess when they were PhD students, says the decision by the Nobel committee is “great” as it will mean “no more waiting”. “We did a lot of foundational work at Harvard and my postdocs and students made up a hefty chunk of the High-Z Team,” says Kirshner. “[Riess] did a lot after the initial result to show that there was no sneaky effect due to dust absorption and that, if you look far enough into the past, you could see that the universe was slowing down before the dark energy got the upper hand, about five billion years ago.”

Kirshner adds that Perlmutter is also “very deserving” of the prize. “[Perlmutter] was persistent even when his programme was moving slowly and, despite getting a contrary result in 1997, was convinced of cosmic acceleration during 1998 by comparing his own extensive data set of distant supernovae with the nearby supernovae measured by the group in Chile.”

Peter Knight, president of the Institute of Physics, which publishes physicsworld.com, says the work has “triggered an enormous amount of research” on the nature of dark energy. “These researchers have opened our eyes to the true nature of our universe. They are very well-deserved recipients,” says Knight.

Leading lights

Born in Champaign-Urbana, Illinois, in 1959, Perlmutter graduated from Harvard University in 1981 and received his PhD from the University of California, Berkeley, in 1986, where he worked on robotic methods of searching for nearby supernovae. He then moved to the Lawrence Berkeley National Laboratory and the University of California, Berkeley. Perlmutter now heads the SCP based at Lawrence Berkeley National Laboratory.

Schmidt was born in Missoula, Montana, in 1967. He graduated from the University of Arizona in 1989 and received his PhD from Harvard University in 1993 for work on using type II supernovae to measure the Hubble constant. He then stayed on as a postdoc at the Harvard-Smithsonian Center for Astrophysics, during which he and Nicholas Suntzeff from the Cerro Tololo Inter-American Observatory in Chile formed the High-Z Supernovae Search Team, before moving to the Australian National University, where he is currently based.

Riess is also a former member of the High-Z Supernovae Search Team, where he led the 1998 study that reported evidence that the universe’s expansion rate is now accelerating. He was born in Washington, DC, in 1969 and graduated from the Massachusetts Institute of Technology in 1992. Riess received his PhD from Harvard University in 1996, researching ways to make type Ia supernovae into accurate distance indicators. In 1999 he moved to the Space Telescope Science Institute at Johns Hopkins University.

How to spot a multiverse

How can we tell if another universe has collided with our own? Physicists in Canada and the US believe they have the answer – it would leave “a unique and highly characteristic” imprint in the microwave background that pervades the cosmos. The physicists claim that the prediction can be tested using existing and future space telescopes, which contradicts a widespread view that the existence of a multiverse is untestable.

Chuck Bennett, an astrophysicist at Johns Hopkins University in Maryland, US, who was not involved with the study, believes the prediction helps bring multiverse theory into the realms of conventional, falsifiable science. “Science relies on being able to falsify ideas through experiment or observations of nature,” he says. “The fact that these potentialities exist enables us to call this ‘science’. That, to me, is a significant statement.”

The possibility of a multiverse comes from both string theory and inflation theory, the idea that our universe underwent a rapid expansion just after the Big Bang. Inflation theory does a good job of explaining why space is fairly smooth on large scales, but researchers can’t explain what started the expansion and what stopped it. These problems have led physicists to consider the possibility that inflation could occur at other places and times, generating new universes in addition to our own.

Metaphysical problem

The idea of a multiverse is highly controversial. One problem is metaphysical: the universe seems big already, without having to contend with a potentially infinite number of others. Yet perhaps a bigger problem is scientific. If observations are limited to our own observable universe, how can scientists test whether a bigger multiverse exists? The answer to that has been that, from time to time, another universe in the multiverse might collide with ours, leaving a “wake” in its path. But figuring out precisely what such a wake would look like hasn’t been easy.

Now, however, Kris Sigurdson of the University of British Columbia in Vancouver and others say they have calculated the detailed features of a cosmic wake. They have considered the possibility that our universe collided with another after our inflationary period because, they say, inflation would have erased the evidence of any earlier wake. Even though this happened more than 13 billion years ago, the wake would have been preserved in the cosmic microwave background (CMB), which was formed some 380,000 years into the universe’s existence.

Look for a ‘double peak’

The focus of the prediction is in the polarization of photons in the CMB. Photons have two transverse polarization states, and any that come from a certain region in the CMB might be mostly in the same polarization state, or in a mix of both. Sigurdson and colleagues calculate that, providing the wake was big enough, it ought to imprint the CMB with a characteristic “double peak”: two close rings where the photons sway towards a single polarization state.

The prediction is not strictly the first to arise from multiverse theory. In 2007 researchers at the University of California at Santa Cruz, US, also suggested that a cosmic wake could imprint itself on the CMB; then, earlier this year, a group led by Hiranya Peiris of University College London found hints that this prediction was true. But these predicted features were too vague, say Sigurdson and colleagues, and might have existed in the CMB anyway.

Evidence for string theory?

“[Our] features represent the first verifiable prediction of the multiverse paradigm,” write Sigurdson and colleagues in their preprint, which they uploaded to the arXiv server last month. “A detection of a bubble collision would confirm the existence of the multiverse, provide compelling evidence for the string theory landscape, and sharpen our picture of the universe and its origins.” Physics World was unable to speak to the researchers about their preprint because they are submitting it to a journal that employs an embargo policy.

If the prediction is correct, it should be possible to test it in upcoming data from the European Space Agency’s Planck space observatory and future CMB missions, say the researchers. Yet Bennett, the principal investigator on NASA’s Wilkinson Microwave Anisotropy Probe, another CMB space observatory, thinks the detection of a cosmic wake would nonetheless be “extremely unlikely”. He says the amplitude of a wake would have to be just right: too small and we wouldn’t see it; too big and it would probably have had severe consequences for our universe’s structure. The number of collisions would also have to be “fine-tuned”, he says.

Infinite number of wakes

“The claim seems to be that we might see one or two wakes in our sky, but why one or two?” he adds. “Why not none or an infinite number? In fact, if bubble collisions were common we would not be alive to discuss the question.”

Cosmologist Arjun Berera at the University of Edinburgh, UK, also thinks the idea of a multiverse – and by extension Sigurdson and colleagues’ prediction – is speculative. But he notes that a positive detection would be “spectacular”. “Such a case would offer suggestive evidence in support of string theory,” he says. “On the other hand, no evidence in the CMB data for a collision between two universes would not rule out string theory, it would simply extend the widely held belief in the field that string theory is unfalsifiable.”

The research is described in arXiv:1109.3473.

Nobel topics: the people's choice

By Hamish Johnston

It’s all hands to the pumps here at physicsworld.com HQ in the run-up to the physics Nobel prize announcement, which will be made this morning at (or after) 10.30 a.m. BST.

Last week we asked our Facebook followers what field of physics they thought this year’s prize would honour – and now I can reveal the results.

Nearly half of you thought that this year’s prize would go to quantum information. While it’s tough to single out three people who should be awarded the prize, I would think Anton Zeilinger, Dave Wineland and Alain Aspect would be in the running.

In second place with about 29% of the vote is neutrino oscillations, which would put my fellow countryman Art McDonald of SNOLAB in the running along with two researchers from the Super-Kamiokande experiment in Japan.

Less than two hours to go…

Watt set for £50 note

By Matin Durrani

Quiz question: name a scientist who has appeared on a banknote.

Thanks to the powers of Google (other search engines exist) and this informative but possibly out-of-date webpage from University of Maryland physicist Edward Redish, I see that those who have graced various currencies include Bohr (Danish 500 kroner), Marie and Pierre Curie (French 500 franc), Einstein (Israeli five pound note), Kelvin (Scottish pound), Marconi (Italian 2000 lira), Rutherford (100 New Zealand dollar), Schrödinger (Austrian 1000 schilling), Tesla (er, 10 billion Yugoslav dinar) and Volta (Italian 10,000 lira).

Now, a decade after Michael Faraday was ditched in favour of Edward Elgar on the Bank of England’s £20 note, science makes a reappearance in England with James Watt set to appear alongside his Birmingham-based business partner Matthew Boulton on the bank’s new £50 note, which is to enter circulation on 2 November 2011.

You don’t need me to remind you that Watt, born in Scotland in 1736, made his name by designing a new kind of more efficient and powerful steam engine, which he commercialized with Boulton (1728–1809). Their invention pretty much kick-started the industrial revolution, offering as it did cheap quantities of power. Watt, of course, is also honoured through the SI “derived unit” of power.

Boulton and Watt were both fellows of the Royal Society, prompting current president Sir Paul Nurse to call it “wonderful” that they were being celebrated in this way. “Science and engineering have long driven improvements in our knowledge and in our day to day lives,” he added. “At a time when the UK is trying to rebalance its economy, Watt and Boulton are also a reminder of how science and engineering can be the basis of economic growth for the UK.”

Sadly I haven’t actually got one of the lovely new notes to describe in glowing detail what it looks like, so if anyone from the Bank of England would care to supply one, I’d be delighted.

Active galactic nuclei measure the universe

A common type of active galactic nucleus (AGN) could be used as an accurate “standard candle” for measuring cosmic distances – according to astronomers in Denmark and Australia. AGNs are some of the brightest objects in the visible universe, and the technique could allow astronomers to determine much larger distances than is possible with current methods, the scientists say.

Standard candles are distant objects with known brightness that give astronomers a very accurate measure of cosmic distances – the dimmer the candle appears to us, the farther away it must be. Studying these candles is crucial to our understanding of the age and energy density of the universe. Indeed, the use of supernovae and Cepheids as standard candles turned our understanding of the cosmos on its head through the discovery of the acceleration of the expansion of the universe and the introduction of dark energy.

‘Reverberation mapping’

However, reliable measurements of distances at redshifts greater than about 1.7 are beyond the current capabilities of known standard candles. Now, Darach Watson and colleagues at the University of Copenhagen and the University of Queensland have shown that a tight relationship between the luminosity of an AGN and the radius of its “broad-line region” can be used to measure cosmic distances. The radius is found using “reverberation mapping”, an established technique for studying the inner structure of AGNs and gauging the mass of their central black holes. However, until this latest work, the method had not been considered in the search for new standard candles.

According to Copenhagen astronomer Kelly Denney, the approach works using type-1 AGNs – those with broad-line emissions in the visible spectrum. These objects have a dense area of gas and dust surrounding the black hole called the broad-line region. The region is so-called because light emitted by the gas has much broader line widths than light from most other astronomical sources.

Heart of the matter

Much closer to the black hole is the accretion disc, where matter falling into the black hole collects, causing a great deal of light to be produced. As this light travels outwards, it ionizes gas in the broad-line region, causing it to emit light with the distinctive broad line widths: the gas is moving at many thousands of kilometres per second because of the black hole’s gravity, and the Doppler shifts associated with this motion cause the broadening. However, the amount of light produced in the accretion disc is not constant. By carefully comparing the time at which the light is emitted from the accretion disc and the time at which the ionized light is re-emitted from the broad-line region, astronomers can measure a time lag between the light arriving from the two sources. This delay is proportional to the radius of the broad-line region divided by the speed of light, and that radius correlates tightly with the luminosity of the AGN. Comparing the luminosity with the apparent brightness measured at Earth then yields the distance via the inverse-square law.
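
Put together, the chain of reasoning is short enough to sketch in code. The Python below is a toy illustration of the logic only – the calibration constant, time lag and flux are invented numbers, not values from Watson and colleagues’ paper:

    import math

    C = 2.998e8  # speed of light, m/s

    def broad_line_radius(lag_days):
        """Reverberation mapping: radius of the broad-line region, R = c * tau."""
        return C * lag_days * 86400.0

    def luminosity_from_radius(radius_m, k, alpha=0.5):
        """Invert the radius-luminosity relation R = k * L**alpha.
        k and alpha must be calibrated on AGNs at known distances;
        alpha of roughly 0.5 is the observed scaling."""
        return (radius_m / k) ** (1.0 / alpha)

    def luminosity_distance(luminosity_w, flux_w_per_m2):
        """Inverse-square law, F = L / (4 pi d^2), solved for distance d."""
        return math.sqrt(luminosity_w / (4.0 * math.pi * flux_w_per_m2))

    tau = 100.0                              # measured time lag in days (illustrative)
    R = broad_line_radius(tau)
    L = luminosity_from_radius(R, k=1.0e-3)  # k here is a made-up calibration
    d = luminosity_distance(L, flux_w_per_m2=1.0e-15)
    print(f"R = {R:.2e} m, L = {L:.2e} W, d = {d:.2e} m")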

The technique, however, is difficult and it wasn’t until 2009 that Denney – then working with Bradley Peterson’s group at Ohio State University – vastly improved the accuracy of the data from the radius-luminosity relationship such that it would allow a precise distance to be calculated. When Darach Watson came across the result, he wondered why this was not being used as a distance indicator already. “The simple answer was ‘Huh, well, I don’t know!’ Everyone in the AGN community typically wants to know why no-one has thought of this before!” said Denney.

Candle in the wind

To confirm the technique’s ability to give the distance of an AGN, Watson and colleagues looked at a sample of 38 AGNs at known distances. They found that reverberation mapping gave a reasonable estimate of the distance to the AGNs. Denney quipped, “This almost makes the notion of AGNs as standard candles an oxymoron, since it’s their variability that makes the method work!”

Currently, the AGN technique is not as reliable as those based on Cepheids or supernovae. However, unlike a supernova – which lasts for a relatively short time – an AGN can be observed over long periods, reducing observational uncertainties. Also, AGNs exist at all redshifts, so astronomers can pick and choose which ones to study.

In the coming months, the researchers aim to reduce the scatter in their current data and work on higher redshift reverberation mapping experiments. “One drawback of the method is that, due to time-dilation effects, the monitoring time required to measure time delays can become very long, especially for high-redshift sources. We are investigating ways to reduce this time, such as working in the UV, where the time delays are shorter,” says Denney.

A preprint of a paper about the work is available on arXiv.

Between the lines

Exoplanet extravaganza

In March 2009 an article in Physics World (“Brave new worlds”) noted that scientists had discovered more than 300 planets outside our own solar system. By the end of that year, the number of confirmed exoplanets had exceeded 400; as this review is written, the tally stands at 669; by the time you read it, the number will be higher still. Such rapid progress means that any book about exoplanets will quickly become out of date, but the shelf-life of Strange New Worlds should be longer than most.

The main reason is that its author, Ray Jayawardhana, is an exoplanet insider. In the late 1990s he was among the first scientists to image dusty protoplanetary discs around young, far-off stars; more recently, as an astronomer at the University of Toronto, Canada, he has written extensively about the field for a general-science readership. Jayawardhana puts all this to good use, sprinkling his book with accounts of conversations with other astronomers. In the wrong hands, this could degenerate into a clumsy citation-fest, but Jayawardhana gets the balance right: readers learn about a few of the people behind the research, but are not distracted by a new name every other sentence.

It is worth remembering that the field was once considered a graveyard for promising astronomy careers. One cautionary example in the book is the US astronomer Thomas Jefferson Jackson See, who suggested in 1895 that he had discovered a planet in the 70 Ophiuchi binary system. When two others published work contradicting his claim, he wrote such a vitriolic letter of complaint to the Astrophysical Journal that its editor permanently banned him from its pages. He went on to have a nervous breakdown and finished his career as a relentless critic of Einstein’s theory of relativity.

Thankfully, the current generation of exoplanet astronomers has fared better; the field underwent a complete turnabout in the mid-1990s, when a flurry of confirmed planets transformed its reputation. Strange New Worlds offers an excellent introduction to these successes, as well as insights into the field’s future.

  • 2011 Princeton University Press £16.95/$24.95hb 288pp

Mathematical thoughts

Every weekday morning, the flagship news programme Today on BBC Radio 4 includes a brief segment called “Thought for the Day”, in which religious leaders from different faiths deliver mini-sermons on current events. In recent years, some secularists have demanded that the programme offer an equivalent slot to non-religious commentators. So far, the Beeb has not agreed, but should it ever do so, Göran Grimvall’s Quantify!: a Crash Course in Smart Thinking would make an excellent source for these “secular sermons”.

The book is divided into about 80 short essays, loosely grouped by theme and mathematical content, and there is much fascinating material here – including an explanation of why Ohm’s law is not a law at all, and why holding the Olympics at high altitude helps triple-jumpers more than shot-putters. An emeritus professor of physics at Sweden’s Royal Institute of Technology, Grimvall is also a good writer, and his clear, gentle prose renders his book an almost effortless read.

Unfortunately, the same brevity that makes his essays easy to digest also gives them an annoying tendency to end just when they seem about to shift up a gear. Grimvall’s discourse on exponentials and doubling, for example, includes a topical reference to pyramid schemes and the disgraced financier Bernard Madoff. Rather than digging into the mathematics behind Madoff’s con, however, he merely observes that “pyramid schemes are illegal in many countries, but the case of [Madoff] shows that people may never learn”.

With its often frustrating lack of depth, the book actually shares one of the faults that religious critics have ascribed to “Thought for the Day”: namely, that the sermons are too short and innocuous to make much of an impact. The comparison is a harsh one, for there is much to like about Quantify! Still, we cannot help wishing that Grimvall had chosen fewer topics, and probed them a bit deeper.

  • 2010 Johns Hopkins University Press £13.00/$25.00pb 232pp