
Spot the difference

Can you tell the difference? (credit: US/LHC blog)

By Michael Banks

At first glance it looks like an average webpage from the arXiv preprint server – a website where researchers upload their papers before publishing them in a scientific journal.

But with article authors including “C H Fermi”, “S C Boltzmann” or “L Heisenberg”, you might well be suspicious about whether it is authentic.

The website, called snarXiv, was created by David Simmons-Duffin, a PhD student in high-energy physics at Harvard University. It randomly generates titles and abstracts in high-energy physics, taking into account the latest trends in the subject, and presents them in exactly the same format as the arXiv server.

Simmons-Duffin writes on his blog that he does not remember exactly why he decided to set up the website. However, he claims that it does serve some purpose.

For example, Simmons-Duffin notes that if you are a graduate student you can “gloomily read through the abstracts, thinking to yourself that you do not understand papers on the real arXiv any better”. And if you are a post-doc then you can keep reloading the webpage “until you find something to work on”.

Simmons-Duffin has even made a game where you have to spot the real arXiv title from the randomly generated snarXiv one.

Try it for yourself. I managed to get 5 out of 8 correct, which ranked me rather unkindly as an “undergraduate”. (Other ranks include “better than a monkey” or “worse than a monkey” and it seems the top rank is “Nobel prizewinner”.)

A liberal sprinkling of quantum dots

Researchers at Rice University claim to have found a way to add electronic and optical elements to the “wonder material” graphene without sacrificing its mechanical properties. They have devised a technique for patterning graphane – the hydrogenated form of graphene – with quantum dots, an advance that promises many novel applications.

Graphene consists of 2D sheets of carbon just one atom thick arranged in a honeycomb lattice. Many scientists believe that the material could replace silicon as the electronic material of choice in the future thanks to its unique electronic and mechanical properties, which would allow smaller devices to be made. Graphane is formed by simply adding hydrogen atoms to both sides of the graphene lattice. This material is an insulator, whereas graphene behaves like a metal.

Boris Yakobson and colleagues have found that removing hydrogen atoms from 2D sheets of graphane opens up tiny islands of pure graphene that behave like quantum dots. Quantum dots are nanosized pieces of semiconductor in which electrons (or holes) are confined in 3D, and their electronic properties can be controlled by changing the size of the dots. Quantum dots could be used to make nanoelectronic and optical circuits and devices, such as chemical sensors, solar cells and even semiconducting lasers, because they interact with light and magnetic fields in unique ways.

Phase transition

This phase transformation from graphene to graphane, accompanied by the change from metal to insulator, offers exciting opportunities for nanoengineers, says Yakobson. “If experimentally feasible, we can move from the labs to hopefully industrial tests and produce very small stable circuits and devices,” he told physicsworld.com. He says that the new technique has clear advantages over existing methods of creating graphene-based electronic devices, such as cutting nanoribbons and then re-assembling them, which can be a very intricate procedure.

The researchers also discovered that when they removed pieces of the hydrogen sub-lattice, the area left behind tended to be hexagonal in shape with a sharp interface between the graphene and graphane. This means that each dot is self-contained and that little charge leaks across from the graphene quantum dots into the graphane host material. Although the Rice scientists do not yet know how to physically create arrays of quantum dots in sheets of graphane, they believe the obstacle “shouldn’t be insurmountable”.

Before this can happen, though, the team says it needs to better understand distortions at the graphane–graphene interface, including whether frustration – disordered hydrogen atoms – is likely to occur there and how rough the border lines might be. The researchers also need to find out whether the interface remains robust when placed on a supporting substrate, such as silicon.

The current work was reported in ACS Nano.

Giant hole opens in Guatemala

Courtesy: Guatemalan Government

By James Dacey

For me it is the sheer precision that is so astonishing.

This image shows a 60 m “sinkhole” that opened up on Sunday in Guatemala City, a result of tropical storm Agatha, which has been battering Central America.

The phenomenon is a common feature of karst landscapes, which are found on every continent except Antarctica. The bedrock in these zones is usually formed of carbonates such as limestone, which are highly prone to chemical weathering and dissolution. Sinkholes can result when underground cavities can no longer support the overlying sediment, and they can be triggered by even a small amount of rainfall.

The opening of this sinkhole is not reported to have killed anyone, unlike a separate hole in the same area that killed three people back in 2007.

Double celebration for neutrino lab

Particle physicists at the Gran Sasso laboratory in Italy have two reasons to celebrate. One is the first ever detection, by the OPERA experiment, of a neutrino that has mutated from another kind of neutrino as it travelled through space. The second achievement is the start-up of the ICARUS detector, which, like OPERA, will study neutrinos that have “oscillated” on their journey from the CERN laboratory outside Geneva in Switzerland.

Neutrinos are chargeless fundamental particles that come in three varieties, or “flavours” – electron, muon and tau. In the 1950s the Italian physicist Bruno Pontecorvo predicted that neutrinos should change, or oscillate, from one flavour to another as they travel through space, a property that would imply neutrinos have mass – in contradiction with the basic formulation of the Standard Model of particle physics. This idea was subsequently supported by experiments that found the Sun to be producing fewer electron neutrinos than had been expected, and by later experiments that detected a shortfall in muon neutrinos produced by cosmic rays interacting in the Earth’s atmosphere.

In these experiments, the phenomenon of oscillation is only inferred indirectly. A reduction in the number of neutrinos from their source to their detection is taken to mean that some of these particles have oscillated to a different flavour of neutrino that cannot be picked up by the detector.

A different approach

OPERA and ICARUS, which are located in the laboratory of the Italian National Institute for Nuclear Physics some 1400 m under the surface of the Gran Sasso mountain in central Italy, take a different approach. Both detectors are designed to make a positive sighting of the tau neutrinos that theory predicts will result from the oscillation of some of the muon neutrinos contained within a beam that is produced at CERN and fired 730 km through the Earth to Gran Sasso. Although physicists believe that the different disappearance measurements, taken together, constitute very strong evidence for neutrino oscillation, the tau sightings would rule out the slim possibility that the disappearing muon neutrinos are instead decaying or disappearing off into higher dimensions.

The 1250 tonne OPERA instrument, built by a collaboration of around 170 physicists from 12 countries, detects neutrinos using 150,000 “bricks”, each consisting of many alternating layers of lead and films of nuclear emulsion. These bricks record the tracks of the decay products that result from the interaction of neutrinos with the lead nuclei, with each type of neutrino generating tracks of a distinctive shape. Tau neutrinos produce a charged particle known as a tau lepton, which then decays into a muon, hadron or electron, generating a very short track with a distinctive kink in it.

Weak interaction

The fact that neutrinos interact extremely weakly with normal matter means that only a tiny fraction of the billions of neutrinos in the CERN beam that pass through OPERA every second will leave their mark in the detector. Since it started up in 2006, the experiment has detected a few thousand muon neutrinos, but it was not until 22 August last year that it detected its first tau neutrino – such is the delicacy of the measurement. The researchers say with a confidence of 98% that their signal is due to a tau neutrino. According to OPERA spokesman Antonio Ereditato of the University of Bern in Switzerland, the unambiguous observation of a tau neutrino will require several more events like the one recorded so far. “This might require a few more years,” he says, “but physicists are patient”.

The other news at the Gran Sasso is the opening of another neutrino detector, ICARUS, which uses quite a different detection technique. First proposed in 1977 by Carlo Rubbia, who would go on to share the Nobel prize in 1984 for the discovery of W and Z bosons, this involves filling a tank with a large amount of liquid argon, lining the walls of the tank with planes of wires and then setting up a large potential difference across the tank. Any charged particles passing through the tank create pairs of positively charged ions and electrons as they travel, with those electrons that do not recombine then drifting towards the wire planes where they register a signal. The spatial sequence of signals recreates the path of the charged particles and reveals whether or not those particles were produced by a tau neutrino. ICARUS recorded its first events on 27 May.

You’re so predictable

When it comes to the actions of our fellow humans, the sequence of events we witness on a daily basis appears to be just as mysterious and confusing as the motion of the stars seemed in the 15th century. At other times, although we are free to make our own decisions, much of our life seems to be on autopilot. Our society goes from times of plenty to times of want, from war to peace and back to war again. It makes one wonder whether humans follow hidden laws – laws other than those of their own making. Are our actions governed by rules and mechanisms that might, in their simplicity, match the predictive power of Newton’s law of gravitation? Heaven forbid, might we go as far as to predict human behaviour?

Until recently we had only one answer to each of these questions: we do not know. As a result, today we know more about Jupiter than we do about the behaviour of the guy who lives next door to us. But we now have access to numerical records of human behaviour that we can use to test models. Just about everything we do leaves digital breadcrumbs in some database, be it e-mails or the times of our phone conversations. The existence of these records raises huge issues of privacy, but it also creates a historic opportunity. It offers unparalleled detail on the behaviour of not one, but millions of individuals.

In the past, if you wanted to understand what humans do and why they do it, you had to become a card-carrying psychologist. Today, you may want to obtain a degree in physics or computer science first: through numerical analysis of data, scientists have found that many aspects of human behaviour follow simple, reproducible patterns governed by wide-reaching laws. Forget dice-rolling or boxes of chocolates as metaphors for life. Think of yourself as a dreaming robot on autopilot, and you will be much closer to the truth.

So you think you’re random?

My work on this topic really took off in the spring of 2004 when I was kindly given access to a substantial digital database with which I could test some models. It was an anonymous record of the e-mails sent by thousands of university students, faculty and administrators. If their activity patterns were random, the time between consecutive e-mails sent by the same individual would fit a Poisson process, which is nothing more than a sequence of truly random events. But it turned out that nobody’s e-mails followed a random, coin-flip-driven Poisson process. Instead, each user’s e-mail pattern was “bursty” – a thunder of e-mails followed by long periods of silence. Such deviations from a purely random pattern offer evidence of a deeper law or pattern that remains to be discovered.

At first sight, we would not expect our e-mail patterns to show any similarities. Some people send only a few e-mails a week; others close to a hundred each day; some peek at their e-mail only once a day. Still others practically sleep with their computers. This is why it was surprising that, when it comes to e-mail, everybody appears to follow exactly the same pattern. Indeed, looking at the times between e-mails, no-one obeyed a Poisson distribution. Instead, no matter the person, their behaviour followed what we call a “power law”.

Once power laws are present, bursts are unavoidable. Indeed, a power law predicts that most e-mails are sent within a few minutes of one another, appearing as a burst of activity in our e-mailing pattern. But the power law also foresees hours or even days of e-mail silence. In the end, the patterns of our e-mailing follow an inner harmony, where short and long delays mix into a precise law – a law that you probably never suspected you were subject to, that you never made an effort to obey, and that you most likely never even knew existed in the first place.
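To make the distinction concrete, here is a minimal sketch (my own illustration, not the analysis used in the study) that draws simulated inter-e-mail gaps from an exponential distribution – the signature of a Poisson process – and from a heavy-tailed power law, with parameter values chosen only for demonstration. Even though the power-law gaps are typically much shorter, they still include day-long silences, something the Poisson gaps essentially never produce.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000        # number of simulated inter-e-mail gaps (arbitrary)
mean_gap = 30.0    # mean gap of the Poisson process, in minutes (arbitrary)

# Poisson process: the gaps between consecutive events are exponentially distributed
poisson_gaps = rng.exponential(mean_gap, n)

# "Bursty" alternative: gaps drawn from a heavy-tailed Pareto (power-law) distribution
# with a minimum gap of one minute and tail index 1.5
powerlaw_gaps = rng.pareto(1.5, n) + 1.0

for name, gaps in (("Poisson", poisson_gaps), ("power law", powerlaw_gaps)):
    bursts = np.mean(gaps < 2.0)          # fraction of gaps shorter than two minutes
    silences = np.mean(gaps > 24 * 60)    # fraction of gaps longer than a day
    print(f"{name:10s} short gaps: {bursts:.3f}   day-long gaps: {silences:.1e}")
```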

By mid-2004 my colleagues and I had observed a series of puzzling similarities between events of quite different natures, seeing bursts and power laws each time we monitored human behaviour. There were unexplained similarities between patterns of e-mail, Web browsing and sending jobs to a printer that demanded an explanation. For the rest of the summer I kept telling myself that there had to be a simple explanation to all of this. But my relentless probing produced nothing.

On the evening of 2 July 2004 I went to bed early, knowing that I would be getting up before dawn the next morning to travel to a conference in Bangalore. Yet, the excitement of my first trip to India kept me awake. And in that precarious twilight zone, not yet asleep but not really alert, I was suddenly struck by a simple explanation for the omnipresent bursts.

Setting priorities

The next day I returned to the musings of the night before. My twilight-zone idea had a simple premise: we always have a number of things to do. Some people use to-do lists to keep track of their responsibilities, while others are perfectly comfortable keeping them in their heads. But no matter how you track your tasks, you always need to decide which one to execute next. The question is, how do we do that?

One possibility is to always focus on the task that arrived first on your list. Waitresses, pizza-delivery boys, call-centre operators – just about everybody in the service industry practises this first-in-first-out strategy. Most of us would feel a deep sense of injustice if our bank, doctor or supermarket gave priority to the customer who arrived after us. Another possibility is to do things in their order of importance. In other words, to prioritize.

The idea that I had that night in July was deceptively simple: burstiness may be rooted in the process of setting priorities. Consider, for example, Izabella, who has six tasks on her priority list. She selects the one with the highest priority and resolves it. At that point, she may remember another task and add that to her list. During the day she may repeat this process over and over again, always focusing on the task of highest priority first and replacing it with some other job once it is resolved. The question I want to answer is this: if one of the tasks on Izabella’s list is to return your call, how long will you have to wait for your phone to ring?

If Izabella chooses the first-in-first-out protocol, then you will have to wait until she performs all of the tasks that cropped up before you. At least you know that you will be treated fairly – all the other items on her list will wait for roughly the same amount of time.

But if Izabella picks the tasks in order of importance, fairness is suddenly obsolete. If she assigns your message a high priority, then your phone will ring shortly. If, however, Izabella decides that returning your call is not at the top of her list, then you will have to wait until she resolves all tasks of greater urgency. As high-priority tasks could be added to her list at any time, you may well have to wait another day before hearing from her. Or a week. Or she may never call you back.

Once I put this priority model into a computer program, to my pleasant surprise the much-desired power law – the mathematical signature of bursts – appeared on my screen. The model consisted of a list of tasks, each randomly assigned a priority. Then I repeated the following steps over and over: (a) I selected the highest priority task and removed it from the list, mimicking the real habit I have when I execute a task; (b) I replaced the executed task with a new one, randomly assigning it a priority, mimicking the fact that I do not know the importance of the next task that lands on my list. The question I asked was, how long will a task stay on my list before it is executed?

As high-priority tasks are promptly resolved, the list becomes largely populated with low-priority tasks. This means that new tasks often supersede the many low-priority tasks stuck at the bottom of the list and so are executed immediately. Therefore, tasks with low priority are in for a long wait. After I measured how long each task waited on the list before being executed, I found the power law observed earlier in each of the e-mail, Web-browsing and printing data sets. The model’s message was simple: if we set priorities, our response times become rather uneven, which means that most tasks are promptly executed while a few have to wait almost forever on our list. I once saw a New Yorker cartoon that captured this sentiment: a businessman checks his diary and calmly says into his telephone “No, Thursday’s out. How about never – is never good for you?”.
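A minimal simulation of the priority model might look like the following sketch (my own illustrative code, not the original program; the six-task list length simply echoes the Izabella example, and the number of steps is arbitrary). Most tasks are executed almost immediately, while a few languish for a very long time – the heavy tail behind the power law.

```python
import math
import random
from collections import Counter

random.seed(1)
LIST_LENGTH = 6      # size of the to-do list (echoes Izabella's six tasks)
STEPS = 1_000_000    # number of tasks to execute (arbitrary)

# each task is a (priority, time_it_was_added) pair
tasks = [(random.random(), 0) for _ in range(LIST_LENGTH)]
waits = Counter()

for now in range(1, STEPS + 1):
    # (a) execute the highest-priority task and record how long it waited
    i = max(range(LIST_LENGTH), key=lambda k: tasks[k][0])
    _, added = tasks[i]
    waits[now - added] += 1
    # (b) replace it with a new task of random priority
    tasks[i] = (random.random(), now)

# crude log-binned histogram of waiting times: most tasks are executed within a
# step or two, while a few wait for thousands of steps
bins = Counter()
for wait, count in waits.items():
    bins[int(math.log10(wait))] += count
for decade in sorted(bins):
    print(f"waits of ~10^{decade} steps: {bins[decade]}")
```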

Snail mail versus e-mail

After the publication of the priority model, I began to wonder whether the bursty pattern is a by-product of the electronic age or whether it perhaps reveals some deeper truth about human activity. All the examples we had studied so far – from e-mail to Web browsing – were somehow connected to the computer, raising the logical question of whether bursts preceded e-mail.

I soon realized that the letter-based correspondence of famous intellectuals, carefully collected by their devoted disciples, might hold the answer to this question. An online search pointed me towards the Albert Einstein Archives, a project based at the Hebrew University of Jerusalem that seeks to catalogue Einstein’s entire correspondence. Einstein left behind about 14,500 letters he had written and more than 16,000 he had received. This averages out to more than one letter written per day, weekends included, over the course of his adult life. Impressive though this is, it was not the volume of his correspondence that piqued my interest. In the spirit of the priority model, I wanted to find out how long Einstein waited before he responded to the letters he received.

It was João Gama Oliveira, a Portuguese physics student visiting my research group on fellowship, who first studied the data. His analysis showed that Einstein’s response pattern was not too different from our e-mail patterns: he replied to most letters immediately – that is, within one or two days. Some letters, however, waited months, sometimes years, on his desk before he took the time to pen a response. Astonishingly, Oliveira’s results indicated that the distribution of Einstein’s response times followed a power law, similar to the response times we had observed earlier for e-mail.

But it was not just Einstein’s correspondence that followed the pattern. From the Darwin Correspondence Project, hosted by the University of Cambridge in the UK, we obtained a full record of Charles Darwin’s letters. Given that the meticulous Darwin kept copies of every letter he either wrote or received, his record was particularly accurate. Its analysis indicated that he, too, responded immediately to most letters and only delayed addressing a very few. Overall, Darwin’s response times followed precisely the same power law as Einstein’s.

The fact that the records of two intellectuals of different generations (Einstein was born three years before Darwin’s death) living in different countries follow the same law implied that we were not looking at the idiosyncrasies of a particular person but at the basic pattern of pre-electronic communication. It also meant that it is completely irrelevant whether our messages travel on the Internet at the speed of light or are carried slowly across the ocean by steam engine. What matters is that, regardless of the era, we always face a shortage of time. We are forced to set priorities – even the greats, Einstein and Darwin, are not exempt – from which delays, bursts and power laws are bound to emerge.

Yet a peculiar difference remains between e-mail and letter-based correspondence: the exponent, the key parameter that characterizes any power law, is different for the two data sets. It turns out that in the power law P(τ) ~ τ^–δ, which describes the probability P(τ) that a message waits τ days for a response, the exponent is δ = 1 for e-mail and δ = 3/2 for both Einstein’s and Darwin’s correspondence (see figure 1). This difference means that there are fewer long delays in e-mail correspondence than in letter writing – not a particularly surprising finding given the immediacy we often associate with electronic communication.

The truth, however, is that the difference cannot be attributed to the different times it takes for letters and e-mails to be delivered. Research over many decades has told us that the exponent characterizing a power law cannot have arbitrary values but is uniquely linked to the mechanism behind the underlying process. That is, if a power law describes two phenomena but the exponents are different, then there must be some fundamental difference between the mechanisms governing the two systems. Therefore, the discrepancy meant that a new model was needed if we hoped to account for the letter-writing patterns of Einstein and Darwin.

A stack of letters

In our priority model, we assumed that as soon as the task of highest priority was resolved, a new task of random priority took its place. To derive an accurate model of Einstein’s correspondence, we needed to modify the model to incorporate the peculiarities of letter-based communication. With snail mail, each day a certain number of letters arrives by post, joining the pile of letters already waiting for a reply. And so, whenever time permitted, Einstein chose from the pile those letters he considered most important and replied to them, keeping the rest for another day. A model of Einstein’s correspondence therefore has two simple ingredients. The first is the rate at which letters landed on Einstein’s desk, each assigned some priority – call it the arrival rate. (These letters increased the length of his queue, which differs from our previous e-mail model, where the queue had a fixed length.) The second ingredient is the rate at which he picked the highest-priority letter from the pile and responded to it – the response rate.

If Einstein’s response rate was faster than the arrival rate of the letters, then his desk was mostly clear, as he was able to reply to most letters as soon as they arrived. In this “subcritical” regime, the model indicates that Einstein’s response times follow an exponential distribution, devoid of long delays and clearly not in accordance with the observed power law.

If, however, Einstein responded at a slower pace than the rate at which the letters arrived, then the pile on his desk towered higher with each passing day. Interestingly, it is only in this “supercritical” regime that the response times follow a power law. Thus, burstiness was a sign that Einstein was overwhelmed and forced to ignore an increasing fraction of the letters he received.
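The following sketch shows one way to simulate such a model (again my own illustration, with made-up arrival and response rates and a short simulated period): each day a random number of letters joins the pile, and a random number of replies is written, always to the highest-priority letter waiting. Swapping the two rates switches the model between the subcritical and supercritical regimes described above.

```python
import numpy as np

def simulate(arrival_rate, response_rate, days=5_000, seed=0):
    """Toy letter-pile model: returns the delays (in days) of answered letters
    and the number of letters left unanswered at the end."""
    rng = np.random.default_rng(seed)
    pile = []        # each entry is (priority, day_the_letter_arrived)
    delays = []
    for day in range(days):
        # a random number of letters arrives each day, each with a random priority
        for _ in range(rng.poisson(arrival_rate)):
            pile.append((rng.random(), day))
        # a random number of replies is written, always to the highest-priority letter
        for _ in range(rng.poisson(response_rate)):
            if not pile:
                break
            i = max(range(len(pile)), key=lambda k: pile[k][0])
            _, arrived = pile.pop(i)
            delays.append(day - arrived)
    return np.array(delays), len(pile)

# subcritical: replies keep pace with arrivals, the pile stays short, delays stay modest
sub, left_sub = simulate(arrival_rate=1.0, response_rate=1.5)
# supercritical: arrivals outpace replies, the pile grows and very long delays appear
sup, left_sup = simulate(arrival_rate=1.5, response_rate=1.0)

for label, d, left in (("subcritical", sub, left_sub), ("supercritical", sup, left_sup)):
    print(f"{label:13s} median delay: {np.median(d):4.0f} days   "
          f"99th percentile: {np.percentile(d, 99):6.0f} days   unanswered: {left}")
```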

Einstein and Kaluza

The phenomenon of burstiness, and its consequences, is clearly illustrated by Einstein’s correspondence with the theorist Theodor Kaluza. In the spring of 1919 Einstein received a letter from Kaluza, then unknown, who was still labouring to repeat the creative burst he had enjoyed in 1908. Back then, as a student of David Hilbert and Hermann Minkowski, he had written his first and only research paper. Now, a decade later and at the age of 34, he was still on the lowest rung of the academic ladder at the University of Königsberg, Germany, barely supporting his wife and child on a practically non-existent salary. When he finally finished his second paper, a rush of boldness prompted him to send it to Einstein, eliciting this encouraging reply on 21 April 1919: “The thought that electric fields are truncated…has often preoccupied me as well. But the idea of achieving this with a five-dimensional cylindrical world never occurred to me and may well be altogether new.”

Kaluza’s letter to the scientific great was more than a mere courtesy – he was asking for Einstein’s help to publish the manuscript. In those days, famous scientists like Einstein were the gatekeepers to the better scientific journals. If Einstein found the paper of interest, then he could present it at the Berlin Academy’s meeting, after which it could be published in the academy’s proceedings. To Kaluza’s joy, Einstein was willing.

Then, a week later, on 28 April, Einstein wrote a second letter to Kaluza. While encouraging, Einstein remained cautious. He would open the academy’s doors for Kaluza on one condition: “I could present a shortened version before the academy only when the above question of the geodesic lines is cleared up. You cannot hold this against me, for when I present the paper, I attach my name to it.”

Imagine the feelings elicited in Kaluza after having received two letters in as many weeks from the man who was already considered the most influential physicist alive. Both letters were encouraging, and the very fact that Einstein had bothered to write twice indicated that he was genuinely taken with the unknown physicist’s idea. But the letters were a mixed blessing and eventually prevented the paper’s publication for years.

In his response of 1 May 1919, Kaluza was quick to dispel Einstein’s concerns, prompting a further letter from Einstein on 5 May: “Dear Colleague, I am very willing to present an excerpt of your paper before the Academy for the Sitzungsberichte. Also, I would like to advise you to publish the manuscript sent to me in a journal as well, for ex[ample] in the Mathematische Zeitschrift or in the Annalen der Physik. I shall be glad to submit it in your name whenever you wish and write a few words of recommendation for it.”

What caused Einstein to change his mind so suddenly? His letter offers a hint: “I now believe that, from the point of view of realistic experiments, your theory has nothing to fear.” Kaluza could not have hoped for a better outcome. Einstein, well known for his neverending quest to confront all mathematical developments with reality, had accepted his conclusion that our world is five-dimensional.

Heartened by Einstein’s encouragement, Kaluza quickly made the requested changes and mailed back a shorter version of the paper, appropriate for presentation at the academy. His case looked really good now – he had received four letters in less than four weeks, indicating that the famous physicist had assigned him an unusually high priority. But then, in a letter dated 14 May 1919, Einstein unexpectedly gave him the cold shoulder. “Highly esteemed Colleague,” he wrote, “I have received your manuscript for the academy. Now, however, upon more careful reflection about the consequences of your interpretation, I did hit upon another difficulty, which I have been unable to resolve.” In a four-point derivation, Einstein proceeded to detail his concerns, concluding that “Perhaps you will find a way out of this. In any case, I am waiting on the submission of your paper until we have come to some resolution about this point.”

Kaluza made one final attempt to persuade Einstein of the validity of his approach, even daring to point out an error in Einstein’s arguments. This was met by a decisive reply from Einstein on 29 May 1919 in which he courteously told Kaluza that while he could not support his ideas due to continued reservations, he would gladly put in a good word should he wish to publish his findings so far. Despite the polite tone, the rejection was clear, and we know of no more exchanges between Einstein and Kaluza either that year or the next – and not because Kaluza’s paper was published. On the contrary, Einstein’s reservations sent an unmistakable message to the young scientist: the fifth dimension was a blunder, either premature or a dead end not worth further attention. After a furious burst of communication had ricocheted between the two men for a full month, a years-long silence followed.

Priority’s consequences

On 22 September 1919, four months after sending his last letter to Kaluza, Einstein shot to fame: the theory of general relativity that he had proposed back in 1915 was finally confirmed by Arthur Stanley Eddington’s observation that light is bent as it passes by the Sun. Within days, Einstein’s name was on the front page of newspapers and magazines all over the world, and the Einstein myth was born. He turned into a media superstar and an icon.

Einstein’s sudden fame had drastic consequences for his correspondence. In 1919 he received 252 letters and wrote 239, his life still in its subcritical phase that allowed him to reply to most letters with little delay. The next year he wrote many more letters than in any previous year. Yet of the flood of 519 letters he received, we have a record of him managing to respond to only 331 – a pace that, though formidable, was insufficient to keep on top of his vast correspondence. By 1920 Einstein had moved into the supercritical regime, and he never recovered. The peak came in 1953, two years before his death, when he received 832 letters and responded to 476 of them. As Einstein’s correspondence exploded, his scientific output shrank. He became overwhelmed, burdened by delays. And with that his response times turned bursty and began to follow a power law, just as our e-mail correspondence does today.

Despite his brief correspondence with Einstein, Kaluza’s life improved little in the years that followed. He continued to work in academia but was unable to find a tenured position given his lack of publications. Then on 14 October 1921 he suddenly received a surprising postcard from Einstein: “Highly Esteemed Dr Kaluza, I have second thoughts about having you held back from publishing your idea about the unification of gravitation and electricity two years ago. Your approach certainly appears to have much more to offer than [Hermann] Weyl’s. If you wish, I will present your paper to the academy.” And he did, on 21 December 1921, two-and-a-half years after first learning of Kaluza’s idea.

Why this sudden reversal? Had Einstein simply been distracted by his triumph, forgetting for years about Kaluza’s extra dimension? No. The truth is that between 1919 and 1921 Einstein focused on pursuing other ideas to which he had assigned higher priorities. He had been furiously searching for a way to codify his own version of the “theory of everything”, following a direction originally proposed by Weyl. It was not until October 1921, when Einstein lost hope of success along those lines, that he returned to Kaluza’s still-unpublished paper and came to an embarrassing conclusion: he could not continue blocking the publication of Kaluza’s proposal while attempting to write his own paper inspired by it.

By the time Kaluza’s paper was eventually published, it was too late for its author. Discouraged by Einstein’s rejection, Kaluza had left physics and started anew in mathematics. But the professional switch eventually paid off – in 1929 he was offered a mathematics professorship at Kiel University and in 1935 became professor at Göttingen, one of the most prestigious universities of the time. Eventually, Kaluza’s multidimensional universe was revived in the 1980s and became the foundation for string theory, whose proponents have no fear of five-, 11- or many-more-dimensional spaces.

Sadly, Kaluza did not live to see the renaissance of his work, as he died in 1954. Might he have turned into one of the physics greats if Einstein had allowed him to publish his breakthrough early on? We will never know. But one thing is clear from Kaluza and Einstein’s brief encounter. Prioritizing is not without its consequences and it led to the demise of a young physicist’s career when his theories went ignored by the very man who could have got them published.

Discovering dark matter

The discovery of dark matter – the mysterious, invisible substance believed to make up more than 80% of the matter in the universe – would be a key moment in 21st-century physics. Hardly surprising, then, that so much attention was given to a paper written last year by the members of the Cryogenic Dark Matter Search (CDMS-II) detailing their evidence for dark matter (arXiv:0912.3592v1). The CDMS-II collaboration is looking for evidence of collisions between weakly interacting massive particles (or WIMPs) – electrically neutral particles that are a leading candidate for dark matter – and nuclei of germanium in a detector in a mine in Soudan, Minnesota. The detector is located 700 m underground to minimize background noise from neutrons produced in cosmic-ray collisions, which can mimic real WIMP signals.

The CDMS-II collaboration strongly promoted the paper, which it submitted to arXiv on 18 December 2009. Five days before, the group circulated an e-mail flagging the upcoming paper and announcing a pair of talks that it had scheduled for 17 December. The talks – one at Fermilab and the other at the SLAC National Accelerator Laboratory – were arranged to start simultaneously. One was broadcast live over the Internet. Given the unusual lengths to which the CDMS-II collaboration was going to create a record of who said what and when, it was – an outside observer might conclude – about to stake a claim for discovering dark matter.

“All the physics blogosphere is abuzz,” reported the physics blog Cosmic Variance in early December as the big day approached. A film crew making a documentary about dark matter recorded the event, which was also reported by the mainstream media, including the New York Times. “The excitement in the air is palpable,” wrote Cosmic Variance blogger JoAnne Hewett an hour before the seminar started. “It looks like a signal talk,” Hewett’s colleague confided in her as the talk began.

Got sigma?

Not for long. CDMS-II spokesperson Jodi Cooley revealed that the researchers had found only two events, compared with 0.5 expected from background, yielding a confidence level of about 1.3σ, or 21%. Physicists normally expect more – at least 3σ, or 99.73%. “The results cannot be interpreted as significant evidence for WIMP interactions,” Cooley admitted in her talk, “but we cannot reject the possibility that either event is signal.”

The blogosphere crashed. Many felt betrayed. “They should have brought in Geraldo Rivera to open the signal box,” laughed one commenter, referring to the former US talk-show host known for his melodramatic style. Some bloggers suggested that the collaboration had hyped its results to secure funding for a planned upgrade of its detector. Others thought it had done so to stake a discovery claim, given that XENON100 – a more sensitive, xenon-based detector in the Gran Sasso lab in Italy – had already begun to report results.

The scale of the build-up and let-down was itself a “signal” that something unusual was happening. The CDMS-II episode, it seemed to me, could tell us a lot about the use of statistics in science.

“Big deal,” said one physics colleague to whom I excitedly mentioned my idea. “It’s part of the business. We grapple with this kind of thing every day.” Yet to philosophers and historians, “the business” contains interesting features that physicists usually take for granted. What elements shape the role of statistics in discovery? Do researchers in different disciplines seek different levels of confidence in “a discovery”? Does an astronomer, say, want firmer evidence than a psychologist? Does suspicion of experimentalists and their methods sometimes inflate the acceptable confidence level?

The critical point

In January I devoted this column to what I saw as ambiguities in the discovery of dark energy, announcements of which were published by two different groups at slightly different dates in 1997/1998. I used the episode to examine the connection between publication date and discovery, deciding that sometimes “discoveries are not simple, unitary events made by a specific person at a specific place or time”.

This month I would like to follow a suggestion by Adam Riess, a key member of one of the two dark-energy discovery groups. Historians usually discuss credit after a discovery is made, Riess pointed out to me. But thanks to the fact that WIMP search results will roll in with increasing data from different sources in the next few years, historians have a unique opportunity to assess a discovery as it happens, allowing them to test models and assumptions.

Suppose, for instance, that XENON100, SuperCDMS or some other dark-matter search turns up evidence for WIMPs at a confidence level of 2σ, or 95%; will that count as a discovery? What about 3σ (99.73%), or 5σ (99.9999%)? If the latter, will the scientific community consider the findings at lower confidence levels to warrant a partial claim to having seen WIMPs? And on what grounds could it plausibly refuse?
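For readers less familiar with the “sigma” shorthand, the percentages quoted above follow from the normal distribution. The snippet below (a simple check using scipy, with the common two-sided convention) reproduces them.

```python
from scipy.stats import norm

# two-sided confidence level associated with an n-sigma deviation
# of a normally distributed measurement
for n_sigma in (2, 3, 5):
    confidence = norm.cdf(n_sigma) - norm.cdf(-n_sigma)   # P(|x| < n sigma)
    print(f"{n_sigma} sigma  ->  {100 * confidence:.5f}% confidence")

# prints (approximately):
#   2 sigma  ->  95.44997% confidence
#   3 sigma  ->  99.73002% confidence
#   5 sigma  ->  99.99994% confidence
```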

Then flip it around: suppose a 5σ result rules out WIMPs of the kind suggested by the previous findings – will those earlier results then be viewed as statistical flukes, incorrect claims or errors? And if so, on what grounds?

Please send me your thoughts, and I will write a follow-up column about them. If a finding of better than 5σ for WIMPs is finally established, then we will be able to compare our results with the judgment of the scientific community.

Finally, I also welcome your examples of findings that looked like promising discoveries before vanishing as more statistics were acquired – as well as non-findings that grew into findings with more statistics. Such cases have interesting implications about the nature of discovery and about who, ultimately, deserves the credit.

• What result do you think would constitute a “discovery” of dark matter? How then should we view the CDMS-II findings? Do you know of “discoveries” that grew into non-discoveries with more statistics, or vice-versa? Send your responses to Robert P Crease at the e-mail below

• To find out more about the search for dark matter, don’t miss the following exclusive video interviews.

Deep exploits — the search for dark matter

Going underground — life inside the Boulby lab

Of arrows and eternity

Space has three dimensions, and if you do not want to stay at your destination, you can buy a return ticket. Time has but a single dimension, and only one-way travel is allowed: we remember a definite, youthful past but can only imagine possible futures where we grow old and decay. Even though the fundamental laws acting on individual atoms appear to care naught for the direction of time’s axis, macroscopic phenomena most definitely do.

To resolve this conundrum, any physicist will refer you to the second law of thermodynamics, and the concept of increasing entropy. The familiar progression from order to disorder is explained as a consequence of statistics: a game of chance becomes effective certainty when more than a few particles are involved. But is it really that simple? Are all the issues resolved?

The final answers to these questions almost certainly lie in the future, as the subtitle to Sean Carroll’s book From Eternity to Here: The Quest for the Ultimate Theory of Time suggests. However, discussing them in the present can still be an interesting exercise. The ideas that Carroll, a physicist at the California Institute of Technology, puts into play are fascinating in their extent, scope and description. They cover areas as diverse as time in special and general relativity (including the question of time travel); entropy and the arrow of macroscopic time; the psychology of time; and the nature of the beginning and end of time.

Carroll’s main theme is the meaning of time’s arrow as embodied in the second law, and its relationship with cosmology and the origin of the universe. He pays particular attention to the conundrum of time as a one-way trip – an oddity that is all the more perplexing because, Carroll believes, the fundamental laws at a microscopic level are time-reversible. But while this may be true, there are nonetheless hints of time asymmetry in the behaviour of certain fundamental particles. We also inhabit a universe where, apparently, there is a gross asymmetry between matter and antimatter. Until we understand the origin of the matter/antimatter asymmetry – which is critical for our existence – I would hesitate to draw conclusions about the extent of other possible asymmetries.

These are deep waters, but Carroll navigates them successfully, thanks in part to extensive and effective footnotes that enable him to separate more technical remarks from the flow of a readable main text. This is a technique that works well. Were it used more widely in the genre, it could enable physics books to reach a wider readership, by allowing those who want to explore deep ideas to do so without at the same time frightening off more general readers. It did, however, provide me with one of the book’s few minor irritants. Occasionally a footnote was used for some more trite remark, as if the author was embarrassed to have put something meaty in the main text and wanted a jokey aside to sweeten the pill. The comparison between the public’s perceptions of Einstein and Paris Hilton was, I felt, particularly grating.

My other minor quibble concerns physics. Throughout much of the text, entropy is described as if it is a pure number. Yet in the notes and in at least one appearance in the main body, it is described in terms of Boltzmann’s constant, and hence carries dimensions of energy per degree. There was sometimes confusion as to whether entropy or log(W) was being discussed, and to what extent, if any, this mattered. If this was explained, I missed it.

Whatever the precise definition of entropy, if it should turn out that time’s arrow for macroscopic objects is tied to entropic increase, there is an unresolved enigma: why was entropy so small at the Big Bang? This forms one of the more powerful themes in Carroll’s book.

The issue of creation is itself an enigma, though not in the way creationists have argued. Some have claimed – erroneously – that the appearance of life requires a decrease in entropy, and thus implies a violation of the second law. Carroll neatly dismantles such claims by pointing out that if they were true, then refrigerators could not exist. The difference between closed and open systems is critical here, as in so many cases.

This much is well known. What I found intriguing was that Carroll then goes on to examine the entropy problem in a quantitative fashion. The sky, he writes, contains a hot Sun in a cold background – the very epitome of a non-equilibrium situation. For every high-energy photon that arrives here from the Sun, the Earth radiates 20 lower-energy photons into space. This increase in entropy exceeds the local decrease produced by the collective efforts of the biosphere. Nevertheless, if the micro-states of the planet started from utter disorder, the entire biomass could still be converted into a state of high order by such processes. And how long would this require? As far as the second law is concerned, Carroll claims, a year would be enough. Ironically, it seems that the creationists have aimed at the wrong target: physics, far from being inconsistent with biblical accounts of human existence, would allow the entire biomass to emerge within a single year, and certainly within 6000. Over to the biologists as to why it actually took billions!
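The factor of 20 can be checked with a back-of-envelope estimate (mine, not Carroll’s, using round numbers): the typical energy of a thermal photon is proportional to the temperature of its source, so the ratio of solar to terrestrial photon energies is roughly T_Sun/T_Earth ≈ 5800 K / 290 K ≈ 20. Because the Earth re-radiates essentially the same total energy it absorbs, each incoming solar photon is replaced by about 20 outgoing infrared photons; and since the entropy of thermal radiation is roughly proportional to the number of photons, the entropy carried by the radiation grows about twentyfold in the process.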

The real creation of our observable universe, 13.6 billion years ago, suggests an even bigger conundrum: how did the universe’s initial state of low entropy arise? One possibility is that in an infinite and everlasting universe, entropy fluctuates. It is therefore conceivable that we could be in a 14 billion year period in which entropy has increased following a long-ago downward fluctuation, the end of which we perceive to be the start of “time”. Carroll examines this thesis, and points out its flaw: a random fluctuation capable of producing human beings would be remarkable enough (although had it not happened we would not be here to ask the question), so it seems too much to accept an entropic fluctuation that produced the order encoded in galaxies of stars – and much else that, so far as we can tell, is unnecessary for our existence.

Carroll does not discuss whether it might be “easier” to fluctuate billions of galaxies into existence than to produce sentient life. After all, if thermodynamics alone could produce a biosphere in a year, biology must introduce lots of “friction” into evolution. Billions of galaxies courtesy of fluctuation, combined with the chance that there is an Earth-like environment somewhere, might be a more efficient route to a winning lottery ticket. Can we rule that out so easily? Although it is hugely unlikely, is it any less likely than the chance that out of the effectively infinite possible combinations of DNA, it was the ones that made me and you that burst into life, enabling us to know that there is a universe?

Possibly it is. You might disagree with Carroll; you might disagree with me; but a book that makes you think is worth reading. Whether the future will show Carroll’s ideas are forever or just the latest in a never-ending debate, only time will tell.

A many-worlds thriller

History is replete with tales of scientists behaving badly, particularly when one of them dares to challenge the theories of another. Faraday battled with Ampère about the finer points of electromagnetic theory. Einstein sharply disagreed with his colleagues about the emerging field of quantum mechanics, famously declaring that “God does not play dice with the universe”. And Newton fought with everybody, including Huygens, Hooke, Flamsteed and, of course, Leibniz.

Readers who do not understand the passionate intensity of scientific arguments may find the events in Juli Zeh’s novel Dark Matter perplexing. But those who do will feel an instant affinity for the book’s central characters Sebastian and Oskar, two physicists who “were said to love physics even more than they loved each other, and [who] fought over it with the passion of rivals”. As their story shows, the opposite of love is not hate; it is indifference. Unfortunately, their shared intellectual love affair takes a destructive turn that reverberates throughout the narrative.

Zeh tells her story in bits and pieces, allowing these to accumulate slowly until they form a dazzling whole. In the beginning, we see two lifelong friends bitterly debating the philosophical implications of a physics theory that invokes the possibility of parallel worlds. Then a young boy is kidnapped and a diabolical ransom demanded. An anaesthesiologist meets a grisly end. A loyal wife loses faith in her husband. A scientist’s carefully structured life unravels. And eventually an unorthodox detective with a love of physics and an inoperable brain tumour steps in to solve his final case by connecting these seemingly random events.

A bestseller in Germany when it first appeared in 2007 under the title Schilf, this new English translation of Dark Matter (published as In Free Fall in the US) follows in the footsteps of other novels by authors who have found inspiration in esoteric physics, notably Jeanette Winterson’s Gut Symmetries and Jonathan Lethem’s As She Crawled Across the Table. But where Winterson embraced string theory and Lethem mined the mother lode of wormholes and extra dimensions, Zeh finds her muse in the “many worlds” interpretation of quantum mechanics.

First proposed in the 1950s by the physicist Hugh Everett III, the premise of the controversial “many worlds” hypothesis is straightforward enough. In any quantum system, every possible outcome for an experiment is present simultaneously in a superposition of states. The sum of all those outcomes is described by the wavefunction. It is only when we observe the system by making a measurement that the wavefunction collapses and all of those possibilities reduce to a single “real” event: the outcome of our observation.

But what happens to those other possibilities once the wavefunction has collapsed? The strictest interpretation of quantum theory simply assumes that by necessity all the other potential outcomes vanish once a measurement is made. Everett offered an alternative: perhaps the wavefunction continues to evolve, forever splitting into other wavefunctions in a never-ending tree, with every branch becoming an entire universe. In this way, every potential outcome contained in the wavefunction – a photon appearing as a particle or wave; a boy being kidnapped or not kidnapped – is realized in its own separate universe. Perhaps, as Sebastian puts it, “Everything that is possible happens.”

In Zeh’s novel, “many worlds” becomes a richly complex metaphor for regret over the road not taken. As in physics, so in life: our choices collapse our wavefunction and set us on a certain course. Sebastian’s wife, Maike, exists in a nebulous superposition of states until one day she meets her future husband on the street and her wavefunction collapses into marriage and motherhood. But perhaps there exists a parallel universe where she made a different choice, with a very different outcome.

Dark Matter is filled with split universes. Inseparable back in their university days, the two friends fall out when Sebastian quarrels with Oskar out of jealousy, and their personal and professional paths diverge. Yet even though he has chosen a rather sedate, traditional life as a happily married academic, Sebastian is filled with regret at what he has lost: those heady, passionate early days with Oskar, his intellectual soul mate. He clings to the notion of many worlds, reasoning that “[T]here must be other universes in which things went differently… In which Oskar [and I] would never lose each other.”

For his part, Oskar is equally bent on forcing his friend to confront the reality of the break, with an eye toward winning him back. It is a strategy with tragic consequences. But by far the most compelling character is Detective Schilf, whose world diverged into “before” and “after” following the loss of his wife and child. And now his mind is splitting, too, thanks to a brain tumour that he nicknames “the Observer”.

Zeh skilfully pulls together these disparate threads into a compelling intellectual thriller, in which the “villain” turns out to be as mysteriously elusive as the quantum theory of gravity Oskar pursues so single-mindedly. She only stumbles once, with the inexplicable inclusion of a ham-fisted chapter that consists of little more than Sebastian’s monologue detailing his thoughts about time, causality, coincidence, free will and the multiverse. It is overly didactic and jolts the reader out of the story just as the narrative reaches its climax. Zeh’s prose is most effective when she lets her big ideas lurk in the background, rather than take centre stage.

That quibble aside, Dark Matter admirably showcases Zeh’s meticulous plotting, skilful foreshadowing and lyrical turns of phrase; Christine Lo’s translation is sparsely elegant. Perhaps in a different novel, the motives of Zeh’s characters, and their wildly irrational responses to events as they unfold, would strike the reader as highly improbable, straining the willing suspension of disbelief to a breaking point. But in a fictional world where “everything that is possible happens”, these are just other branches in the wavefunction.

Dark energy: how the paradigm shifted

Arguably the greatest mystery facing humanity today is the prospect that 75% of the universe is made up of a substance known as “dark energy”, about which we have almost no knowledge at all. Since a further 21% of the universe is made from invisible “dark matter” that can only be detected through its gravitational effects, the ordinary matter and energy making up the Earth, planets and stars is apparently only a tiny part of what exists. These discoveries require a shift in our perception as great as that made after Copernicus’s revelation that the Earth moves around the Sun. Just 25 years ago most scientists believed that the universe could be described by Albert Einstein and Willem de Sitter’s simple and elegant model from 1932 in which gravity is gradually slowing down the expansion of space. But from the mid-1980s a remarkable series of observations was made that did not seem to fit the standard theory, leading some people to suggest that an old and discredited term from Einstein’s general theory of relativity – the “cosmological constant” or “lambda” (Λ) – should be brought back to explain the data.

This constant had originally been introduced by Einstein in 1917 to counteract the attractive pull of gravity, because he believed the universe to be static and eternal. He considered it a property of space itself, but it can also be interpreted as a form of energy that uniformly fills all of space; if Λ is greater than zero, the uniform energy has negative pressure and creates a bizarre, repulsive form of gravity. However, Einstein grew disillusioned with the term and finally abandoned it in 1931 after Edwin Hubble and Milton Humason discovered that the universe is expanding. (Intriguingly, Isaac Newton had considered a linear force behaving like Λ, writing in his Principia of 1687 that it “explained the two principal cases of attraction”.)

Λ resurfaced from time to time, seemingly being brought back into cosmology whenever a problem needed explaining – only to be discarded when more data became available. For many scientists, Λ was simply superfluous and unnatural. Nevertheless, in 1968 Yakov Zel’dovich of Moscow State University convinced the physics community that there was a connection between Λ and the “energy density” of empty space, which arises from the virtual particles that blink in and out of existence in a vacuum. The problem was that the various unrelated contributions to the vacuum energy meant that Λ, if it existed, would be up to 120 orders of magnitude greater than observations suggested. It was thought there must be some mechanism that cancelled Λ exactly to zero.

In 1998, after years of dedicated observations and months of uncertainty, two rival groups of supernova hunters – the High-Z Supernova Search Team led by Brian Schmidt and the Supernova Cosmology Project (SCP) led by Saul Perlmutter – revealed the astonishing discovery that the expansion of the universe is accelerating. A cosmological constant with a value different to that originally proposed by Einstein for a static universe – rebranded the following year as “dark energy” – was put forward to explain what was driving the expansion, and almost overnight the scientific community accepted a new model of the universe. Undoubtedly, the supernova observations were crucial in changing people’s perspective, but the key to the rapid acceptance of dark energy lies in the decades before.

Inflation and cold dark matter

Our story begins in 1980 when Alan Guth, who was then a postdoc at the Stanford Linear Accelerator Center in California, suggested a bold solution to some of the problems with the standard Big Bang theory of cosmology. He discovered a mechanism that would cause the universe to expand more, in a time interval of about 10^–35 s just after the Big Bang, than it has done in the estimated 13.7 billion years since. The implications of this “inflation” were significant.

Figure 1

Einstein’s general theory of relativity, which has so far withstood every test made of it, tells us that the curvature of space is determined by the amount of matter and energy in each volume of that space – and that only for a specific matter/energy density is the geometry Euclidean or “flat” (figure 1). In inflationary cosmology, space is stretched so much that even if the geometry of the observable universe started out far from flat, it would be driven towards flatness – just as a small patch on the surface of a balloon looks increasingly flat as the balloon is blown up. By the mid-1980s a modified version of Guth’s model was overwhelmingly accepted by the physics community.

The problem was that while inflation suggested that the universe should be flat and so be at the critical density, the actual density – calculated by totting up the number of stars in a large region and estimating their masses from their luminosity – was only about 1% of the required value for flatness. In other words, the observed mass density of conventional, “baryonic” material (i.e. protons and neutrons) appeared far too low. Moreover, the amount of baryonic matter in the universe is constrained by the theory of nucleosynthesis, which describes how light elements (hydrogen, helium, deuterium and lithium) formed in the very early universe. The theory can only match observations of the abundances of light elements if the density of baryonic matter is 3–5% of the critical density, with the actual value depending on the rate of expansion.
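For orientation, the critical density follows directly from the Friedmann equation; taking a Hubble constant of about 70 km s⁻¹ Mpc⁻¹ (a representative value, used here only to set the scale) gives

\[
\rho_{c} = \frac{3H_{0}^{2}}{8\pi G} \approx 9\times10^{-27}\ \mathrm{kg\,m^{-3}},
\]

the equivalent of roughly five hydrogen atoms per cubic metre of space – of which the stars counted in the surveys supplied only about a hundredth.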

To make up the shortfall, cosmologists concluded that there has to be a lot of extra, invisible non-baryonic material in the universe. Evidence for this dark matter had been accumulating since 1932, when Jan Oort realized that the stars in the Milky Way are moving too fast to be held within the galaxy if the gravitational pull comes only from the visible matter. (At about the same time, Fritz Zwicky also found evidence for exotic hidden matter within clusters of galaxies.) Inevitably, the idea of dark matter was highly controversial and disputes over its nature rumbled on for the next 50 years. In particular, there were disagreements about how fast the dark-matter particles were moving and how this would affect the formation of large-scale structure, such as galaxies and galaxy clusters.

Then in 1984, a paper by George Blumenthal, Sandra Faber, Joel Primack and Martin Rees convinced many scientists that the formation of structure in the universe was most likely if dark-matter particles have negligible velocity, i.e. that they are “cold” (Nature 311 517). They found that a universe with about 10 times as much cold dark matter (CDM) as baryonic matter correctly predicted many of the observed properties of galaxies and galaxy clusters. The only problem with this “CDM model” was that the evidence pointed to the total matter density being low – barely 20% of the critical density. However, because of the constraints of inflation, most scientists hoped that the “missing mass density” would be found when measurements of dark matter improved.

Problems with the standard theory

At this point the standard cosmological model was a flat universe with a critical density made up of a small amount of baryonic matter and a majority of CDM. Apart from the fact that most of the matter was thought to be peculiar, this was still the Einstein–de Sitter model, and the theoretical prejudice for it was strong. Unfortunately for the inflation plus CDM model, it came with one very odd prediction: it said that the universe is no more than 10 billion years old, whereas, at the time, some stars were thought to be much older. For this reason, and because observations of the distribution of matter favoured a low mean mass density, the US cosmologists Michael Turner, Gary Steigman and Lawrence Krauss published a paper in June 1984 that investigated the possibility of a relic cosmological constant (Phys. Rev. Lett. 52 2090).

The presence of Λ would cause a slight gravitational repulsion that acts against attractive gravity, meaning that the expansion of the universe would slow down less quickly. This implied that the universe was older than people thought at the time and so could accommodate its most ancient stars. Although Turner, Steigman and Krauss realized that Λ could solve some problems, they were – considering the constant’s chequered past – still wary about including it in any sensible theory. Indeed, the trio paid much more attention to the possibility that the additional mass density required for a flat universe was provided by relativistic particles created relatively recently (in cosmological terms) by the decay of massive particles surviving from the early universe.
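The age problem is easy to quantify. In the Einstein–de Sitter model the age of the universe is exactly two-thirds of the Hubble time; with a Hubble constant of about 70 km s⁻¹ Mpc⁻¹ (an illustrative value – the Hubble constant was itself poorly known at the time) this gives

\[
t_{0} = \frac{2}{3H_{0}} \approx 9\ \mathrm{billion\ years},
\]

well short of contemporary estimates of the ages of the oldest globular-cluster stars. Because a positive Λ means the expansion was slower in the past for the same present-day expansion rate, adding it stretches the age and eases the conflict.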

One of the other people to tentatively advocate the return of the cosmological constant was James Peebles from Princeton University, who at the time was studying how tiny fluctuations in the density of matter would grow, due to gravitational attraction, then ultimately collapse to form galaxies. Writing in a September 1984 paper in The Astrophysical Journal (284 439), he likewise deduced that the data pointed to an average mass density in the universe of about 20% of the critical value – but he went further and said it might be reasonable to invoke a non-zero cosmological constant in order to meet the new constraints from inflation. Although Peebles was fairly cautious about the idea, this paper helped bring Λ out of obscurity and began to pave the way to the acceptance of dark energy.

Map of the cosmic microwave background

It would be tempting to think that the path to dark energy was now clear. However, astronomers realized that the mean mass density of the universe could still be as high as the critical density if there was a lot of extra dark matter hidden in the vast spaces, or voids, between clusters of galaxies. Indeed, when Marc Davis, George Efstathiou, Carlos Frenk and Simon White (following work by Nick Kaiser) ran computer simulations of the evolution of a universe dominated by CDM, they found that dark matter and luminous matter were distributed differently, with more CDM in the voids. If galaxies formed only where the overall mean density was high, the simple Einstein–de Sitter flat cosmology could still agree with observations and we would not need to invoke the idea of dark energy at all.

But for those opposed to the idea of dark energy, the problem was that there was no sign of lots of missing mass in the voids. Indeed, when Lev Kofman and Alexei Starobinskii calculated the size of the tiny temperature variations in the cosmic microwave background (CMB) radiation, using different models of the universe, they found that adding Λ to the CDM model predicted fluctuations that would provide a better explanation of the observed distribution of galaxy clusters. Even if Λ were not included in the theory, observations in the late 1980s suggested that cosmological structure on very large scales could be more readily explained by a low-density universe and this, obviously, was incompatible with inflation.

Nevertheless, many people continued to believe that the idea of introducing another parameter, such as Λ, went against the principle of Occam’s razor, given that the data were still so poorly determined. It was not so much that physicists were deeply attached to the standard model, more that, like Einstein, they did not want to complicate the theory unnecessarily. Indeed, at the time, almost anything seemed preferable to the addition of Λ. As George Blumenthal, Avishai Dekel and Primack commented in 1988, introducing Λ would require “a seemingly implausible amount of fine-tuning of the parameters of the theory” (Astrophys. J. 326 539).

They instead proposed that a low-density, negatively curved universe with zero cosmological constant could explain the observed properties of galaxies, even up to large scales, if CDM and baryons contributed comparably to the mass density. This model, they admitted, conflicted with nucleosynthesis bounds, with inflation’s prediction that the universe is flat and with the small observational limit on the size of fluctuations in the CMB, but they believed that there were potential solutions. It seemed so much more aesthetically pleasing for Λ to simply be zero.

Surprising results

The quiet breakthrough came in 1990. Steve Maddox, Will Sutherland, George Efstathiou and Jon Loveday published the results of a study of the spatial distribution of galaxies, based on 185 photographic plates obtained by the UK Schmidt Telescope Unit in Australia (Mon. Not. R. Astron. Soc. 242 43). High-quality, glass copies of the plates were scanned using an automatic plate measuring (APM) machine that had recently been developed at Cambridge University by Edward Kibblewhite and his group. This remarkable survey – the largest in more than 20 years – covered more than 4300 square degrees of the southern sky and included about two million galaxies, looking deep into space and far back in time.

Astonishingly, the results from the APM galaxy survey did not match the standard CDM plus inflation model at all. On angular scales greater than about 3°, the survey provided strong evidence for the existence of galaxy clustering that was simply not predicted by the standard model. In 1990 Efstathiou, Sutherland and Maddox wrote a forthright letter to Nature (348 705), in which they argued that CDM and baryons accounted for only 20% of the critical density. The remaining 80%, they inferred, was provided by a positive cosmological constant, and this soon became known as the ΛCDM model.

The case for a low-density CDM model came from the APM galaxy survey and from a redshift survey of over 2000 galaxies detected by the Infrared Astronomical Satellite (IRAS). The case for a positive cosmological constant now had several arguments in its favour: inflation, which required a flat universe; the small size of the temperature fluctuations in the CMB; and the age problem. “A positive cosmological constant”, wrote Efstathiou, Sutherland and Maddox, “could solve many of the problems of the standard CDM model and should be taken seriously.” This was the strongest appeal yet made in favour of bringing Einstein’s Λ back into cosmology. The APM result for a low mass density of the universe was later confirmed by the 2dF Galaxy Redshift Survey and the Sloan Digital Sky Survey.

Return of the cosmological constant

Soon, others also began looking seriously at the case for ΛCDM. For example, in 1991 one of the present authors (OL), with Per Lilje, Primack and Rees, studied the implications of a cosmological constant for the growth of structure, and concluded that such a model agreed with the data available at the time (Mon. Not. R. Astron. Soc. 251 128). But researchers were still reluctant to embrace Λ fully. In 1992 Sean Carroll, William Press and Edwin Turner underlined the problems of considering Λ as the energy density of the vacuum – the coincidence problem (figure 2) and the fact that quantum mechanics predicts a far higher value than observations permit (Ann. Rev. Astron. Astrophys. 30 499). They pointed out that a flat universe plus Λ model effectively required the inclusion of non-baryonic CDM, which meant there were then two highly speculative components of the universe.

Figure 2

In 1993 an article appeared in Nature (366 429) on the baryon content of galaxy clusters, written by Simon White, Julio Navarro, August Evrard and Carlos Frenk. They studied the Coma cluster of galaxies, which is about 100 megaparsecs away from the Milky Way and contains more than 1000 galaxies. It is assumed, from satellite evidence, to be typical of all clusters rich in galaxies and its mass has three main components: luminous stars; hot, X-ray emitting gas; and dark matter. White and colleagues realized that the ratio of baryonic to total mass in such a cluster would be fairly representative of the ratio in the universe as a whole. Plugging in the baryon mass fraction from the nucleosynthesis model would then give a measure of the universe’s mean mass density.

After taking an inventory, using the latest data and computer simulations, the group concluded that the baryonic matter was a larger fraction of the total mass of the galaxy cluster than was predicted by a combination of the nucleosynthesis constraint and the standard CDM inflationary model. Baryons could have been produced in the cluster during its formation (by cooling, for example) but the number created would not have been enough to explain the discrepancy. The most plausible explanations were either that the usual interpretation of element abundances (nucleosynthesis theory) was incorrect, or that the mean matter density fell well short of the critical density. Once again, the standard CDM model, with the mass density equal to the critical density, was inadequate. The way to satisfy the constraint from inflation that the universe is flat would be to add a cosmological constant.
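The arithmetic of the cluster argument can be sketched with round numbers (illustrative values, not the precise figures of the 1993 paper): if nucleosynthesis fixes the baryon density at about 5% of the critical density, and clusters show that baryons make up about 15% of their total mass, then the mean matter density is

\[
\Omega_{m} \approx \frac{\Omega_{b}}{f_{b}} \approx \frac{0.05}{0.15} \approx 0.3,
\]

far below the critical value of 1 demanded by the Einstein–de Sitter model – leaving something like a cosmological constant to make up the difference in a flat universe.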

Luckily for the theorists, astronomers continued to refine and improve their observations through the development of new equipment and techniques. In particular, tiny variations in the CMB – as measured by NASA’s Cosmic Background Explorer (COBE) satellite and later by the Wilkinson Microwave Anisotropy Probe (WMAP) satellite – dramatically showed that the mass density fell far short of the critical density and yet favoured a universe with an overall flat geometry. Something other than CDM must be providing the mass/energy needed to reach the critical value.

Although there were still a number of proposed variations on the CDM model, ΛCDM was the only model to fit all the data at once. It appeared that matter accounted for 30–40% of the critical density and Λ, as vacuum energy, accounted for 60–70%. Jeremiah Ostriker and Paul Steinhardt succinctly summed up the observational constraints in 1995 in an influential letter to Nature (377 600). The case rested strongly on measurements of the Hubble constant and the age of the universe, and results from the Hipparcos satellite in 1997 finally brought age estimates of the oldest stars down to 10–13 billion years.

Most physicists were still reluctant to consider the idea that Λ should be brought back into cosmology, but the stage was now set for a massive shift in opinion (see “Dark energy” by Robert P Crease). Some people suggested other possibilities for the missing component, and the name “dark energy” was introduced by Michael Turner in 1999 to encompass all the ideas. When the supernova data of the High-Z and SCP teams indicated that the expansion of the universe is accelerating, the rapid embrace of dark energy was in large part due to the people who had argued for its return in the 1980s and 1990s.

Towards a new paradigm shift?

If recent history can teach us anything, it is to not ignore the evidence when it is staring us in the face. While the addition of two poorly understood terms – dark energy and dark matter – to Einstein’s theory may spoil its intrinsic elegance and beauty, simplicity is not in itself a law of nature. Nevertheless, although the case for dark energy has been strengthened over the past dozen years, many people feel unhappy with the current cosmological model. It may be consistent with all the current data but there is no satisfactory explanation in terms of fundamental physics. As a result, a number of alternatives to dark energy have been proposed and it looks likely that there will be another upheaval in our comprehension of the universe in the decade ahead (see “Future paradigm shifts?” below).

Figure 3

The whole focus of cosmology has altered dramatically and many astronomical observations now being planned or under way are mainly aimed at discovering more about the underlying cause of cosmic acceleration. For example, the ground-based Dark Energy Survey (DES), the European Space Agency’s proposed Euclid space mission, and NASA’s planned space-based Joint Dark Energy Mission (JDEM) will use four complementary techniques – galaxy clustering, baryon acoustic oscillations, weak gravitational lensing and type Ia supernovae – to measure the geometry of the universe and the growth of density perturbations (figure 3). The DES will use a 4 m telescope in Chile with a new camera that will peer deep into the southern sky and will map the distribution of 300 million galaxies over 5000 square degrees (an eighth of the sky) out to a redshift of 2. The five-year survey involves over 100 scientists from the US, the UK, Brazil and Spain, and is due to begin in 2011.

The mystery of dark energy is closely connected to many other puzzles in physics and astronomy, and almost any outcome of these surveys will be interesting. If the data show there is no longer a need for dark energy, it will be a major breakthrough. If, on the other hand, the data point to a new interpretation of dark energy, or to a modification to gravity, it will be revolutionary. Above all, it is essential that astrophysics continues to focus on a diversity of issues so that individuals have the chance to do creative research and suggest new ideas. The next paradigm shift in our understanding may not come from the direction we expect.

Future paradigm shifts?

Cosmologists still have no real idea what dark energy is and it may not even be the answer to what makes up the bulk of our universe. Here are a few potential paradigm shifts that we may have to contend with.

  • Violation of the Copernican principle At the moment we assume that the Milky Way does not occupy any special location within the universe. But if we happen to be living in the middle of a large, underdense void, then it could explain why the type Ia supernovae (our strongest evidence for cosmic acceleration) look dim, even if no form of dark energy exists. However, requiring our galaxy to occupy a privileged position goes against the most basic underlying assumption in cosmology.
  • Is dark energy something other than vacuum energy? Although vacuum energy is mathematically equivalent to Λ, the value predicted by fundamental theory is orders of magnitude larger than observations can possibly permit and there is no accepted solution to this problem. Many interesting ideas have been proposed, including time-varying dark energy, but even they do not address the “coincidence” problem of why the present epoch is so special.
  • Modifications to our understanding of gravity It may be that we have to look beyond general relativity to a more complete theory of gravity. Exciting new developments in “brane” theory suggest the influence of extra spatial dimensions, but it is likely that the mystery of dark energy and cosmic acceleration will not be solved until gravity can successfully be incorporated into quantum field theory.
  • The multiverse Λ can have a dramatic effect on the formation of structure in the universe. If Λ is too large and positive, it would have prevented gravity from forming large galaxies and life as we know it would never have emerged. Steven Weinberg and others used this anthropic reasoning to explain the problems with the cosmological constant and predicted a value for Λ that is remarkably close to what was finally observed. However, this use of probability theory requires an ensemble of universes – possibly an infinite number – in which Λ takes on all possible values. Many scientists mistrust anthropic ideas because they do not make falsifiable predictions and seem to imply some sort of life principle or intention co-existing with the laws of physics. Nevertheless, string theory predicts a vast number of vacua with different possible values of physical parameters, and to some extent this legitimates anthropic reasoning as a new basis for physical theories.

At a glance: The paradigm shift to dark energy

  • Dark energy is a mysterious substance believed to constitute 75% of the current universe. Proposed in 1998 to explain why the expansion of the universe is accelerating, dark energy has negative pressure and causes repulsive gravity
  • Data suggest that dark energy is consistent (within errors) with the special case of the cosmological constant (Λ) that Einstein introduced in 1917 (albeit for a different reason) and then abandoned. Λ can be interpreted as the vacuum energy predicted by quantum mechanics, but its value is vastly smaller than anticipated
  • During the 20th century, Λ was reintroduced a number of times to explain various observations, but many physicists thought it a clumsy and ad hoc addition to general relativity
  • The rapid acceptance of dark energy a decade ago was largely due to the work of researchers in the 1980s and early 1990s who concluded that, in spite of the prejudice against it, Λ was necessary to explain their data
  • We still have no fundamental explanations for dark energy and dark matter. The next paradigm shift could be equally astonishing and we must be ready with open minds

More about: The paradigm shift to dark energy

L Calder and O Lahav 2008 Dark energy: back to Newton? Astron. Geophys. 49 1.13–1.18
B Carr and G Ellis 2008 Universe or multiverse? Astron. Geophys. 49 2.29–2.33
E V Linder and S Perlmutter 2007 Dark energy: the decade ahead Physics World December pp24–30
P J E Peebles and B Ratra 2003 The cosmological constant and dark energy arXiv: astro-ph/0207347v2

Web life: Physics-GamesDotNet

So what is this site about?

This is one of those sites where the name says it all. Physics-GamesDotNet is a one-stop shop for clever, innovative and sometimes silly games that feature physically realistic actions and effects. Common formats include bridge-building games, demolition games, brick-stacking games, catapult games and games that require players to move objects from one place to another using levers, inclined planes, rollers and other simple mechanisms. This may not sound like groundbreaking stuff – anyone for a game of Pong? – but closer inspection reveals some surprisingly sophisticated behaviour. Thanks to software that was once the preserve of scientific simulations, the towers in these games totter and tip before they fall over; rolling balls slow down on rough surfaces; and bridges give way under heavy weights. The result is a cross between a game and a basic physics lesson. It is not quite educational, but it is hardly mindless entertainment either.

Can you give me some examples?

Most readers will be familiar with the game Tetris, which requires players to manoeuvre differently shaped blocks into position. The Physics-GamesDotNet variant, 99 Bricks, uses the same set-up, but here the resulting stack is inherently unstable; players must build carefully to ensure that their tower stays upright. Another game, Water Werks, is like a liquid version of pinball: players use the pressure from a (virtual) jet of water to turn wheels and activate springs that guide balls towards an exit. And then there are some games that defy easy categorization. In Home Sheep Home, for example, players must solve physics-based puzzles to guide Shaun the Sheep and his woolly companions back to their barn.

Who created the site?

The site’s administrator is Jeremy Oduber, a student at the University of Amsterdam who has been interested in both science and games since he was a child. He finds physics games particularly appealing because “you can see physics happening all around you every day”. Moreover, he believes that games that incorporate realistic physics tend to be more open-ended than those that do not – meaning that there is usually no “best” way of beating a game or completing a level.

But how much physics is there, really?

It depends on how you look at it. To the casual gamer, the answer is probably “not much”. Although some bridge-building games do provide qualitative feedback on stresses and strains, anyone who wants numerical values for, say, a virtual object’s mass would be better off using a stand-alone physics simulator like Algodoo (see “Web life: Phun”). However, those who dig a little deeper into the world of physics simulation may be surprised at just how much complexity is involved in these relatively simple games. A typical physics simulator, or “engine”, incorporates both gravity and some kind of collision-response mechanism when solving the equations of motion for virtual cannonballs, blocks and so on, while more sophisticated engines also factor in rotations. Not too long ago, only supercomputers could perform such calculations rapidly enough to simulate realistic-looking physical behaviour. So a better answer to the question might be “quite a lot, actually – you just have to look for it”.
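For readers curious about what such an engine actually involves, the sketch below (a toy example of our own in Python – it is not code from any of the games on the site) steps a single ball forward in time under gravity, reverses and damps its vertical velocity when it hits the floor, and applies friction while it rolls along a rough surface.

# toy_engine.py -- a minimal "physics engine" loop: gravity, time-stepped
# motion, a damped bounce off the floor and friction on a rough surface.

GRAVITY = -9.81     # vertical acceleration, m/s^2
RESTITUTION = 0.6   # fraction of vertical speed kept after each bounce
FRICTION = 0.8      # horizontal deceleration while on the floor, m/s^2
DT = 0.01           # time step, s


class Ball:
    def __init__(self, x, y, vx, vy):
        self.x, self.y = x, y
        self.vx, self.vy = vx, vy


def step(ball):
    """Advance the ball by one time step (semi-implicit Euler)."""
    ball.vy += GRAVITY * DT     # gravity changes the vertical velocity
    ball.x += ball.vx * DT      # then the positions are updated
    ball.y += ball.vy * DT

    # Collision response: if the ball has penetrated the floor, put it back
    # on the surface and reverse (and damp) its vertical velocity.
    if ball.y < 0.0:
        ball.y = 0.0
        ball.vy = -ball.vy * RESTITUTION

    # Rough surface: friction slows the horizontal motion while on the floor.
    if ball.y == 0.0 and ball.vx != 0.0:
        decel = FRICTION * DT
        ball.vx = max(ball.vx - decel, 0.0) if ball.vx > 0 else min(ball.vx + decel, 0.0)


if __name__ == "__main__":
    b = Ball(x=0.0, y=2.0, vx=3.0, vy=0.0)   # dropped from 2 m, rolling at 3 m/s
    for i in range(500):                     # five seconds of simulated time
        step(b)
        if i % 100 == 0:
            print(f"t = {i * DT:4.2f} s   x = {b.x:5.2f} m   y = {b.y:5.2f} m")

A real game engine does the same thing in two or three dimensions, adds rotation, stacking and many simultaneous contacts, but the basic loop – integrate the motion, detect collisions, respond to them – is much the same.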

Who designs the games?

The games Oduber selects have been developed by people all over the world, from professional designers to teenagers working out of their bedrooms. To appear on the site, games must be bug-free and fun to play – and, of course, they must incorporate physics.

Who is it aimed at, and why should I visit?

Games like the ones on this site are, in Oduber’s view, “great at illustrating some basic concepts of physics in a fun way”. For younger children, we agree with him – particularly if, as is often the case, hands-on alternatives to cartoon wheels and levers are unavailable. But students with exams looming this month should not treat a few rounds of Crush the Castle as a substitute for reviewing their physics notes. Apart from anything else, the games on this site are amazingly addictive. We challenge readers to navigate the gravitational fields in Cosmic Crush or shoot their way through the levels in Ragdoll Cannon without feeling a little rush of excitement. Go on. Try it.
