
Spotting fake bank notes with butterfly colour

When it comes to head-turning fashion, the animal kingdom often steals the show with its fantastic “structural colours” that can manipulate light in some weird and wonderful ways. One such beauty is Papilio blumei, a butterfly native to Indonesia, whose wings manage to combine green and blue in varying mixes depending on your viewing angle. This particular effect has now been mimicked by a group of researchers in the UK who say that their man-made structural colours could be added to bank notes to help prevent forgery.

At first glance the wings of Papilio blumei appear to be dominated by bright green coloured areas. Closer inspection, however, reveals that the wings are speckled with cavities that are yellow at the centre, gradually blending into blue at the tips. Light from the centre of the cavity is directly reflected whereas light hitting the edges is initially deflected towards a substructure in the cavity, which consists of alternating layers of cuticle and air. When it finally re-emerges, the light has been partially polarized and comprises a mixture of wavelengths, creating the effect of structural colour.

The mechanism of this colour mixing was detailed in a pair of papers published in 2000 and 2001 co-authored by Pete Vukusic at the University of Exeter. Ten years later Vukusic has teamed up with researchers at the University of Cambridge to recreate the effect in the laboratory. The researchers implanted plastic spheres, with diameters of just 5 µm, into a gold-coated silicon substrate, before blasting them away using an ultrasound etching technique. This left a number of dimples in the gold surface, which simulate the butterfly’s wing cavities. The researchers then used atomic layer deposition to overlay 11 alternating layers of alumina and titania, recreating the multilayer structures within the butterfly’s cavities. Each layer of titania was approximately 60 nm thick and each layer of alumina approximately 80 nm, with the sizes chosen to match the green-yellow reflectance band of Papilio blumei.
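As a rough check on those numbers, the stack’s reflectance peak can be estimated from the standard first-order Bragg condition for a two-material multilayer. The layer thicknesses below come from the article; the refractive indices are assumed typical values for titania and alumina, not figures from the paper.

```python
# First-order Bragg condition for a two-material stack at normal incidence:
#     lambda_peak ~= 2 * (n1*d1 + n2*d2)
# Indices n are assumed typical values; thicknesses d are from the article.

n_titania, d_titania = 2.4, 60e-9   # assumed index; ~60 nm layers
n_alumina, d_alumina = 1.7, 80e-9   # assumed index; ~80 nm layers

lambda_peak = 2 * (n_titania * d_titania + n_alumina * d_alumina)
print(f"estimated Bragg peak: {lambda_peak * 1e9:.0f} nm")  # 560 nm, in the green-yellow band
```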

Scaling-up

Jeremy Baumberg, one of the researchers based at the University of Cambridge, says that it has taken 10 years to recreate the effect because of the intricate nature of the cavity structure. However, he says that the team’s production process could easily be scaled up to produce large quantities of materials that create these colour-mixing effects. He says that the effect could be reproduced using cheaper materials and the only reason gold was used is because his research group is also interested in plasmonics, a field of research involving the interaction of light with metals.

If these structured materials could be mass-produced they could be used to prevent forgery because many items now require some kind of official marker, from bank notes and credit cards to tickets for events. Baumberg believes that marking items with structural colours could offer tighter security than some of the established methods, such as watermarking and holograms, which have become easier to forge in recent years. “The difference is that our process is bottom-up – the final appearance of the colours is altered dramatically by just subtle differences in the initial production. Top-down technologies such as holograms can be quite prescriptive and therefore easier to mimic.”

It is still not fully understood how Papilio blumei manages to develop its structural colours, partly because researchers do not yet have the microscopy techniques to get a detailed view of what is going on as the young butterfly develops inside its cocoon. Baumberg and his team are interested in exploring this question, but they are also interested in other animals and plants whose bodies include structural colours. There are many other examples, such as beetle shells and shrimp eyes, where evolutionary biologists believe that animals may have developed the ability to polarize light as a survival tactic.

This research is published in Nature Nanotechnology.

Particle physicists through the eyes of children

‘Draw me a physicist, please’ credit: CERN

By James Dacey

Here’s my pick from a collection of artwork produced by schoolchildren in France and Switzerland who were asked to “draw me a physicist”.

The kids, who came from 20 primary school classes from the Pays de Gex and the Canton of Geneva, were given the opportunity to visit CERN and to interview some of the physicists there.

“The picture of the world of research we get from them is full of surprises,” explains Corinne Pralavorio, who handled the project on the CERN side. “It’s a mirror, allowing us to see how young people out there perceive scientists.”

There is a history dating back to the 1970s of sociologists using artwork to gauge children’s perceptions of scientists. It is thought of as a useful way to explore some of the assumptions and stereotypes that may encourage or deter pupils from pursuing a career in science.

From 12 to 23 June CERN will exhibit more than 160 drawings and definitions by children on the subject of scientific research. If you can’t make it (or can’t wait that long), then you can see a selection of the images here.

Entangling photons with electricity

Researchers in Cambridge in the UK have succeeded in generating entangled photons using electricity alone, with a new device called an “entangled light-emitting diode” (ELED). The device converts electrical current directly into entangled light rather than relying on laser power as in previous technology. The technique could be a practical way to integrate many entangled light sources together on a single chip – something that will be crucial for making a real-world optical quantum computer.

Entanglement allows particles to have a much closer relationship than is possible in classical physics: if two particles are entangled, we can automatically know the state of one particle by measuring the state of the other – despite the state of either being impossible to guess before the measurement. For example, two photons can be entangled such that they are always measured to have the same linear polarizations, even though we cannot predict that polarization beforehand.

Quantum mechanics also says that the particle can exist in a superposition of two states at the same time. Such a phenomenon could be used to advantage in a quantum computer, which in principle could outperform a classical computer for certain tasks. This is because ordinary computers use bits of information that are assigned either 1 or 0, while a quantum computer would use quantum bits of information, or qubits, that can be in a superposition of both 1 and 0 at the same time. A 1 could represent, say, a horizontally polarized photon, while 0 could represent a vertically polarized one.

Crafting the light

Andrew Shields and Mark Stevenson of Toshiba Research Europe together with colleagues from the University of Cambridge made their ELED using a standard semiconductor fabrication technique similar to those used to make ordinary LEDs. This involved growing semiconductor layers using molecular beam epitaxy followed by processes to define the active area of the LED and add electrical contacts. The ELED differs from an ordinary LED in that it contains quantum dots – tiny nanometre-sized islands of semiconductor.

The quantum dot can be tuned to capture two electrons and two holes, which puts the system into a “biexciton” state. This then decays into a ground state through one of two intermediary exciton states, the pathway determining the polarization of the resulting pairs of photons. If the fine structure splitting between these two states is approximately zero then the only way to determine the decay path is to measure the polarization of the photons – the photons are, therefore, said to be entangled.

Although this process had been used previously to emit single pairs of photons, it had never produced entangled photons in large quantities. Key to achieving this was optimizing the thickness of the semiconductor material surrounding the quantum dot so as to control the supply of current to it, preventing electrons from tunnelling into the quantum dot from the n-doped region, which would destroy the entanglement. It was also important to tailor the single quantum dot at the heart of the device carefully, to ensure that it emitted photons with an energy of 1.4 eV and a very small fine-structure splitting between the two production routes.

High fidelity photons

The device emits individual entangled pairs of photons when a pulsed current is applied and has an “entanglement fidelity” of 0.82 – a figure high enough for it to be used in quantum relays, which are related to core components of quantum computing such as teleportation. Entanglement fidelity is a measure of how pure the entangled light is: any value above 0.5 indicates entanglement, with 1 being the maximum.
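The 0.5 threshold can be made concrete with a toy noise model: a “Werner state”, meaning an ideal Bell state mixed with white noise. This is an illustrative model chosen for the sketch below, not a description of the actual ELED output; for this family of states, entanglement holds exactly when the fidelity exceeds 0.5.

```python
import numpy as np

# Werner state: Bell state |Phi+> with weight p, white noise otherwise.
# Entangled exactly when its fidelity with |Phi+> exceeds 0.5.

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # |Phi+> = (|00> + |11>)/sqrt(2)
bell_rho = np.outer(bell, bell)

def werner(p):
    """Bell state with weight p, maximally mixed (white) noise otherwise."""
    return p * bell_rho + (1 - p) * np.eye(4) / 4

def fidelity(rho):
    """Overlap <Phi+| rho |Phi+> with the ideal Bell state."""
    return float(bell @ rho @ bell)

# Fidelity grows linearly with the Bell fraction: F = (3p + 1)/4. In this
# toy model a Bell fraction of p = 0.76 would correspond to the quoted 0.82.
for p in (0.0, 1 / 3, 0.76, 1.0):
    print(f"Bell fraction p = {p:.2f}  ->  fidelity = {fidelity(werner(p)):.2f}")
```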

Although the researchers previously created entangled light that had a higher fidelity of 0.91, this involved more complicated methods that required shining an intense laser beam onto quantum dots in crystals. The new device, on the other hand, is simply powered by a voltage source. Other laser-driven techniques, such as “parametric down conversion” of photons, can produce entangled light with an even higher fidelity still but these are random processes, which means that the number of entangled photons created in a cycle varies. Indeed, zero, two, or more pairs can be created – something that is a problem for quantum computing applications.

“Quantum dot sources such as the ELED do not suffer from this fundamental limitation and in principle, operate ‘on demand’ generating one entangled photon pair every cycle,” Stevenson told physicsworld.com. “The fidelity of our ELED is remarkable considering this is the first device of its kind. In theory, it could be much higher.”

The Cambridge team hopes that its device could eventually help make practical optical quantum computers that require many entangled light sources on a single chip. This is difficult with other methods that rely on laser light as the power source because the hardware associated with generating, distributing and focusing the light quickly becomes too big and complex. Stevenson says that quantum computing could help to tackle many intractable problems such as climate modelling and in pharmaceutical research.

The work was published in Nature.

Earth’s random walk could jolt particle accelerators

A study in the US has revealed that, apart from motion due to tides, seismic activity and other geophysical phenomena, the ground also moves entirely at random, at least over scales ranging from metres to kilometres. Vladimir Shiltsev came to this conclusion after using data from several particle accelerator facilities, where accurate information about the facility’s precise position is essential. The result confirms a simple equation he put forward to describe the motion and may also prove useful in the design of future particle accelerators.

Much of the early work on random ground motion was done by Russian researchers who had worked on the design of VLEPP, a planned electron–positron collider in Protvino, about 100 km south of Moscow. VLEPP was to have operated in the trillion electron volt range, comparable to the energy of CERN’s Large Hadron Collider, which is today the world’s largest and most powerful accelerator. Although VLEPP was scrapped with the collapse of the Soviet Union, Russian physicists continued to publish papers on random ground motion.

Back-of-the-envelope calculation

Shiltsev, now working at Fermilab, was just starting his career when some of these papers appeared, and has spent the past two decades trying to expand on some of the preliminary findings. This led him to create a simple, back-of-the-envelope calculation to estimate how the distance between two points will change over time. It involves multiplying the distance between two points along an accelerator by both the elapsed time and a coefficient describing the amount of random motion in a given locale, generally around 100 nm in any direction each minute. The square root of that product gives the expected relative displacement.
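That back-of-the-envelope rule, often written as ⟨dX²⟩ = A·T·L, takes only a few lines to evaluate. The coefficient A below is an assumed order-of-magnitude value chosen for illustration, not a site-specific figure from the paper.

```python
import math

# Diffusive "ATL law": the mean-square change in the relative position of two
# points a distance L apart, after a time T, is <dX^2> = A * T * L, so the
# expected rms displacement is sqrt(A * T * L).

A = 1e-5  # um^2 per (s * m): assumed order-of-magnitude ground-motion coefficient

def rms_displacement_um(T_seconds, L_metres):
    """Expected rms relative displacement, in micrometres."""
    return math.sqrt(A * T_seconds * L_metres)

year = 365 * 24 * 3600
# Under this assumed A, two magnets 100 m apart drift by of order 0.2 mm in a year
print(f"100 m baseline, 1 year: {rms_displacement_um(year, 100):.0f} um")
```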

Shiltsev has now confirmed the accuracy of his calculation using alignment data from 15 accelerator facilities in Europe, Japan and the US collected over two decades. Such data, gathered from various sensors and laser trackers, is taken in the course of day-to-day operations. This is because even slight displacements of a magnet or other component can throw an accelerated beam of particles off-course and ruin an experiment.

The reason for the random motion of the ground, however, remains unclear. Shiltsev suspects that a fractal-like structure of the ground might offer an explanation. Shiltsev’s hypothesis is that the ground is analogous to a collection of tightly-packed blocks of similar shapes and varying sizes, with smaller blocks near the surface and larger blocks at greater depths. If each of these blocks jiggles at random, that would explain the effects seen in the alignment data, he says.

Tunnelling for fractals

Geophysicists have observed fractal patterns around active faults. However, more data, especially at varying depths, are needed to connect observed random ground motion to Shiltsev’s large-scale fractal patterns in the Earth. To collect such data, Shiltsev and Fermilab colleague Jim Volk are installing sensors in tunnels in the Deep Underground Science and Engineering Lab (DUSEL), which is being built 2 km below ground in South Dakota’s Homestake mine. DUSEL, which will be used for neutrino research and other experiments, should be operational by 2014.

Practical use of the simple equation confirmed by Shiltsev is unlikely to come until new linear accelerators such as the International Linear Collider (ILC) are built. “Ground motion is one of the major factors limiting the predicted performance of future linear particle colliders,” says Christophe Collette, a CERN researcher working on designs for the Compact Linear Collider (CLIC), an ILC-like machine that would stretch 50 km or more in a straight line. Such colliders would focus their particle beams to just a few nanometres across, requiring even better alignment than accelerators in use today.

The current work is published in Physical Review Letters.

Spot the difference

Can you tell the difference? (credit: US/LHC blog)

By Michael Banks

At first glance it looks like an average webpage from the arXiv preprint server – a website where researchers upload their papers before publishing them in a scientific journal.

But with article authors including “C H Fermi”, “S C Boltzmann” and “L Heisenberg”, you might well be suspicious about whether it is indeed authentic.

The website, called snarXiv, was created by David Simmons-Duffin, a PhD student in high-energy physics at Harvard University. It randomly generates titles and abstracts in high-energy physics, taking into account the latest trends in the subject, and presents them in exactly the same way as the arXiv server does.

Simmons-Duffin writes on his blog that he does not remember exactly why he decided to set up the website. However, he claims that it does serve some purpose.

For example, Simmons-Duffin notes that if you are a graduate student you can “gloomily read through the abstracts, thinking to yourself that you do not understand papers on the real arXiv any better”. And if you are a post-doc then you can keep reloading the webpage “until you find something to work on”.

Simmons-Duffin has even made a game where you have to spot the real title from the randomly generated one (the real one being a title from an arXiv paper and the random one a title from a snarXiv paper).

Try it for yourself. I managed to get 5 out of 8 correct, which ranked me rather unkindly as an “undergraduate”. (Other ranks include “better than a monkey” or “worse than a monkey” and it seems the top rank is “Nobel prizewinner”.)

A liberal sprinkling of quantum dots

Researchers at Rice University claim to have found a way to add electronic and optical elements to the “wonder material” graphene without sacrificing its mechanical properties. They have devised a technique for patterning the hydrogenated form of graphene, known as graphane, with quantum dots – an approach that promises many novel applications.

Graphene consists of 2D sheets of carbon just one atom thick arranged in a honeycomb lattice. Many scientists believe that the material could replace silicon as the electronic material of choice in the future thanks to its unique electronic and mechanical properties that would allow smaller devices to be made. Graphane is formed by simply adding hydrogen atoms to both sides of the graphene matrix. This material is an insulator, whereas graphene behaves like a metal.

Boris Yakobson and colleagues have found that removing hydrogen atoms from 2D sheets of graphane opens up tiny spaces in pure graphene that behave like quantum dots. Quantum dots are nanosized pieces of semiconductor in which electrons (or holes) are confined in 3D and their electronic properties can be controlled by changing the size of the dots. Quantum dots could be used to make nanoelectronic and optical circuits and devices, such as chemical sensors, solar cells and even semiconducting lasers, because they interact with light and magnetic fields in unique ways.

Phase transition

This phase transformation from graphene to graphane, accompanied by the change from metal to insulator, offers exciting opportunities for nanoengineers, says Yakobson. “If experimentally feasible, we can move from the labs to hopefully industrial tests and produce very small stable circuits and devices,” he told physicsworld.com. He says that the new technique has clear advantages over existing methods of creating graphene-based electronic devices, such as cutting nanoribbons and then re-assembling them, which can be a very intricate procedure.

The researchers also discovered that when they removed pieces of the hydrogen sub-lattice, the area left behind tended to be hexagonal in shape with a sharp interface between the graphene and graphane. This means that each dot is self-contained and that little charge leaks across from the graphene quantum dots into the graphane host material. Although the Rice scientists do not yet know how to physically create arrays of quantum dots in sheets of graphane, they believe the obstacle “shouldn’t be insurmountable”.

Before this can happen, though, the team says it needs to better understand distortions at the graphane–graphene interface, as well as whether the interface is likely to suffer from frustration, disordered hydrogen atoms and roughness along the border lines. The researchers also need to find out whether the interface remains robust when placed on a support substrate, such as silicon.

The current work was reported in ACS Nano.

Giant hole opens in Guatemala

Courtesy: Guatemalan Government

By James Dacey

For me it is the sheer precision that is so astonishing.

This image shows a 60 m “sinkhole” that opened up on Sunday in Guatemala City, a result of the tropical storm Agatha that has been bombarding Central America.

The phenomenon is a characteristic feature of karst landscapes, which are found on every continent except Antarctica. The bedrock in these zones is usually formed of carbonates such as limestone, which are highly prone to chemical weathering and dissolution. Sinkholes can result when underground cavities can no longer support the overlying sediment, and they can be triggered by even a small amount of rainfall.

The opening of this sinkhole is not reported to have killed anyone, unlike a separate hole in the same area that killed three people back in 2007.

Double celebration for neutrino lab

Particle physicists at the Gran Sasso laboratory in Italy have two reasons to celebrate. One is the first ever detection, by the OPERA experiment, of a neutrino that has mutated from another kind of neutrino as it travelled through space. The second achievement is the start-up of the ICARUS detector, which, like OPERA, will study neutrinos that have “oscillated” on their journey from the CERN laboratory outside Geneva in Switzerland.

Neutrinos are chargeless fundamental particles that come in three varieties, or “flavours” – electron, muon and tau. In the 1950s the Italian physicist Bruno Pontecorvo predicted that neutrinos should change, or oscillate, from one flavour to another as they travel through space, a property that would imply neutrinos have mass – in contradiction with the basic formulation of the Standard Model of particle physics. This idea was subsequently supported by experiments that found the Sun to be producing fewer electron neutrinos than had been expected, and by later experiments that detected a shortfall in muon neutrinos produced by cosmic rays interacting in the Earth’s atmosphere.

In these experiments, the phenomenon of oscillation is only inferred indirectly. A reduction in the number of neutrinos from their source to their detection is taken to mean that some of these particles have oscillated to a different flavour of neutrino that cannot be picked up by the detector.

A different approach

OPERA and ICARUS, which are located in the laboratory of the Italian National Institute for Nuclear Physics some 1400 m under the surface of the Gran Sasso mountain in central Italy, take a different approach. Both detectors are designed to make a positive sighting of the tau neutrinos that theory predicts will result from the oscillation of some of the muon neutrinos contained within a beam that is produced at CERN and fired 730 km through the Earth to Gran Sasso. Although physicists believe that the various disappearance measurements, taken together, constitute very strong evidence for neutrino oscillation, the tau sightings would rule out the slim possibility that the disappearing muon neutrinos are instead decaying or disappearing off into higher dimensions.

The 1250 tonne OPERA instrument, built by a collaboration of around 170 physicists from 12 countries, detects neutrinos using 150,000 “bricks”, with each brick consisting of many alternating layers of lead and films of nuclear emulsion. These bricks record the tracks of the decay products that result from the interaction of neutrinos with the lead nuclei, each neutrino generating tracks with a distinctive shape. Tau neutrinos produce a charged particle known as a tau lepton, which then decays into a muon, hadron or electron, generating a very short track with a distinctive kink in it.

Weak interaction

The fact that neutrinos interact extremely weakly with normal matter means that only a tiny fraction of the billions of neutrinos in the CERN beam that pass through OPERA every second will leave their mark in the detector. Since it started up in 2006, the experiment has detected a few thousand muon neutrinos but it was not until August 22 last year that it detected its first tau neutrino – a delay that reflects just how delicate the measurement is. The researchers say with a confidence of 98% that their signal is due to a tau neutrino. According to OPERA spokesman Antonio Ereditato of the University of Bern in Switzerland, the unambiguous observation of a tau neutrino will require several more events like the one recorded so far. “This might require a few more years,” he says, “but physicists are patient”.

The other news at the Gran Sasso is the opening of another neutrino detector, ICARUS, which uses quite a different detection technique. First proposed in 1977 by Carlo Rubbia, who would go on to share the Nobel prize in 1984 for the discovery of W and Z bosons, this involves filling a tank with a large amount of liquid argon, lining the walls of the tank with planes of wires and then setting up a large potential difference across the tank. Any charged particles passing through the tank create pairs of positively charged ions and electrons as they travel, with those electrons that do not recombine then drifting towards the wire planes where they register a signal. The spatial sequence of signals recreates the path of the charged particles and reveals whether or not those particles were produced by a tau neutrino. ICARUS recorded its first events on 27 May.

You’re so predictable

When it comes to the actions of our fellow humans, the sequence of events we witness on a daily basis appears to be just as mysterious and confusing as the motion of the stars seemed in the 15th century. At other times, although we are free to make our own decisions, much of our life seems to be on autopilot. Our society goes from times of plenty to times of want, from war to peace and back to war again. It makes one wonder whether humans follow hidden laws – laws other than those of their own making. Are our actions governed by rules and mechanisms that might, in their simplicity, match the predictive power of Newton’s law of gravitation? Heaven forbid, might we go as far as to predict human behaviour?

Until recently we had only one answer to each of these questions: we do not know. As a result, today we know more about Jupiter than we do about the behaviour of the guy who lives next door to us. But we now have access to numerical records of human behaviour that we can use to test models. Just about everything we do leaves digital breadcrumbs in some database, be it e-mails or the times of our phone conversations. The existence of these records raises huge issues of privacy, but it also creates a historic opportunity. It offers unparalleled detail on the behaviour of not one, but millions of individuals.

In the past, if you wanted to understand what humans do and why they do it, you had to become a card-carrying psychologist. Today, you may want to obtain a degree in physics or computer science first: through numerical analysis of data, scientists have found that many aspects of human behaviour follow simple, reproducible patterns governed by wide-reaching laws. Forget dice-rolling or boxes of chocolates as metaphors for life. Think of yourself as a dreaming robot on autopilot, and you will be much closer to the truth.

So you think you’re random?

My work on this topic really took off in the spring of 2004 when I was kindly given access to a substantial digital database with which I could test some models. It was an anonymous record of the e-mails sent by thousands of university students, faculty and administrators. If their activity patterns were random, the time between consecutive e-mails sent by the same individual would fit a Poisson process, which is nothing more than a sequence of truly random events. But it turned out that nobody’s e-mails followed a random, coin-flip-driven Poisson process. Instead, each user’s e-mail pattern was “bursty” – a thunder of e-mails followed by long periods of silence. Such deviations from a purely random pattern offer evidence of a deeper law or pattern that remains to be discovered.
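The contrast between a Poisson process and a bursty, heavy-tailed one can be made concrete with a short simulation. The parameters below (mean gap, tail exponent) are illustrative choices, not values from the e-mail study.

```python
import random

# Compare inter-event times from a Poisson process (exponential gaps, no
# bursts) with a heavy-tailed power-law model (bursts of short gaps broken
# by very long silences). Parameters are illustrative.

random.seed(42)
N = 10_000
mean_gap = 10.0  # average minutes between e-mails

poisson_gaps = [random.expovariate(1 / mean_gap) for _ in range(N)]
bursty_gaps = [random.paretovariate(1.5) for _ in range(N)]  # tail exponent 1.5

def burstiness(gaps):
    """Ratio of the longest gap to the median gap: large => bursty."""
    s = sorted(gaps)
    return s[-1] / s[len(s) // 2]

# The power-law sample's extreme gaps dwarf its median; the Poisson
# sample's do not, despite both containing 10,000 events.
print(f"Poisson   max/median gap: {burstiness(poisson_gaps):10.1f}")
print(f"power law max/median gap: {burstiness(bursty_gaps):10.1f}")
```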

At first sight, we would not expect our e-mail patterns to show any similarities. Some people send only a few e-mails a week; others close to a hundred each day; some peek at their e-mail only once a day. Still others practically sleep with their computers. This is why it was surprising that, when it comes to e-mail, everybody appears to follow exactly the same pattern. Indeed, looking at the times between e-mails, no-one obeyed a Poisson distribution. Instead, no matter the person, their behaviour followed what we call a “power law”.

Once power laws are present, bursts are unavoidable. Indeed, a power law predicts that most e-mails are sent within a few minutes of one another, appearing as a burst of activity in our e-mailing pattern. But the power law also foresees hours or even days of e-mail silence. In the end, the patterns of our e-mailing follow an inner harmony, where short and long delays mix into a precise law – a law that you probably never suspected you were subject to, that you never made an effort to obey, and that you most likely never even knew existed in the first place.

By mid-2004 my colleagues and I had observed a series of puzzling similarities between events of quite different natures, seeing bursts and power laws each time we monitored human behaviour. There were unexplained similarities between patterns of e-mail, Web browsing and sending jobs to a printer that demanded an explanation. For the rest of the summer I kept telling myself that there had to be a simple explanation to all of this. But my relentless probing produced nothing.

On the evening of 2 July 2004 I went to bed early, knowing that I would be getting up before dawn the next morning to travel to a conference in Bangalore. Yet, the excitement of my first trip to India kept me awake. And in that precarious twilight zone, not yet asleep but not really alert, I was suddenly struck by a simple explanation for the omnipresent bursts.

Setting priorities

The next day I returned to the musings of the night before. My twilight-zone idea had a simple premise: we always have a number of things to do. Some people use to-do lists to keep track of their responsibilities, while others are perfectly comfortable keeping them in their heads. But no matter how you track your tasks, you always need to decide which one to execute next. The question is, how do we do that?

One possibility is to always focus on the task that arrived first on your list. Waitresses, pizza-delivery boys, call-centre operators – just about everybody in the service industry practises this first-in-first-out strategy. Most of us would feel a deep sense of injustice if our bank, doctor or supermarket gave priority to the customer who arrived after us. Another possibility is to do things in their order of importance. In other words, to prioritize.

The idea that I had that night in July was deceptively simple: burstiness may be rooted in the process of setting priorities. Consider, for example, Izabella, who has six tasks on her priority list. She selects the one with the highest priority and resolves it. At that point, she may remember another task and add that to her list. During the day she may repeat this process over and over again, always focusing on the task of highest priority first and replacing it with some other job once it is resolved. The question I want to answer is this: if one of the tasks on Izabella’s list is to return your call, how long will you have to wait for your phone to ring?

If Izabella chooses the first-in-first-out protocol, then you will have to wait until she performs all of the tasks that cropped up before you. At least you know that you will be treated fairly – all the other items on her list will wait for roughly the same amount of time.

But if Izabella picks the tasks in order of importance, fairness is suddenly obsolete. If she assigns your message a high priority, then your phone will ring shortly. If, however, Izabella decides that returning your call is not at the top of her list, then you will have to wait until she resolves all tasks of greater urgency. As high-priority tasks could be added to her list at any time, you may well have to wait another day before hearing from her. Or a week. Or she may never call you back.

Once I put this priority model into a computer program, to my pleasant surprise the much-desired power law – the mathematical signature of bursts – appeared on my screen. The model consisted of a list of tasks, each randomly assigned a priority. Then I repeated the following steps over and over: (a) I selected the highest priority task and removed it from the list, mimicking the real habit I have when I execute a task; (b) I replaced the executed task with a new one, randomly assigning it a priority, mimicking the fact that I do not know the importance of the next task that lands on my list. The question I asked was, how long will a task stay on my list before it is executed?

As high-priority tasks are promptly resolved, the list becomes largely populated with low-priority tasks. This means that new tasks often supersede the many low-priority tasks stuck at the bottom of the list and so are executed immediately. Therefore, tasks with low priority are in for a long wait. After I measured how long each task waited on the list before being executed, I found the power law observed earlier in each of the e-mail, Web-browsing and printing data sets. The model’s message was simple: if we set priorities, our response times become rather uneven, which means that most tasks are promptly executed while a few have to wait almost forever on our list. I once saw a New Yorker cartoon that captured this sentiment: a businessman checks his diary and calmly says into his telephone “No, Thursday’s out. How about never – is never good for you?”.
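
The steps described above translate almost directly into code. Here is a minimal sketch of the fixed-length priority model (the list length, uniform priorities and step count are illustrative choices, not the parameters of the original study):

```python
import random

def simulate_priority_queue(list_length=10, steps=100_000, seed=42):
    """Fixed-length priority model: at each step, execute the
    highest-priority task and replace it with a new random-priority one."""
    rng = random.Random(seed)
    tasks = [(rng.random(), 0) for _ in range(list_length)]  # (priority, step added)
    waits = []
    for step in range(1, steps + 1):
        # (a) execute the highest-priority task, recording how long it waited
        i = max(range(len(tasks)), key=lambda k: tasks[k][0])
        _, added = tasks[i]
        waits.append(step - added)
        # (b) replace it with a fresh task of unknown (random) priority
        tasks[i] = (rng.random(), step)
    return waits

waits = simulate_priority_queue()
print(min(waits), sorted(waits)[len(waits) // 2], max(waits))
```

Most recorded waiting times are a single step, while a handful stretch orders of magnitude longer; a histogram of `waits` on log-log axes reveals the power-law tail described in the text.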

Snail mail versus e-mail

After the publication of the priority model, I began to wonder whether the bursty pattern is a by-product of the electronic age or whether it perhaps reveals some deeper truth about human activity. All the examples we had studied so far – from e-mail to Web browsing – were somehow connected to the computer, raising the logical question of whether bursts preceded e-mail.

I soon realized that the letter-based correspondence of famous intellectuals, carefully collected by their devoted disciples, might hold the answer to this question. An online search pointed me towards the Albert Einstein Archives, a project based at the Hebrew University of Jerusalem that seeks to catalogue Einstein’s entire correspondence. Einstein left behind about 14,500 letters he had written and more than 16,000 he had received. This averages out to more than one letter written per day, weekends included, over the course of his adult life. Impressive though this is, it was not the volume of his correspondence that piqued my interest. In the spirit of the priority model, I wanted to find out how long Einstein waited before he responded to the letters he received.

It was João Gama Oliveira, a Portuguese physics student visiting my research group on a fellowship, who first studied the data. His analysis showed that Einstein’s response pattern was not too different from our e-mail patterns: he replied to most letters immediately – that is, within one or two days. Some letters, however, waited months, sometimes years, on his desk before he took the time to pen a response. Astonishingly, Oliveira’s results indicated that the distribution of Einstein’s response times followed a power law, similar to the response times we had observed earlier for e-mail.

But it was not just Einstein’s correspondence that followed the pattern. From the Darwin Correspondence Project, hosted by the University of Cambridge in the UK, we obtained a full record of Charles Darwin’s letters. Given that the meticulous Darwin kept copies of every letter he either wrote or received, his record was particularly accurate. Its analysis indicated that he, too, responded immediately to most letters and only delayed addressing a very few. Overall, Darwin’s response times followed precisely the same power law as Einstein’s.

The fact that the records of two intellectuals of different generations (Einstein was born three years before Darwin’s death) living in different countries follow the same law implied that we were not looking at the idiosyncrasies of a particular person but at the basic pattern of pre-electronic communication. It also meant that it is completely irrelevant whether our messages travel on the Internet at the speed of light or are carried slowly across the ocean by steamship. What matters is that, regardless of the era, we always face a shortage of time. We are forced to set priorities – even the greats, Einstein and Darwin, are not exempt – from which delays, bursts and power laws are bound to emerge.

Yet a peculiar difference remains between e-mail and letter-based correspondence: the exponent, the key parameter that characterizes any power law, is different for the two data sets. It turns out that in the power law P(τ) ~ τ^(–δ), which describes the probability P(τ) that a message waits τ days for a response, the exponent is δ = 1 for e-mail and δ = 3/2 for both Einstein’s and Darwin’s correspondence (see figure 1). This difference means that there are fewer long delays in e-mail correspondence than in letter writing – not a particularly surprising finding given the immediacy we often associate with electronic communication.

The truth, however, is that the difference cannot be attributed to the different times it takes for letters and e-mails to be delivered. Research over many decades has told us that the exponent characterizing a power law cannot have arbitrary values but is uniquely linked to the mechanism behind the underlying process. That is, if a power law describes two phenomena but the exponents are different, then there must be some fundamental difference between the mechanisms governing the two systems. Therefore, the discrepancy meant that a new model was needed if we hoped to account for the letter-writing patterns of Einstein and Darwin.
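
For the curious, the exponent of a power law can be estimated directly from waiting-time data. Here is a small self-contained sketch using the standard maximum-likelihood estimator on synthetic data (note that a pure δ = 1 power law is not normalizable without a cutoff, so the demonstration uses δ = 3/2; the sample size and seed are illustrative choices):

```python
import math
import random

def sample_power_law(delta, xmin=1.0, n=50_000, seed=1):
    """Draw samples from P(x) ~ x^(-delta), x >= xmin, by inverse transform."""
    rng = random.Random(seed)
    return [xmin * (1.0 - rng.random()) ** (-1.0 / (delta - 1.0)) for _ in range(n)]

def mle_exponent(xs, xmin=1.0):
    """Maximum-likelihood estimate of the exponent: 1 + n / sum(ln(x/xmin))."""
    return 1.0 + len(xs) / sum(math.log(x / xmin) for x in xs)

xs = sample_power_law(delta=1.5)
print(round(mle_exponent(xs), 2))  # recovers a value close to 1.5
```

Two measured exponents as different as 1 and 3/2 lie far outside the statistical error of such an estimate for samples of any reasonable size, which is why the discrepancy demanded a mechanistic explanation rather than being dismissed as noise.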

A stack of letters

In our priority model, we assumed that as soon as the task of highest priority was resolved, a new task of random priority took its place. To derive an accurate model of Einstein’s correspondence, we needed to modify the model to incorporate the peculiarities of letter-based communication. With snail mail, a certain number of letters arrives by post each day, joining the pile of letters already waiting for a reply. Whenever time permitted, Einstein chose from the pile those letters he considered most important and replied to them, keeping the rest for another day. So a model of Einstein’s correspondence has two simple ingredients. The first is the rate at which letters, each assigned some priority, landed on Einstein’s desk – the arrival rate. (These letters increased the length of his queue, in contrast to our previous e-mail model, which had a fixed queue length.) The second is the rate at which he picked the highest-priority letter and responded to it – the response rate.

If Einstein’s response rate was faster than the arrival rate of the letters, then his desk was mostly clear, as he was able to reply to most letters as soon as they arrived. In this “subcritical” regime, the model indicates that Einstein’s response times follow an exponential distribution, devoid of long delays and clearly not in accordance with the observed power law.

If, however, Einstein responded at a slower pace than the rate at which the letters arrived, then the pile on his desk towered higher with each passing day. Interestingly, it is only in this “supercritical” regime that the response times follow a power law. Thus, burstiness was a sign that Einstein was overwhelmed and forced to ignore an increasing fraction of the letters he received.
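
A minimal sketch of this two-rate model makes the two regimes easy to see (treating the arrival and response rates as per-day probabilities is an illustrative simplification):

```python
import heapq
import random

def simulate_letters(arrival_prob, response_prob, days=200_000, seed=3):
    """Each day a letter arrives with probability arrival_prob and joins the
    pile with a random priority; independently, with probability
    response_prob the highest-priority letter in the pile is answered."""
    rng = random.Random(seed)
    pile = []   # min-heap of (-priority, day_received)
    waits = []  # response times of answered letters
    for day in range(days):
        if rng.random() < arrival_prob:
            heapq.heappush(pile, (-rng.random(), day))
        if pile and rng.random() < response_prob:
            _, received = heapq.heappop(pile)
            waits.append(day - received)
    return waits, len(pile)

# Subcritical: replies outpace arrivals, so the pile stays small.
sub_waits, sub_pile = simulate_letters(arrival_prob=0.3, response_prob=0.7)
# Supercritical: arrivals outpace replies, so the pile grows without bound.
sup_waits, sup_pile = simulate_letters(arrival_prob=0.7, response_prob=0.3)
print(sub_pile, sup_pile)
```

In the subcritical run the pile hovers near zero and the recorded response times fall off quickly; in the supercritical run the pile grows roughly linearly with time and the response times develop the heavy tail described above.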

Einstein and Kaluza

The phenomenon of burstiness, and its consequences, is clearly illustrated by Einstein’s correspondence with the theorist Theodor Kaluza. In the spring of 1919 Einstein received a letter from Kaluza, then unknown, who was still labouring to repeat the creative burst he had enjoyed in 1908. Back then, as a student of David Hilbert and Hermann Minkowski, he had written his first and only research paper. Now, a decade later and at the age of 34, he was still on the lowest rung of the academic ladder at the University of Königsberg, Germany, barely supporting his wife and child on a practically non-existent salary. When he finally finished his second paper, a rush of boldness prompted him to send it to Einstein, eliciting this encouraging reply on 21 April 1919: “The thought that electric fields are truncated…has often preoccupied me as well. But the idea of achieving this with a five-dimensional cylindrical world never occurred to me and may well be altogether new.”

Kaluza’s letter to the scientific great was more than a mere courtesy – he was asking for Einstein’s help to publish the manuscript. In those days, famous scientists like Einstein were the gatekeepers to the better scientific journals. If Einstein found the paper of interest, then he could present it at the Berlin Academy’s meeting, after which it could be published in the academy’s proceedings. To Kaluza’s joy, Einstein was willing.

Then, a week later, on 28 April, Einstein wrote a second letter to Kaluza. While encouraging, Einstein remained cautious. He would open the academy’s doors for Kaluza on one condition: “I could present a shortened version before the academy only when the above question of the geodesic lines is cleared up. You cannot hold this against me, for when I present the paper, I attach my name to it.”

Imagine the feelings elicited in Kaluza after having received two letters in as many weeks from the man who was already considered the most influential physicist alive. Both letters were encouraging, and the very fact that Einstein had bothered to write twice indicated that he was genuinely taken with the unknown physicist’s idea. But the letters were a mixed blessing and eventually prevented the paper’s publication for years.

In his response of 1 May 1919, Kaluza was quick to dispel Einstein’s concerns, prompting a further letter from Einstein on 5 May: “Dear Colleague, I am very willing to present an excerpt of your paper before the Academy for the Sitzungsberichte. Also, I would like to advise you to publish the manuscript sent to me in a journal as well, for ex[ample] in the Mathematische Zeitschrift or in the Annalen der Physik. I shall be glad to submit it in your name whenever you wish and write a few words of recommendation for it.”

What caused Einstein to change his mind so suddenly? His letter offers a hint: “I now believe that, from the point of view of realistic experiments, your theory has nothing to fear.” Kaluza could not have hoped for a better outcome. Einstein, well known for his neverending quest to confront all mathematical developments with reality, had accepted his conclusion that our world is five-dimensional.

Heartened by Einstein’s encouragement, Kaluza quickly made the requested changes and mailed back a shorter version of the paper, appropriate for presentation at the academy. His case looked really good now – he had received four letters in less than four weeks, indicating that the famous physicist had assigned him an unusually high priority. But then, in a letter dated 14 May 1919, Einstein unexpectedly gave him the cold shoulder. “Highly esteemed Colleague,” he wrote, “I have received your manuscript for the academy. Now, however, upon more careful reflection about the consequences of your interpretation, I did hit upon another difficulty, which I have been unable to resolve.” In a four-point derivation, Einstein proceeded to detail his concerns, concluding that “Perhaps you will find a way out of this. In any case, I am waiting on the submission of your paper until we have come to some resolution about this point.”

Kaluza made one final attempt to persuade Einstein of the validity of his approach, even daring to point out an error in Einstein’s arguments. This was met by a decisive reply from Einstein on 29 May 1919 in which he courteously told Kaluza that while he could not support his ideas due to continued reservations, he would gladly put in a good word should he wish to publish his findings so far. Despite the polite tone, the rejection was clear, and we know of no more exchanges between Einstein and Kaluza either that year or the next – and not because Kaluza’s paper was published. On the contrary, Einstein’s reservations sent an unmistakable message to the young scientist: the fifth dimension was a blunder, either premature or a dead end not worth further attention. After a furious burst of communication had ricocheted between the two men for a full month, a years-long silence followed.

Priority’s consequences

On 22 September 1919, four months after sending his last letter to Kaluza, Einstein shot to fame: the theory of general relativity that he had proposed back in 1915 was finally confirmed by Arthur Stanley Eddington’s observation that light is bent as it passes by the Sun. Within days, Einstein’s name was on the front page of newspapers and magazines all over the world, and the Einstein myth was born. He turned into a media superstar and an icon.

Einstein’s sudden fame had drastic consequences for his correspondence. In 1919 he received 252 letters and wrote 239, his life still in its subcritical phase, which allowed him to reply to most letters with little delay. The next year he wrote more letters than in any previous year, yet of the flood of 519 he received, we have a record of him managing to respond to only 331 – a pace that, though formidable, was insufficient to keep on top of his vast correspondence. By 1920 Einstein had moved into the supercritical regime, and he never recovered. The peak came in 1953, two years before his death, when he received 832 letters and responded to 476 of them. As Einstein’s correspondence exploded, his scientific output shrank. He became overwhelmed, burdened by delays. And with that his response times turned bursty and began to follow a power law, just as our e-mail correspondence does today.

Despite his brief correspondence with Einstein, Kaluza’s life improved little in the years that followed. He continued to work in academia but was unable to find a tenured position given his lack of publications. Then on 14 October 1921 he suddenly received a surprising postcard from Einstein: “Highly Esteemed Dr Kaluza, I have second thoughts about having you held back from publishing your idea about the unification of gravitation and electricity two years ago. Your approach certainly appears to have much more to offer than [Hermann] Weyl’s. If you wish, I will present your paper to the academy.” And he did, on 21 December 1921, two-and-a-half years after first learning of Kaluza’s idea.

Why this sudden reversal? Had Einstein simply been distracted by his triumph, forgetting for years about Kaluza’s extra dimension? No. The truth is that between 1919 and 1921 Einstein focused on pursuing other ideas to which he had assigned higher priorities. He had been searching furiously for a way to codify his version of the “theory of everything”, following a direction originally proposed by Weyl. It was not until October 1921, when Einstein lost hope of success along those lines, that he returned to Kaluza’s still-unpublished paper and came to an embarrassing conclusion: he could not continue blocking the publication of Kaluza’s proposal while attempting to write his own paper inspired by it.

By the time Kaluza’s paper was eventually published, it was too late for its author. Discouraged by Einstein’s rejection, Kaluza had left physics and started anew in mathematics. But the professional switch eventually paid off – in 1929 he was offered a mathematics professorship at Kiel University and in 1935 became professor at Göttingen, one of the most prestigious universities of the time. Eventually, Kaluza’s multidimensional universe was revived in the 1980s and became the foundation for string theory, whose proponents have no fear of five-, 11- or many-more-dimensional spaces.

Sadly, Kaluza did not live to see the renaissance of his work, as he died in 1954. Might he have turned into one of the physics greats if Einstein had allowed him to publish his breakthrough early on? We will never know. But one thing is clear from Kaluza and Einstein’s brief encounter: prioritizing is not without its consequences, and it led to the demise of a young physicist’s career when his theories were ignored by the very man who could have got them published.

Discovering dark matter

The discovery of dark matter – the mysterious, invisible substance believed to make up more than 80% of the matter in the universe – would be a key moment in 21st-century physics. Hardly surprising, then, that so much attention was given to a paper written last year by the members of the Cryogenic Dark Matter Search (CDMS-II) detailing their evidence for dark matter (arXiv:0912.3592v1). The CDMS-II collaboration is looking for evidence of collisions between weakly interacting massive particles (or WIMPs) – a leading candidate for dark matter – and nuclei of germanium in a detector in a mine in Soudan, Minnesota. The detector is located 700 m underground to minimize background noise from neutrons produced in cosmic-ray collisions, which can mimic real WIMP signals.

The CDMS-II collaboration strongly promoted the paper, which it submitted to arXiv on 18 December 2009. Five days earlier, the group had circulated an e-mail flagging the upcoming paper and announcing a pair of talks that it had scheduled for 17 December. The talks – one at Fermilab and the other at the SLAC National Accelerator Laboratory – were arranged to start simultaneously, and one was broadcast live over the Internet. Given the unusual lengths to which the CDMS-II collaboration was going to create a record of who said what and when, it was – an outside observer might conclude – about to stake a claim for discovering dark matter.

“All the physics blogosphere is abuzz,” reported the physics blog Cosmic Variance in early December as the big day approached. A film crew making a documentary about dark matter recorded the event, which was also reported by the mainstream media, including the New York Times. “The excitement in the air is palpable,” wrote Cosmic Variance blogger JoAnne Hewett an hour before the seminar started. “It looks like a signal talk,” Hewett’s colleague confided in her as the talk began.

Got sigma?

Not for long. CDMS-II spokesperson Jodi Cooley revealed that the researchers had found only two events, compared with 0.5 expected from background, yielding a confidence level of about 1.3σ, or 21%. Physicists normally expect more – at least 3σ, or 99.73%. “The results cannot be interpreted as significant evidence for WIMP interactions,” Cooley admitted in her talk, “but we cannot reject the possibility that either event is signal.”

The blogosphere crashed. Many felt betrayed. “They should have brought in Geraldo Rivera to open the signal box,” laughed one, referring to the former US talk-show host known for his melodramatic style. Some bloggers suggested that the collaboration had hyped its results to secure funding for a planned upgrade of its detector. Others thought it did so to stake a discovery claim given that XENON100 – a more sensitive xenon-based detector in the Gran Sasso lab in Italy – had already begun to report results.

The scale of the build-up and let-down was itself a “signal” that something unusual was happening. The CDMS-II episode, it seemed to me, could tell us a lot about the use of statistics in science.

“Big deal,” said one physics colleague to whom I excitedly mentioned my idea. “It’s part of the business. We grapple with this kind of thing every day.” Yet to philosophers and historians, “the business” contains interesting features that physicists usually take for granted. What elements shape the role of statistics in discovery? Do researchers in different disciplines seek different levels of confidence in “a discovery”? Does an astronomer, say, want firmer evidence than a psychologist? Does suspicion of experimentalists and their methods sometimes inflate the acceptable confidence level?

The critical point

In January I devoted this column to what I saw as ambiguities in the discovery of dark energy, announcements of which were published by two different groups at slightly different dates in 1997/1998. I used the episode to examine the connection between publication date and discovery, deciding that sometimes “discoveries are not simple, unitary events made by a specific person at a specific place or time”.

This month I would like to follow a suggestion by Adam Riess, a key member of one of the two dark-energy discovery groups. Historians usually discuss credit after a discovery is made, Riess pointed out to me. But because WIMP search results will keep rolling in from different sources over the next few years, historians have a unique opportunity to assess a discovery as it happens, allowing them to test models and assumptions.

Suppose, for instance, that XENON100, super-CDMS or some other dark-matter search turns up evidence for WIMPs at a confidence level of 2σ, or 95%; will that count as a discovery? What about 3σ (99.73%), or 5σ (99.9999%)? If the latter, will the scientific community consider the findings at lower confidence levels to warrant a partial claim to having seen WIMPs? And on what grounds could it plausibly refuse?
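
For reference, the sigma values quoted here correspond to two-sided Gaussian confidence levels, which can be checked with the error function (a quick sketch; conventions vary, and particle physicists often quote one-sided p-values instead):

```python
import math

def sigma_to_confidence(n_sigma):
    """Two-sided Gaussian confidence level for an n-sigma result."""
    return math.erf(n_sigma / math.sqrt(2.0))

for n in (2, 3, 5):
    print(f"{n} sigma -> {sigma_to_confidence(n):.5%}")
# 2 sigma -> ~95.45%, 3 sigma -> ~99.73%, 5 sigma -> ~99.99994%
```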

Then flip it around: suppose a 5σ result rules out WIMPs consistent with the previous findings – will those earlier results then be viewed as statistical flukes, incorrect claims or errors? And if so, on what grounds?

Please send me your thoughts, and I will write a follow-up column about them. If a finding of better than 5σ for WIMPs is finally established, then we will be able to compare our results with the judgment of the scientific community.

Finally, I also welcome your examples of findings that looked like promising discoveries before vanishing as more statistics were acquired – as well as non-findings that grew into findings with more statistics. Such cases have interesting implications about the nature of discovery and about who, ultimately, deserves the credit.

• What result do you think would constitute a “discovery” of dark matter? How then should we view the CDMS-II findings? Do you know of “discoveries” that grew into non-discoveries with more statistics, or vice-versa? Send your responses to Robert P Crease at the e-mail below

• To find out more about the search for dark matter, don’t miss the following exclusive video interviews.

Deep exploits — the search for dark matter

Going underground — life inside the Boulby lab

Copyright © 2026 by IOP Publishing Ltd and individual contributors