
Quirks of memory

In his declining years, Albert Einstein cut a saintly figure, moved to anger only when clamorous journalists, photographers and quote-hunters tried to disrupt his working routine. When they knocked on the door of his home in Princeton, he almost always refused to meet them, though his colleague Nandor Balazs once told me that the world’s most famous scientist could not resist talking with a young journalist who had travelled 10 000 miles from India to ask him a few questions about religion. As he got up from the kitchen table and prepared to meet her, he sighed to Balazs, “Oh well, I suppose I’d better go and play God again.”

Einstein was, however, a soft touch as a correspondent, and we now know that he spent many hours patiently replying to letters from people who tried their luck at getting him to put pen to paper. In 1953, two years before his death, one of his correspondents was John Moffat, then a 21-year-old physicist at the Niels Bohr Institute in Copenhagen. Moffat was working on unified field theory and his colleagues were giving him more derision than encouragement, so he wrote to Einstein for advice.

Moffat was surprised three weeks later when Einstein replied sympathetically, noting that “…every individual and every study circle has to retain its own way of thinking, if he does not want to get lost in the maze of possibilities”. This was to be the first of several courteous and helpful letters the Sage of Princeton wrote to Moffat, and now, some five decades later, the latter has understandably made it the centrepiece of his memoir Einstein Wrote Back.

Elsewhere in the book, Moffat – now an affiliate member of the Perimeter Institute in Canada – introduces us to the gallery of theoretical physicists he has met during his long career working in relativity and particle physics. These include Abdus Salam, Murray Gell-Mann, Robert Oppenheimer, Kurt Symanzik, Steven Weinberg and even an infant Edward Witten, whose father, Louis Witten, was leading the Research Institute for Advanced Study in Baltimore.

While exercising the memoirist’s prerogative of name-dropping on almost every page, Moffat simultaneously gives the impression of being an outsider. Although he has made the acquaintance of many stellar physicists, he has apparently never worked closely with them or got to know any of them intimately. He also has his share of prejudices. At one point he comments – I suspect, revealingly – on the modern, “draconian” system of peer review, without actually saying that he has often been on the wrong end of it. He wonders how the young Einstein “could ever have succeeded in publishing many of his iconoclastic papers” under such a system. The truth is that those papers had to pass muster with chief editor Max Planck, a theorist of generous spirit but conservative inclination.

Among the character portraits painted by Moffat, that of the brilliant if egoistic cosmologist Fred Hoyle is especially amusing. When Moffat visited him at St John’s College, Cambridge, he found Hoyle sitting at his desk, with a large oil painting of himself on the wall behind. Reflecting on the juxtaposition, Moffat wonders whether all of Hoyle’s students had to sit through his tutorials and “contemplate Hoyle’s blunt Yorkshire features in duplicate”. Later, Moffat had a similar experience in Hoyle’s cottage, where the great cosmologist intellectually disembowelled a divinity student over tea and cakes. The poor student was sitting beneath another oil painting of Hoyle hung over the fireplace.

Moffat’s stories are always entertaining, though some of them seem to have been a tad embellished in repeated re-tellings. For example, Moffat repeats an anecdote from Nathan Rosen, who recalled that when Einstein received a rejection letter from Physical Review, he “leapt out of his chair and threw the envelope with the letter and the manuscript into his trash can, which he kicked loudly around his office”. This story does lend support to Moffat’s view that Einstein’s work on unified theory led him to be “ostracized by his physicist peers”, but I suspect Rosen was exaggerating somewhat. Like all the other anecdotes in the book, this one is not referenced.

The picture Moffat paints of Paul Dirac mostly rings true, though, especially when he describes an excruciating scene in which Dirac sat at home in complete silence while Moffat waited in vain for conversation to begin. It is entirely believable, too, that Dirac’s wife told her husband that he was “so stupid” that he couldn’t “even put on [his] own trousers”. However, I find it hard to picture Dirac uttering the chirpy greeting to Pauli quoted here: “I have been reading your recent work on quantum field theory, Wolfgang.” Nor do I find it easy to imagine him spending afternoons in the garage “repairing his old Rolls-Royce”. Dirac had no taste for car maintenance or any other mechanical chore and never, according to his family, owned such a fancy car.

Pauli, a brilliant if charmless critic of everyone’s new ideas, including his own, makes several appearances here, always as a charismatic figure, though devoid of grace. At one point, Moffat gives us a vivid, detailed account of the Austrian theorist’s visit to Abdus Salam’s new group at Imperial College, London. This was apparently a red-letter day for the dapper Salam, who put on “what appeared to be a new, dark, three-piece suit, white shirt and St John’s College tie, with his black hair and moustache gleaming”. Moffat tells us that the visit took place in 1959, so it appears that Salam and his students were welcoming a ghost – Pauli having died the previous year.

In Einstein Wrote Back, Moffat has given physicists a wealth of new stories about several of its greatest characters. It is a pleasant read, but it would have been even better if he had submitted himself to the discipline of a tougher editor.

Energy balance points to man-made climate change

A climate model based on the “global energy balance” has provided new evidence for human-induced climate change, according to its creators. Using this simple model, researchers in Switzerland conclude that it is extremely likely (>95% probability) that at least 74% of the observed warming since 1950 has been caused by human activity.

Previously, climate scientists have used a technique called “optimal fingerprinting” to pinpoint the causes of global warming. This involves using complex models to simulate the climate response to different “forcings”. These include greenhouse gases, aerosols and ozone, as well as natural factors such as solar and volcanic variability. The relative contribution of each forcing is then assessed by a statistical comparison of the model outputs to the real-life warming pattern.

However, this method relies on the ability of climate models to accurately simulate the response patterns to each forcing, and also assumes that the responses can be scaled and added. Furthermore, changes in the energy balance of the climate system are not explicitly considered.

A conservative model

Now, Reto Knutti and Markus Huber at the Institute for Atmospheric and Climate Science in Zurich, Switzerland, have developed a model based on the simple fact that Earth’s energy must be conserved. When the Earth is in equilibrium, the thermal energy it emits is equal to the amount of energy received from the Sun. However, evidence shows that this energy balance has become disrupted, with less energy being emitted back into space. The trapped energy in the climate system thus acts to heat up our planet, causing a rise in global temperature.

The researchers used their energy-balance model to investigate the cause and magnitude of this warming. The model, driven by observational records of climate forcings, surface temperature and ocean heat uptake, was run many thousands of times with different parameter combinations. The combinations that best matched the observations were then fed through the model a second time in order to simulate the climate response to each individual forcing.
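
For readers who want a feel for the approach, the core idea can be captured in a few lines of code. The sketch below is a deliberately minimal zero-dimensional energy-balance model – not the model Knutti and Huber actually ran – and every number in it (the heat capacity, the feedback parameter and the toy forcing ramp) is an illustrative, textbook-style value rather than one taken from their study.

```python
# Minimal zero-dimensional energy-balance sketch (illustrative only; the
# published model and forcing data are far more detailed).
#
#   C dT/dt = F(t) - lambda * T(t)
#
# where T is the global-mean temperature anomaly, F(t) the net radiative
# forcing, lambda the climate feedback parameter and C an effective heat
# capacity dominated by ocean heat uptake.

import numpy as np

C_EFF = 8.0e8      # effective heat capacity, J m^-2 K^-1 (~200 m ocean layer)
LAMBDA = 1.2       # feedback parameter, W m^-2 K^-1 (illustrative)
DT = 3.15e7        # time step: one year, in seconds

def run_ebm(forcing_w_per_m2):
    """Integrate the energy-balance equation for a yearly forcing series."""
    temps = [0.0]
    for f in forcing_w_per_m2:
        dT = (f - LAMBDA * temps[-1]) / C_EFF * DT
        temps.append(temps[-1] + dT)
    return np.array(temps[1:])

# Toy forcing: greenhouse gases ramping up, partly offset by aerosols.
years = np.arange(1950, 2010)
ghg = 0.04 * (years - 1950)          # W m^-2, illustrative ramp
aerosol = -0.5 * ghg                 # roughly half the GHG forcing offset
print("Warming since 1950: %.2f K" % run_ebm(ghg + aerosol)[-1])
```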

The model predicts a global temperature increase of 0.51 °C since the 1950s, similar to the observed estimate of 0.55 °C. Greenhouse gases provide the largest contribution to this warming, responsible for a temperature increase of 0.85 °C, with approximately half of this greenhouse warming offset by the negative forcing of aerosols. On the other hand, the contribution of solar and volcanic forcing was close to zero.

Different but similar

The model was also used to simulate the future evolution of the climate system. A temperature increase of 1.29 °C was found for 2050–2059 compared with the 2000s, almost entirely due to greenhouse gases, with carbon dioxide being the dominant contributor.

These findings are consistent with the latest report by the Intergovernmental Panel on Climate Change (IPCC), as well as other studies that use the optimal fingerprinting approach. The researchers believe that their energy-balance model can be used in tandem with alternative climate-attribution methods.

“We don’t criticize optimal fingerprinting – it is a very powerful technique – but to almost all people it’s a black box,” says Reto Knutti. “It’s statistically complex, makes a number of assumptions and is not physically intuitive. At least to a physicist, conservation of energy is fundamental. The fact that our results are entirely consistent with optimal fingerprinting is an argument for even higher confidence in human-induced climate change.”

“Independent evidence”

Paul Williams, a Royal Society University Research Fellow in climate modelling at the University of Reading, UK, agrees that the model is a useful tool. “Even the most hardened climate sceptic with a basic knowledge of physics could not possibly object to the application of energy conservation to the climate problem,” he says. “The energy-balance method provides further independent evidence for the anthropogenic origin of the majority of 20th-century global climate change.”

This study comes less than two months after the Berkeley Earth Surface Temperature project announced its preliminary findings. Motivated by criticisms of the current temperature datasets, this independent study is building a historical temperature record from scratch, using measurements from more than 39,000 stations. Its first results, for land-surface temperatures only, reveal a global warming of 0.91 °C over the past 50 years, in close agreement with previous observational records. And now, the study by Knutti and Huber provides new evidence that this warming is due to human activity.

The research is described in Nature Geoscience 10.1038/NGEO1327.

String-theory calculations describe ‘birth of the universe’

Researchers in Japan have developed what may be the first string-theory model with a natural mechanism for explaining why our universe would seem to exist in three spatial dimensions if it actually has six more. According to their model, only three of the nine dimensions started to grow at the beginning of the universe, accounting both for the universe’s continuing expansion and for its apparently three-dimensional nature.

String theory is a potential “theory of everything”, uniting all matter and forces in a single theoretical framework, which describes the fundamental level of the universe in terms of vibrating strings rather than particles. Although the framework can naturally incorporate gravity even on the subatomic level, it implies that the universe has some strange properties, such as nine or ten spatial dimensions. String theorists have approached this problem by finding ways to “compactify” six or seven of these dimensions, or shrink them down so that we wouldn’t notice them. Unfortunately, Jun Nishimura of the High Energy Accelerator Research Organization (KEK) in Tsukuba says “There are many ways to get four-dimensional space–time, and the different ways lead to different physics.” The solution is not unique enough to produce useful predictions.

These compactification schemes are studied through perturbation theory, in which all the possible ways that strings could interact are added up to describe the interaction. However, this only works if the interaction is relatively weak, with a distinct hierarchy in the likelihood of each possible interaction. If the interactions between the strings are stronger, with multiple outcomes equally likely, perturbation theory no longer works.

Matrix allows stronger interactions

Weakly interacting strings cannot describe the early universe with its high energies, densities and temperatures, so researchers have sought a way to study strings that strongly affect one another. To this end, some string theorists have tried to reformulate the theory using matrices. “The string picture emerges from matrices in the limit of infinite matrix size,” says Nishimura. Five forms of string theory can be described with perturbation theory, but only one has a complete matrix form – Type IIB. Some even speculate that the matrix Type IIB actually describes M-theory, thought to be the fundamental version of string theory that unites all five known types.

The model developed by Sang-Woo Kim of Osaka University, Nishimura, and Asato Tsuchiya of Shizuoka University describes the behaviour of strongly interacting strings in nine spatial dimensions plus time, or 10 dimensions. Unlike perturbation theory, matrix models can be numerically simulated on computers, getting around some of the notorious difficulty of string-theory calculations. Although the matrices would have to be infinitely large for a perfect model, they were restricted to sizes from 8 × 8 to 32 × 32 in the simulation. The calculations using the largest matrices took more than two months on a supercomputer, says Kim.

Physical properties of the universe appear in averages taken over hundreds or thousands of matrices. The trends that emerged from increasing the matrix size allowed the team to extrapolate how the model universe would behave if the matrices were infinite. “In our work, we focus on the size of the space as a function of time,” says Nishimura.
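
The extrapolation step itself is easy to illustrate. The sketch below fits an observable measured at several matrix sizes N against 1/N and reads off the infinite-size limit from the intercept. The numbers are invented placeholders, not the team's data; only the procedure is the point.

```python
# Sketch of a finite-size extrapolation of the kind described above.
# The 'observable' values are made-up placeholders, NOT the team's data:
# measure an observable at several matrix sizes N, then fit
# R(N) = R_inf + a/N and take the intercept as the N -> infinity estimate.

import numpy as np

N = np.array([8, 12, 16, 24, 32], dtype=float)   # matrix sizes used
R = np.array([1.42, 1.31, 1.26, 1.21, 1.18])     # hypothetical observable

# Least-squares fit of R against 1/N; the intercept is the extrapolated value.
slope, R_inf = np.polyfit(1.0 / N, R, 1)
print("Extrapolated N -> infinity value: %.3f" % R_inf)
```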

‘Birth of the universe’

The limited sizes of the matrices mean that the team cannot see much beyond the beginning of the universe in their model. From what they can tell, it starts out as a symmetric, nine-dimensional space, with each dimension measuring about 10⁻³³ cm. This is a fundamental unit of length known as the Planck length. After some passage of time, the string interactions cause the symmetry of the universe to spontaneously break, causing three of the nine dimensions to expand. The other six are left stunted at the Planck length. “The time when the symmetry is broken is the birth of the universe,” says Nishimura.

“The paper is remarkable because it suggests that there really is a mechanism for dynamically obtaining four dimensions out of a 10-dimensional matrix model,” says Harold Steinacker of the University of Vienna in Austria.

Hikaru Kawai of Kyoto University, Japan, who worked with Tsuchiya and others to propose the IIB matrix model in 1997, is also very interested in the “clear signal of four dimensional space–time”. “It would be a big step towards understanding the origin of our universe,” he says. Although he finds that the evolution of the model universe in time is too simple and different from the general theory of relativity, he says the new direction opened by the work is “worth investigating intensively”.

Will the Standard Model emerge?

The team has yet to prove that the Standard Model of particle physics emerges from its model at energies much lower than those of the very early universe studied here. If the model clears that hurdle, the team can use it to explore cosmology. Compared with perturbative models, Steinacker says, “this model should be much more predictive”.

Nishimura hopes that by improving both the model and the simulation software, the team may soon be able to investigate the inflation of the early universe or the density distribution of matter, results which could be evaluated against the density distribution of the real universe.

The research will be described in an upcoming paper in Physical Review Letters and a preprint is available at arXiv:1108.1540.

Check these collisions out

By Matin Durrani

With all those rumours flying around of possible sightings of the Higgs boson in among the proton–proton collisions at CERN’s Large Hadron Collider, you might find this video of a very different type of collision interesting.

It involves not protons but pool balls, as performed by Philadelphia-based “professional pool trick-shot artist” Steve Markle.

“The trick shots I do are an excellent showing of defining the laws of physics,” Markle claims.

And if you think pool balls behave in a pretty predictable way according to the rules of classical physics, well yes they do, but it’s still surprising to see what some good old-fashioned spin can do. Take a look, for example, at 3.46 min, when Markle manages to bend a pool ball in a curve through an entire 90° angle.

And if you want to see Markle in action for real, he’s due to be performing at the Artistic Pool World Championship (yes, there is such a thing) in Oaks, Oklahoma next March.

As for whether the Higgs is going to show up at CERN, you’ve now got just a week to wait. In the meantime, these pool-ball collisions are sure to keep you amused.

Cocktail physics

Cocktail glass containing a purple liquid, which is emitting dramatic white, smoky wisps

We are in the midst of a culinary revolution, as high-end chefs around the world exploit scientific knowledge and technological advances to create spectacular dishes. Ferran Adrià, known for his world-acclaimed restaurant El Bulli in Catalonia, has pioneered the use of hydrocolloids to create yogurt spheres, carrot foam and other novel foods. Other chefs, such as Heston Blumenthal at the Fat Duck in Bray, UK, Grant Achatz at Alinea, Chicago, and Wylie Dufresne at wd~50, New York, are exploring science-based techniques, including the use of liquid nitrogen, enzymes and controlled temperature baths, to create remarkable juxtapositions of new flavours and unexpected textures.

The same trend is happening, in parallel, with cocktails. For years bartenders have relied on trial and error to refine recipes, but now the same techniques that fuelled the culinary revolution are allowing a more systematic approach to developing new drinks. Tools and techniques borrowed from research laboratories in physics and chemistry, such as rotary evaporators, thermocouples and centrifuges, are helping bartenders to put their innovative drinks ideas into practice. Concepts from thermodynamics as well as the physics of colloids, gels and other forms of “soft matter” can help explain the flavour, appearance and “mouthfeel” of these beverages. So get your cocktail shakers and bar spoons ready as we take you through what you need to know to create cocktails that look, taste and feel fantastic.

Full of flavour

Be it in beer, wine or spirits, the physical properties of ethanol, especially its solubility and volatility, help to deliver flavours that are impossible to achieve using water alone. What we think of as “flavour” actually has two main components: taste and aroma. As food-science author Harold McGee puts it, “Tastes provide the foundation of flavour, and aromas provide the tremendous variety.” Although we can perceive just five basic tastes on the tongue (sweet, sour, salty, bitter and savoury), there are thousands of aromas that we can sense through olfactory receptors in the nose – be they the caramel notes of rum or the oaky smell of bourbon.

Alcohol is far more effective than water at delivering these aromatic components, since typically they are not especially water-soluble. Water molecules are polar and so prefer to be near other polar molecules to minimize their interaction energy. This encourages nonpolar molecules, such as the aromatics, to leave the liquid phase and vaporize into the surrounding air, where they contribute to the aroma of the drink. The presence of ethanol mediates this polar/non-polar interaction and allows high concentrations of aromatics to remain in an aqueous solution. For this reason, ethanol is used to extract and deliver flavours from a range of sources, including flowers, spices, nuts, fruits and herbs.

Distilled alcoholic liquids, called spirits, are the essential component of any cocktail. Naturally fermented alcoholic beverages, such as beer and wine, rarely exceed about 20% ethanol by volume, since higher levels are toxic to most of the strains of yeast that produce them. Higher concentrations must therefore be reached through distillation, in which the fermented beverage is heated to preferentially extract the ethanol, which has a lower boiling point than water. The plant material used during fermentation, such as molasses for rum or agave for tequila, gives an intense flavour to the final distilled beverage. Additional plant materials supplied during or after distillation, such as the juniper berries used for making gin, also contribute to the flavour. Because of the high concentration of aromatic molecules extracted from the plants during the production process, spirits are some of the most intensely flavoured foods. Indeed, only a few drops of Chartreuse, a French liqueur made using nearly 130 herbal extracts, can entirely change a cocktail’s flavour.

Figure 1: A rotary evaporator

Distillation has been used for thousands of years to create spirits, going as far back as Mesopotamia and ancient China, but continues to be improved through applications of scientific knowledge. For example, some bartenders, such as the award-winning Tony Conigliaro of London bar 69 Colebrooke Row, are experimenting with a device commonly found in the science lab – the rotary evaporator (figure 1). This device extracts aroma molecules that would otherwise be destroyed by the higher temperatures in traditional distillation techniques. The rotary evaporator lowers the pressure inside a rotating container holding the liquid to be distilled, causing the more volatile components to evaporate and leaving behind the undesirable water, sugar, pigments and other large molecules. A condensing coil uses a coolant to turn the vapours back into a liquid – the final intensely flavoured product – which is collected in a separate flask. A habañero liqueur is one illustrative example: the capsaicin that makes the chilli taste so hot is non-volatile, so only the fruity and floral compounds end up in the distillate, yielding a liqueur that retains all the flavour of the chillies but without any nasty burn.
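
The reason reduced pressure keeps delicate aromas intact is that it slashes the boiling point. The back-of-the-envelope estimate below uses the Clausius–Clapeyron relation with textbook values for ethanol; the 50 mbar operating pressure is simply an assumed, typical lab setting, not a figure from Conigliaro's bar.

```python
# Why rotary evaporation stays cool: a Clausius-Clapeyron estimate of
# ethanol's boiling point at reduced pressure. Values are textbook
# approximations for illustration, not a design calculation.

import math

T_BOIL = 351.4        # normal boiling point of ethanol, K (78.3 C)
DH_VAP = 38.6e3       # enthalpy of vaporization near boiling, J mol^-1
R_GAS = 8.314         # gas constant, J mol^-1 K^-1
P_ATM = 101325.0      # standard pressure, Pa

def boiling_point(pressure_pa):
    """Boiling point of ethanol at a given pressure (Clausius-Clapeyron)."""
    inv_T = 1.0 / T_BOIL - (R_GAS / DH_VAP) * math.log(pressure_pa / P_ATM)
    return 1.0 / inv_T

# A rotary evaporator might run at roughly 50 mbar (assumed, typical setting).
print("Boiling point at 50 mbar: %.1f C" % (boiling_point(5000.0) - 273.15))
```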

Another way to intensely flavour spirits is to soak ingredients in high concentrations of ethanol, thereby infusing the aromatics into the alcohol. This process traditionally requires many days for the ethanol to fully penetrate the ingredients and extract the desired compounds. Now, however, flavour infusion can be achieved in just a few minutes, using a technique pioneered by Dave Arnold, author of the blog Cooking Issues and director of culinary technology at the French Culinary Institute in the US. Coffee-flavoured vodka, for example, can be made by combining ground coffee beans and vodka in a whipped-cream dispenser – a pressurized device typically used to create foams such as whipped cream at the touch of a button, now well known by the commercial name “iSi Whip”. What happens is that nitrous oxide, which is also in the canister and under high pressure, dissolves in the vodka. The high pressure of the liquid displaces any air bubbles in the coffee grounds. When the pressure is released, the nitrous oxide rapidly bubbles out of the solution, just as when a can of carbonated drink is opened. Releasing these bubbles draws flavour molecules from the coffee grounds into the vodka, flavouring the alcohol and turning it brown. This versatile technique works for a range of porous substances, such as cocoa nibs and a variety of herbs.

By combining spirits with other ingredients, a full spectrum of flavours can be achieved. Tastes can be added through the sweetness of syrups, the sourness of citrus juice, the salt around the rim of a glass or numerous other methods. Aromas can be enhanced with a variety of highly concentrated alcohol-based solutions called tinctures and bitters. Compared with mixed drinks, there is less flexibility in what can be produced with beer or wine because their flavours can only be manipulated through the fermentation and ageing process.

Hot or cold

Whether by dare or by choice, many of you will have experienced the hot, burning sensation you get in your throat and chest if you drink neat vodka or tequila. In fact, too high a proportion of spirit in a cocktail can overwhelm the desired mix of flavours. The alcohol burn can, however, be reduced by lowering the temperature of the beverage, which is why aquavit, vodka and other straight spirits are often served cold, at temperatures of around –18 °C. Unfortunately, such low temperatures can also diminish the perception of the other tastes and aromas in the drink, so most mixed drinks are served at somewhat warmer temperatures.

The flavour of a drink also depends on how dilute it is, which in practical terms means how much ice has been mixed into it. Vigorously shaking the mixture rapidly cools the drink within seconds, whereas cooling can take upwards of a few minutes if it is only gently stirred. In both cases, the final temperature of the diluted mixture can be several degrees below the initial temperature of the ice, for essentially the same reason that roads are de-iced by spreading salt on them. Because the entropy of the diluted mixture is far larger than the entropy of the crystalline ice, the ice continues to melt and absorb heat from the mixture even as the mixture cools below 0 °C.
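
A rough heat balance shows how this plays out in the glass. The sketch below melts ice – assumed to start at 0 °C, as bar ice typically does – into a spirit-heavy drink until the mixture reaches its own depressed freezing point. All of the property values, including the crude linear freezing-point depression, are approximations chosen only to illustrate the bookkeeping, not measured values for any real drink.

```python
# Rough heat-balance sketch of chilling a drink with ice near 0 C.
# All property values are textbook approximations; real drinks (sugar,
# dissolved CO2, imperfect mixing) will differ.

C_WATER = 4.18        # specific heat of the (mostly water) drink, J g^-1 K^-1
L_FUSION = 334.0      # latent heat of melting ice, J g^-1
K_FREEZE = 0.42       # freezing-point depression per % ABV, K (rough linear fit)

def shake_to_equilibrium(mass_drink, abv_percent, t_drink, n_steps=200):
    """Melt ice into the drink until it reaches its own freezing point.

    Assumes excess ice initially at 0 C, so the end point is the freezing
    point of the progressively diluted mixture (below 0 C), as described
    in the text.
    """
    melted = 0.0
    temp = t_drink
    for _ in range(n_steps):
        total = mass_drink + melted
        abv = abv_percent * mass_drink / total          # dilution lowers ABV
        t_freeze = -K_FREEZE * abv                      # freezing point of mix
        if temp <= t_freeze:
            break
        dm = 0.5                                        # melt a small parcel of ice
        melted += dm
        # Latent heat of the melted parcel cools the (now slightly larger) mixture.
        temp = (temp * total * C_WATER - dm * L_FUSION) / ((total + dm) * C_WATER)
    return temp, melted

t_final, ice_melted = shake_to_equilibrium(mass_drink=100.0, abv_percent=30.0,
                                           t_drink=20.0)
print("Final temperature: %.1f C after melting %.0f g of ice" % (t_final, ice_melted))
```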

The precise temperature of the drink also strongly affects the complex balance between these flavours. A chilled martini, for example – consisting of gin and vermouth – is crisp and balanced, whereas the gin can overwhelm the flavour near room temperature. As McGee explains, “the bartender’s challenge is to make drinks that have a balanced taste foundation and aromas that suit that foundation, and retain that overall structure reasonably well over the drink’s lifetime, as it becomes diluted or warms up”.

Appearance is everything

Figure 2: Two manhattan cocktails

The flavour of a cocktail is of course important but its appearance and texture also contribute to the overall experience of the drink – be it the layers of the graphically named squashed frog, the creaminess of an eggnog or the showiness of a blue blazer, which is poured between two cups after being set on fire. Flames and decorations aside, a cocktail’s appearance results from a combination of its colour and opacity, both of which can be controlled by the bartender. For a coloured drink, the mixologist selects ingredients that absorb specific wavelengths of light. For example, a rich brown can be obtained using a spirit that has been aged in oak barrels, as this imparts pigment molecules that produce this colour. If you want the finished drink to be clear, all the pigments and particulates must be removed, to prevent light absorption or scattering.

But even with clarified components, the mixing technique can have a dramatic impact on the light-scattering properties of the finished drink. For example, a manhattan, which contains whisky, vermouth and bitters, can become cloudy when shaken. This results from small air bubbles introduced into the beverage while shaking, which are then stabilized by the bitters. A stirred manhattan, in contrast, is clear (figure 2), which is why it is typically served stirred, not shaken, unlike James Bond’s martinis. As for drinks that are cloudy, their appearance is often caused by the presence of small particulates, although these can be removed by a variety of clarification techniques. Surprisingly, the most common method of clarification – filtration – is rarely used. Instead, some technology-minded bartenders are using other techniques such as centrifugation, which rapidly produces a clear liquid by accelerating the settling of particulates. Indeed, this technique is a particularly good way of clarifying lime juice, which can then be used for transparent gin and tonics or clear, stirred margaritas. Another technique, also developed by Arnold, uses gels made from agar – a naturally occurring polysaccharide – to trap particulates from citrus juices and other non-transparent liquids. Water is boiled with agar to hydrate it, the juice is then added and the solution is allowed to cool to form a gel. The longer pectin fibres and other plant materials become trapped in the agar gel, and a clear liquid weeps out, which contains the much smaller flavour molecules.

Figure 3: A half sinner, half saint cocktail

There is also plenty of interesting physics going on in cocktails that include anise-flavoured spirits such as pastis, ouzo and absinthe, which contain water-insoluble anethole compounds. Anethole dissolves readily in ethanol, which is part-polar and part-nonpolar, but when the spirit is diluted with water the anethole is no longer soluble and spontaneously forms an emulsion – a highly concentrated suspension of microscopic droplets that strongly scatters light. Because the droplets are small, these emulsions are stable for months without having to add any stabilizing “surfactant” molecules. This effect is exemplified in a drink called half sinner, half saint, in which a layer of absinthe is floated on top of a mixture of sweet and dry vermouth. The absinthe spreads downwards, leading to a white layer, caused by the droplets of anethole that travel from the top to the bottom of the glass over the course of several minutes (figure 3).

Tactile textures

In addition to flavour and appearance, the “mouthfeel” of a drink is another parameter manipulated by bartenders. Incorporating air via shaking results in a more viscous texture. Egg whites are used in fizzes and sours to stabilize these air bubbles. An extreme example is a Ramos gin fizz, which calls for an exhausting 12 minutes of shaking in the original recipe. The effort is worth it, however, as it results in an extremely creamy, frothy texture. A layer of foam protrudes several centimetres above the rim of the glass and is stiff enough to hold a metal straw vertically at its centre. The long mixing time is needed to divide the air into progressively smaller bubbles, resulting in a stiffer foam. Another class of drinks, called flips, uses whole egg to form an emulsion, leading to a more creamy texture.

Several of the chefs leading the innovations in haute cuisine are also pushing the frontiers of texture in cocktails. Adrià serves several novel types of cocktail in his establishments, including a hot and cold gin fizz (see below). Instead of the lengthy shaking of the Ramos gin fizz, an iSi Whip introduces nitrous-oxide bubbles into the top foam layer, which sits on top of a frozen juice layer. At Grant Achatz’ bar, Aviary, Chicago, the cocktail chefs use techniques developed in the Alinea kitchen to create novel forms for the drinks. For instance, they use a modified starch called tapioca maltodextrin to produce a powdered gin and tonic and ultralow temperatures to make a chewy Pisco sour. Other mixologists, such as Eben Freeman of the Altamarea group, use similar techniques to create a variety of solid cocktails.

These elements of flavour, appearance and texture all contribute to the final perception of the drink. Classic cocktail recipes have survived and evolved as we have learned to improve the balance of these components. But today’s bartenders are seeking inspiration from science to improve these recipes and to invent new concoctions. So let’s all raise a glass to science!

At a glance: cocktail physics

• Having perfected yoghurt spheres, carrot foam and other novel foods using new technological tools, some chefs are now turning their attention to cocktails
• Cocktails have traditionally been developed by trial and error but can now be understood in terms of thermodynamics and soft-matter physics
• The physical properties of ethanol, the basis of every spirit, enable the delivery of flavours impossible to achieve using water alone
• Equipment such as rotary evaporators and whipped-cream dispensers are now used to extract flavours, along with traditional distillation and soaking methods
• The appearance and texture of drinks can be controlled by clarification, the decision to stir or shake, or the production of foam, for example using egg whites to stabilize air bubbles

Get mixing!

Here is a cocktail recipe you can try for yourself, from leader in the field Ferran Adrià.

Hot and cold gin fizz

Ingredients
For the base syrup:
150 g sugar
150 g water
For the frozen lemon mix:
250 g lemon juice
150 g base syrup (see above)
150 g gin
For the hot lemon foam:
150 g egg whites
130 g lemon juice
70 g gin
145 g base syrup (see above)

Equipment
0.5 litre iSi Whip
1 cartridge of N2O

Method
For the base syrup:
Mix ingredients and bring to a boil.
Remove from heat, cool, then refrigerate.
For the frozen lemon mix:
Mix all ingredients cold, then freeze.
Once frozen, blend in a blender until fluid. Keep in the freezer.
For the hot lemon foam:
Break egg whites with a whisk. Add the remaining ingredients. Strain and pour into the iSi Whip using a funnel. Load the iSi Whip and keep in a water bath at 80 °C, shaking occasionally.
To serve, 3/4 fill a cocktail glass with frozen lemon mix. Top up with hot foam.

More about: cocktail physics

D Arnold Cooking Issues www.cookingissues.com
T Conigliaro Drink Factory http://drinkfactory.blogspot.com
H McGee 2004 On Food and Cooking (Scribner, New York)
N L Sitnikova, R Sprik and G Wegdam 2005 Spontaneously formed trans-anethol/water/alcohol emulsions: mechanism of formation and stability Langmuir 21 7083

Voyager sets sights on Milky Way

An international group of researchers has reported the first ever detection of a particular atomic hydrogen line emission from the Milky Way, using instruments on board the Voyager spacecraft. This emission has already been seen from much more distant objects and is used to find star-forming galaxies. It is also used to probe the epoch of reionization – the formation of the first stars after the “dark ages” of the universe – and so is of considerable significance to astronomers.

Astronomers routinely look at astronomical objects and phenomena that are many millions of light-years away – currently, the furthest object observed is at a distance of 13.14 billion light-years. But surprisingly, there are phenomena that occur within the bounds of our parent galaxy – the Milky Way – that have not been seen or studied. An example of this is the “Lyman-alpha (Lyα) emission” (121.6 nm) that is generated when there is an electron transition between the first and second energy levels of hydrogen. The galactic Lyα emission is an essential marker of the young-star-formation rate in the Milky Way, the ionization environment in which the atmospheres of young planets evolve, and the amount of shocked gas in the interstellar medium.
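
The 121.6 nm figure itself drops straight out of the Rydberg formula for the n = 2 to n = 1 transition in hydrogen, as the following short calculation shows.

```python
# The 121.6 nm Lyman-alpha wavelength from the Rydberg formula for the
# hydrogen n = 2 -> n = 1 transition.

RYDBERG = 1.0967758e7   # Rydberg constant for hydrogen, m^-1

inv_lambda = RYDBERG * (1.0 / 1**2 - 1.0 / 2**2)
print("Lyman-alpha wavelength: %.1f nm" % (1e9 / inv_lambda))
```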

Star birth marker

Although Doppler-shifted Lyα line emission has been seen from other galaxies, it has been undetectable for the Milky Way because very bright local sources drown out the galactic Lyα radiation, much as the glare of city lights blocks all but the brightest stars. This local brightness is mainly attributed to the “H glow” – solar Lyα photons scattered by neutral hydrogen gas in the solar system. Like getting away from a big city to see a clear sky, astronomers are using data from the two Voyager spacecraft, both of which have now reached the heliosheath at the very edge of the solar system and are beyond the worst of the H glow.

Using the recently acquired data from the Voyager spacecraft, launched by NASA in 1977, Rosine Lallement of the Observatoire de Paris and the French research council (CNRS), and colleagues in the US and Russia, are the first to study this galactic emission and confirm that most of it originates in the regions where hot young stars are being formed. As both spacecraft are moving out of the solar system, they can detect the much fainter radiation that comes from the galaxy. “For us, it is like beginning to see small candles within a brightly lit room,” explains Lallement. The team has been busy “disentangling the two [local and distant] signals”, each of which carries different information.

Glowing Milky Way

In the distant case, the astronomers study the amount of ultraviolet Lyα radiation emitted by a galaxy because it corresponds to the rate at which stars are being born within that galaxy – that is, the star formation rate (SFR) of the galaxy. Lallement explains that one of the major goals for astronomers is to pinpoint when stars first appeared in the young universe, and so detecting the Lyα emissions from the most distant galaxies and correctly interpreting the signal is essential. “However, the correspondence between Lyα and the SFR is not easily derived, due to the complex manner in which the radiation propagates through the distant galaxy. A single Lyα photon can be absorbed and scattered by the numerous neutral hydrogen clouds present within galaxies, and hence its history is essentially lost due to its complex ‘random walk’ from its origin to its escape from the ionized regions of a distant galaxy.”

She goes on to say that the star, gas and dust distributions are not known in extremely faint, distant galaxies, so until now it has been impossible to observe and decipher a galactic Lyα signal accurately enough to test and calibrate models of Lyα photon propagation in distant galaxies. “In the case of the Milky Way we have for the first time the Lyα signal and all the necessary information, thus models can be tested,” says Lallement. The Lyα emission can trace the SFR in much more distant galaxies, with redshifts from z = 2 to z = 6.

Closer to home

The study of the other signal – the local one – is important for understanding the heliosphere, the bubble blown by the solar wind that contains our entire solar system and marks the extent of the Sun’s environment, including the boundary between the Sun and the ambient galactic interstellar medium. Both Voyager spacecraft are currently crossing this boundary region, moving into the galactic gas. “The H glow due to penetrating neutral hydrogen atoms from interstellar space is part of the whole structure. Understanding how this local H glow evolves with the distance to the Sun and confirming the best models of the glow brings information that is complementary to in situ data,” says Lallement. She explains that although the Voyager data provide a preliminary picture of the emission’s distribution, precise maps will have to wait for a dedicated mission. NASA’s New Horizons spacecraft, on its way to Pluto, has an ultraviolet-imaging spectrometer that could observe the galactic Lyα emission in a more systematic way.

Unfortunately, the power available on board the ageing Voyager spacecraft is steadily dwindling, and no data will be received beyond 2020–2025. To save power, the UV instruments can no longer be pointed towards a chosen source; data are still recorded, but only in a fixed direction. Hopefully, the spacecraft will still generate new and useful information about the galactic Lyα emission, as well as the interstellar gas, until then.

The research is published in Science 10.1126/science.1197340.

Higgs rumours fly as meeting approaches

Physicists chatter excitedly at CERN. (Courtesy: Georges Boixader)

By Hamish Johnston
The particle-physics rumour mill is going into overdrive as physicists look forward to next week’s meeting of CERN’s Scientific Policy Committee – which will include Higgs updates from the LHC’s ATLAS and CMS experiments.

If various blogs are to be believed – and a trusted source assures us that the claims are credible – the two experiments are closing in on the Higgs boson. This undiscovered particle and its associated field explain how electroweak symmetry broke after the Big Bang and why some fundamental particles are blessed with the property of mass.

The latest rumour is that both ATLAS and CMS have evidence that the Higgs mass is about 125 GeV/c² at confidence levels of 3.5σ and 2.5σ respectively. At 3.5σ, the measurement could be the result of a random fluke just 0.02% of the time, whereas at 2.5σ the fluke factor is roughly 0.6%.

If you are really optimistic, I believe you can add these two results together in quadrature to get an overall result with a significance of 4.3σ.

While these might sound like fantastic odds to you and me, particle physicists normally wait until they have a confidence of 5σ or greater before they call it a “discovery”. Anything over 3σ is described as “evidence”.
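
For those who like to see where these numbers come from, the snippet below converts significance levels into the probability of a background fluke – using the one-sided Gaussian convention usually quoted by particle physicists, so the exact percentages depend on the convention and on rounding – and reproduces the naive “add in quadrature” combination mentioned above.

```python
# Rough translation between 'sigma' significance levels and the chance of a
# background fluke, plus the naive combination in quadrature mentioned above.
# Uses the one-sided Gaussian convention common in particle physics.

from math import erf, sqrt

def fluke_probability(sigma):
    """One-sided probability of a Gaussian fluctuation of at least `sigma`."""
    return 0.5 * (1.0 - erf(sigma / sqrt(2.0)))

for s in (2.5, 3.0, 3.5, 5.0):
    print("%.1f sigma: fluke probability %.3g%%" % (s, 100 * fluke_probability(s)))

combined = sqrt(3.5**2 + 2.5**2)   # the naive 'add in quadrature' estimate
print("Combined significance: %.1f sigma" % combined)
```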

Blogger Lubos Motl has reproduced what he says is an e-mail from CERN director general Rolf-Dieter Heuer inviting CERN staff to a briefing on 13 December to hear about “significant progress in the search for the Higgs boson, but not enough to make any conclusive statement on the existence or non-existence of the Higgs”.

This seems to tie in nicely with the rumoured ATLAS and CMS results, which together are strong evidence for the Higgs at about 125 GeV/c² – but not yet a discovery.

So why are particle physicists so conservative when it comes to claiming a discovery?
Last year, Robert Crease explored this issue in his regular column for Physics World, and you can read that column here.

Crease wisely cites past experience as the number-one reason for caution. Indeed he quotes University of Oxford physicist and data-analysis guru Louis Lyons as saying “We have all too often seen interesting effects at the 3σ or 4σ level go away as more data are collected.”

As Crease points out, nearly everyone he spoke to in writing his article “had tales – many well known – of signals that went away, some at 3σ: proton decay, monopoles, the pentaquark, an excess at Fermilab of high-transverse-momentum jets”.

So if the rumours are true, 2012 could be a very exciting year for the LHC as more data are collected and this interesting effect grows. But until the key CERN briefing on 13 December, when more information emerges, it would be wise to “keep calm and carry on”.

Blogs that have been buzzing include those of Philip Gibbs, Lubos Motl and Peter Woit.

Cavity spectroscopy does carbon dating

A new way to carbon-date old samples has been developed by physicists in Italy. Unlike current methods, which involve large and costly laboratory equipment, the new technique can be performed using portable and low-cost equipment. The researchers claim that their idea could have other applications, including biomedical procedures, environmental monitoring, fundamental physics and explosives detection.

Carbon dating is an essential tool of modern archaeology because it allows the age of a biological sample to be determined from the radioactivity of its carbon compounds. Carbon-14 is a radioactive isotope produced in the upper atmosphere by cosmic rays and accounts for one in every 10¹² atoms of carbon in every living organism. When an organism dies, it stops taking in carbon, so the number of carbon-14 atoms decreases with a half-life of about 5730 years – a timescale that makes it ideal for investigating human history.
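
Turning a measured carbon-14 fraction into an age is then just an application of the exponential decay law, as in the short example below. The 1% remaining fraction is a made-up illustration, not a measured value.

```python
# How the carbon-14 fraction translates into an age: radioactive decay with
# a 5730-year half-life. The measured fraction below is a made-up example.

from math import log

HALF_LIFE = 5730.0   # years

def age_from_fraction(remaining_fraction):
    """Age of a sample from the fraction of its original carbon-14 left."""
    return -HALF_LIFE / log(2.0) * log(remaining_fraction)

# A sample retaining 1% of its original carbon-14 (hypothetical):
print("Age: about %.0f years" % age_from_fraction(0.01))
```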

However, for samples aged around 50,000 years or older, accelerator mass spectrometry (AMS) is the only method that is sensitive enough to detect the minute amounts of remaining carbon-14. This involves ionizing the carbon compounds, accelerating them to extremely high energies with a particle accelerator and bending the ions’ paths with magnetic and electric fields. The equipment required to do this is extremely large and costly.

Focusing on vibrations

Now, Paolo de Natale and colleagues of the National Institute of Optics and the European Laboratory for Non-Linear Spectroscopy, both in Florence, Italy, have developed a much cheaper alternative that they say is almost as sensitive. This new method is based on infrared laser spectroscopy, which probes the quantized vibrational modes of molecules. A specific type of molecule will absorb infrared light only at energies corresponding to its vibrational modes. Therefore, the concentration of a particular molecule in a sample can be measured by tuning a laser to the appropriate energy and measuring how much light is absorbed.

The vibrational modes of carbon dioxide differ slightly depending on whether the carbon atom is carbon-14 or standard carbon-12. By burning a sample and performing infrared laser spectroscopy on the carbon dioxide produced, scientists can work out the proportion of carbon-14 in the original sample and thus deduce its age. However, owing to fluctuations in the output of the laser, this has never been sensitive enough to use for dating.

De Natale and colleagues have overcome this problem by using a technique they first unveiled last year, called saturated absorption cavity ring-down (SCAR) spectroscopy. SCAR involves firing the laser into a cavity with a mirror at either end – essentially “filling” the cavity with light that bounces back and forth. The laser is switched off and a measurement is made of the rate at which the intensity of the light in the cavity decays or “rings down”. A SCAR measurement is made with the cavity filled with the carbon-dioxide sample, which affects the decay rate because light is absorbed by the molecules. Because the laser light is injected into the cavity in advance and switched off during the measurement, SCAR is not affected by fluctuations in laser intensity. Another benefit of the technique is that the multiple reflections ensure that the light interacts with the gas for a much longer time than if the laser were just fired through a sample.
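
The underlying ring-down arithmetic is simple, even if the full SCAR analysis – which exploits saturation of the absorption – is not. The sketch below uses the standard cavity ring-down relation to turn two decay times, one for an empty cavity and one with gas present, into an absorption coefficient; the decay times are invented purely for illustration and are not from the Florence experiment.

```python
# The basic cavity ring-down relation (ordinary CRDS, not the full SCAR
# analysis): comparing the decay time of an empty cavity with that of a
# cavity containing the gas gives the absorption coefficient directly,
# independent of the laser intensity.

C_LIGHT = 2.998e8    # speed of light, m s^-1

def absorption_coefficient(tau_empty, tau_sample):
    """Absorption coefficient (m^-1) from empty- and sample-cavity ring-down times."""
    return (1.0 / tau_sample - 1.0 / tau_empty) / C_LIGHT

# Hypothetical ring-down times, in seconds:
alpha = absorption_coefficient(tau_empty=40e-6, tau_sample=38e-6)
print("Absorption coefficient: %.2e m^-1" % alpha)
```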

Room for improvement

By measuring how quickly the sample absorbed the laser light, the researchers were able to work out the proportion of carbon-14 that it contained. They measured concentrations of radiocarbon as low as four parts in 10¹⁴. The very best AMS machines – which are 10 times as expensive and 100 times the size of the SCAR prototype – can achieve one part in 10¹⁵, but De Natale believes their system can be improved further. He points out that carbon-14 is also used for biomedical applications in which AMS accuracy levels are not needed. “There are small-scale mass spectrometers that could in principle be replaced [by SCAR] now, even though the ultimate sensitivity is not yet at the level of the best mass spectrometers,” he explains.

De Natale says that, in future, the technique could be adapted to detect tiny quantities of other rare molecules. This could allow it to be used to monitor the concentrations of hazardous pollutants in the environment, to detect drugs or explosives on passengers or in cargo at airports or to conduct research in fundamental physics.

“It is an incredibly sensitive measurement of a very small quantity of this very rare isotope,” says David Nelson, atmospheric scientist at Aerodyne Research in the US. However, he points out that the technique benefits from the fact that carbon dioxide “has an extraordinarily strong infrared line strength. So while you could certainly use this technique with other molecules, you won’t get the same sensitivity”.

The research will be described in an upcoming issue of Physical Review Letters.

Diamonds entangled at room temperature

Two diamonds separated by about 15 cm have been put into a state of quantum entanglement. The experiment, carried out by physicists in the UK, was performed at room temperature and involved creating phonons (quantized vibrations) within the crystals. By showing that quantum entanglement can be achieved in two large and distant diamonds at room temperature, the research team has provided further evidence that a practical quantum computer could be within our grasp.

Quantum computers, which exploit purely quantum phenomena such as superposition and entanglement, should in principle be able to outperform classical computers at certain tasks. But building a practical quantum computer remains a challenge because the physical entities that store and transfer quantum bits (qubits) of information are easily destroyed by contact with the outside world.

Diamond is one material that shows great promise for quantum computing because it contains quantum systems that are well shielded from the environment. It has, for example, phonons that can interact with light from an external source but are in general unaffected by random thermal vibrations within the material itself.

In principle, quantum information could be stored in diamond by firing a laser pulse to create a phonon, with the information being retrieved sometime later by firing a second pulse that interacts with the phonon. Furthermore, entangled pulses of light could be used to entangle phonons in two different diamonds – something that could ultimately be useful in creating quantum computers.

Splitting photons

In the new work, Ian Walmsley and colleagues at the University of Oxford have used such a scheme to entangle two diamonds – each about 3 mm across. They begin by firing a “pump” laser pulse at a beamsplitter, which sends one half of the pulse to a diamond on the right and the other half to a diamond on the left. When an individual photon in the pulse encounters the beamsplitter, quantum mechanics dictates that it is put into a superposition of a photon that has gone left and a photon that has gone right.

When such a photon strikes a diamond, some of its energy can be absorbed to create a phonon – a high-frequency vibration of the atoms in the crystal. As phonons behave like particles that obey quantum mechanics, the diamonds are left in a superposition of the right-hand diamond having a phonon and the left-hand diamond having a phonon. In other words, the two diamonds “share” the same phonon, which is a hallmark of entanglement.

When a phonon is created, the original photon is re-emitted at a lower energy. This “red” photon is then detected in such a way that the physicists know that a phonon has been created in one of the diamonds, although which of the two is not revealed. In other words, this red photon “heralds” the entanglement of the two diamonds.

Scattered into the blue

Very shortly after the phonon is created, a second “probe” pulse is fired into the beamsplitter and the two resulting pulses arrive simultaneously at both diamonds. If a phonon is present in one of the diamonds, one of the photons from the second pulse can absorb the phonon, thereby boosting its energy to create a “blue” photon. Because the two diamonds are entangled by a single phonon, this blue photon is in a superposition of either being emitted by the right-hand diamond or the left-hand diamond.

This superposition is confirmed by combining the blue light from both diamonds in a beamsplitter to create just one beam. This beam is then sent into a final beamsplitter, where it is again split into two beams. If a blue photon is indeed in a superposition of the left- and right-hand diamonds, it will always emerge from a specific output port of the beamsplitter. If the photon is not in a superposition, there is an equal probability that it will emerge from either port.
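
The logic of that final beamsplitter measurement can be captured in a few lines. In the idealized sketch below, a photon in a coherent superposition of the two paths interferes and always exits one port, whereas a photon that is merely in one path or the other with equal probability – no superposition – exits either port half the time. This is an illustration of the principle, not a simulation of the actual experiment.

```python
# A minimal sketch of why the final beamsplitter reveals the superposition.
# A single photon in an equal superposition of the left and right paths
# interferes and always exits one port; an incoherent 50/50 mixture of the
# two paths exits either port equally often.

import numpy as np

# 50:50 beamsplitter acting on the (left, right) path amplitudes.
BS = np.array([[1.0, 1.0],
               [1.0, -1.0]]) / np.sqrt(2.0)

superposition = np.array([1.0, 1.0]) / np.sqrt(2.0)   # coherent |L> + |R>
out = BS @ superposition
print("Superposition -> port probabilities:", np.abs(out) ** 2)   # [1, 0]

# Incoherent mixture: average the output probabilities of |L> and |R> separately.
mix = 0.5 * np.abs(BS @ np.array([1.0, 0.0])) ** 2 \
    + 0.5 * np.abs(BS @ np.array([0.0, 1.0])) ** 2
print("Mixture       -> port probabilities:", mix)                # [0.5, 0.5]
```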

The team measured the output at both ports for blue photons that are heralded by a red photon – and found that the majority of the blue photons emerged from the expected port. This shows that the blue photons are indeed in a superposition as a result of the diamonds being entangled by a single phonon, according to Walmsley.

Picoseconds are enough

The entire process – from entanglement being created between the diamonds to detecting it using the probe pulse – lasts a mere 0.35 ps. Such a short timescale is needed because the phonons are only expected to survive for about 7 ps in diamond. Although such phonons could not be used to store quantum information for long periods of time, they could be used to perform very rapid quantum calculations – at least in principle.

Jeremy O’Brien of the University of Bristol in the UK calls the work “a tremendous development” that further extends the size of objects in which we can see quantum effects such as entanglement and superposition. Although practical applications of the work may not be immediately obvious, O’Brien believes it will capture physicists’ imagination of what could be possible in future quantum computers and information systems.

The research is described in Science 334 1253.

When will we see the first nuclear fusion reactor?

By James Dacey


A Canadian company is planning to build a prototype fusion demonstrator that would be a fraction of the cost of a standard fusion reactor, as physicsworld.com editor Hamish Johnston reported today in this feature. Undoubtedly this is exciting news for those who have been following the development of nuclear fusion as a potential energy source, especially given these times of dwindling fossil-fuel supplies and environmental concerns linked with increasing carbon-dioxide emissions. But even if the prototype is a success, this is still a long way from the real deal: a fully functioning fusion reactor hooked up to a grid.

But we want to know your opinion on this issue. In this week’s Facebook poll, we want you to answer the following question:

When do you believe we will see the first working nuclear fusion reactor supplying electricity to a grid?

Within the next 30 years
Within 30–60 years
Within 60–90 years
Not until the Sun goes supernova

To cast your vote, please visit our Facebook page and please feel free to explain your answer by posting a comment.

Last year, this same question was put to David Ward from the Culham Centre for Fusion Energy in the UK during an interview with Physics World. Ward believes that, realistically, we will not see practical fusion until 2040–2050 at the earliest. IOP members can view this video interview here.

In last week’s Facebook poll we asked you what your biggest peeve about popular-science writing is. 53% of respondents told us that authors “blurring fact and speculation” is their biggest bane, while 24% of those polled found it more annoying when writers give “bad or unclear explanations”. A further 13% said that they think authors “talking down to readers” is the biggest crime, and just 10% believe that the worst offenders are the writers who use “clichés and overblown language”.

As always, the poll also generated some interesting discussion. One respondent, Kate Scaryboots Oliver, who is based in the UK, wrote that authors “not explaining methodology” was her particular pet hate. And you can almost picture the steam escaping from her ears when she added: “Also, if I have to read about space being like a rubber sheet one more time!”

Phil Barker, another respondent based in the UK, feels disgruntled by the fact that popular-science writers often gloss over the mathematics. “Look at how many people each year now graduate with science, engineering, computing, business and other degrees that require a reasonable amount of maths. I’m not saying that every science book or article should be entirely maths-based, just that there is an audience that can cope with and would appreciate something other than hand-waving.”

Thank you for all your responses and we look forward to hearing from you again in this week’s poll.

Copyright © 2026 by IOP Publishing Ltd and individual contributors