
Sticky fingers no more

Chocolate that doesn’t melt until as high as 50 °C could be a boon for chocoholics in the tropics. (Courtesy: iStockphoto.com/alle12)

By Tushna Commissariat

“Look, there’s no metaphysics on Earth like chocolates” – Fernando Pessoa, Portuguese poet

It’s Easter again and shops in many countries are full of chocolate eggs and other gooey, chocolate-based treats. But why is it that certain tropical countries like Nigeria consume only small amounts of chocolate, despite producing most of the world’s cocoa? Indeed, nearly 70% of cocoa is grown in West Africa and the rest in Central and South America and Asia.

One of the main reasons is that high tropical temperatures make chocolate lose its form while it is being transported within these regions. The chocolate can also undergo “bloom formation” – a mouldy-looking white coating that forms on the surface when the temperature rises – which makes storing it a problem.

But, once again, physics is providing a solution. Scientists have been looking at ways to create a “thermo-resistant” chocolate that holds its form and still tastes just right. And it looks like O Ogunwolu and C O Jayeola, food scientists at the Cocoa Research Institute of Nigeria, have finally managed it – and just in time for Easter too. They found that adding varying amounts of cornstarch and gelatin to chocolate raised its melting point to about 40–50 °C, from the normal 25–33 °C. And the best bit is that, by all accounts, it still looks and tastes like normal chocolate!

The chocolate industry has been looking into ways of perfecting heat-resistant chocolate for a long time. For example, the US company Hershey’s developed a heat-resistant chocolate bar that could be used as part of emergency rations for American troops during the Second World War. The downside was that, according to troop reports, the chocolate tasted “a little better than a boiled potato”. While Hershey’s did try other recipes and even managed to make a bar that melted only at 60 °C, reactions to the taste were mixed. So it is hoped that this new recipe will make chocolate more available to everyone, the world over, as it should be.

In other chocolate-related news, take a look at the slew of videos on YouTube that show researchers at the University of Nottingham conducting “Eggsperiments” with Cadbury’s Creme Eggs. My favourite has chemists making quite a mess in their lab as they try to deconstruct the eggs.

Mapping orbits within black holes

The words “black hole” generally bring to mind destruction and an end to all ends. Hardly anyone – in fact or fiction – has considered the possibility of stable habitats existing within black holes. But that is precisely what physicist Vyacheslav I Dokuchaev of the Russian Academy of Sciences, Moscow, proposes in his new paper, “Is there life in black holes?”. In the paper, published in the Journal of Cosmology and Astroparticle Physics, Dokuchaev suggests that certain types of black hole contain stable orbits for photons within their interior that might even allow planets to survive.

Essentially, a black hole is a region where gravity is so extreme that everything – including light – is pulled in. Its outer boundary, known as the event horizon, marks the point beyond which nothing can escape, because escape would require travelling faster than light. But charged, rotating black holes – known as “Kerr–Newman black holes” – exhibit an unexpected twist. They have not only an outer event horizon but also an inner horizon, called a “Cauchy” horizon, at which, because of the centrifugal forces involved, infalling particles slow back down below the speed of light.
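For reference (these expressions do not appear in the article itself), the two horizons of a Kerr–Newman black hole of mass M, spin parameter a = J/M and charge Q sit, in geometrized units (G = c = 1), at the radii

r_\pm = M \pm \sqrt{M^2 - a^2 - Q^2},

where the larger root r_+ is the event horizon and the smaller root r_– is the Cauchy horizon; the stable interior orbits discussed below lie within this inner region.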

The final frontier

Since the 1960s, researchers have determined stable orbits for photons inside these charged, rotating black holes. In his new paper, Dokuchaev has looked at stable circular orbits as well as spherical, non-equatorial orbits for photons at the inner boundary. He concludes that there is no reason that larger bodies, such as planets, could not do the same. He even suggests that entire advanced civilizations could live inside this particular subset of black holes, on planets that orbit stably inside the hole – using the naked singularity as a source of energy. They would forever be shielded from the outside and not sucked into the singularity itself, he says.

In theory it should be possible to use the singularity as an energy source, explains Andrew Hamilton, an astrophysicist at the University of Colorado in the US who has also calculated the orbits at the inner horizon inside these black holes. “A rotating black hole acts like a giant flywheel. A civilization can tap the rotational energy of the black hole by playing clever games of orbital billiards, something first pointed out by Roger Penrose,” he says.

However, Hamilton believes that, in reality, the situation is implausible. Inflation at the inner horizon would cause space–time to collapse, and the enormous energy density there – produced by the massive amounts of matter falling into the black hole – would create further disturbances. None of these circumstances would make for habitable conditions. Dokuchaev acknowledges these problems in his paper, but does not provide a solution.

Paradoxes and information losses

Even if a planet and then a civilization were to form inside these black holes, it would be almost impossible to discover them because all information is lost going into or coming out of a black hole. Although new theories state that information from the interior of black holes is encoded in the Hawking radiation emitted from them, this information could quite possibly be scrambled.

Arthur I Miller, a physicist and author of several popular-science books, believes that it is pointless to look at any possibility of life inside black holes, stable orbits notwithstanding. “It is, indeed, extreme science fiction to imagine the existence of worlds in them. Surely it would be a ‘crushing experience’ living inside a black hole?” he says.

So, while most scientists will agree that looking for life inside black holes is a futile venture, the sad truth is that we will never know if the real-estate market is missing out on a great new platform.

Resistance is futile

My involvement with high-temperature superconductors began in the autumn of 1986, when a student in my final-year course on condensed-matter physics at the University of Birmingham asked me what I thought about press reports concerning a new superconductor. According to the reports, two scientists working in Zurich, Switzerland – J Georg Bednorz and K Alex Müller – had discovered a material with a transition temperature, Tc, of 35 K – 50% higher than the previous highest value of 23 K, which had been achieved more than a decade earlier in Nb3Ge.

In those days, following this up required a walk to the university library to borrow a paper copy of the appropriate issue of the journal Zeitschrift für Physik B. I reported back to the students that I was not convinced by the data, since the lowest resistivity that Bednorz and Müller (referred to hereafter as “B&M”) had observed might just be comparable with that of copper, rather than zero. In any case, the material only achieved zero resistivity at ~10 K, even though the drop began at the much higher temperature of 35 K (figure 1).

In addition, the authors had not, at the time they submitted the paper in April 1986, established the composition or crystal structure of the compound they believed to be superconducting. All they knew was that their sample was a mixture of different phases containing barium (Ba), lanthanum (La), copper (Cu) and oxygen (O). They also lacked the equipment to test whether the sample expelled a magnetic field, which is a more fundamental property of superconductors than zero resistance, and is termed the Meissner effect. No wonder B&M had carefully titled their paper “Possible high Tc superconductivity in the Ba–La–Cu–O system” (my italics).

My doubt, and that of many physicists, was caused by two things. One was a prediction made in 1968 by the well-respected theorist Bill McMillan, who proposed that there was a natural upper limit to the possible Tc for superconductivity – and that we were probably close to it. The other was the publication in 1969 of Superconductivity, a two-volume compendium of articles by all the leading experts in the field. As one of them remarked, this book would represent “the last nail in the coffin of superconductivity”, and so it seemed: many people left the subject after that, feeling that everything important had already been done in the 58 years since its discovery.

In defying this conventional wisdom, B&M based their approach on the conviction that superconductivity in conducting oxides had been insufficiently exploited. They hypothesized that such materials might harbour a stronger electron–lattice interaction, which would raise the Tc according to the theory of superconductivity put forward by John Bardeen, Leon Cooper and Robert Schrieffer (BCS) in 1957 (see “The BCS theory of superconductivity” below). For two years B&M worked without success on oxides that contained nickel and other elements. Then they turned to oxides containing copper – cuprates – and the results were as the Zeitschrift für Physik B paper indicated: a tantalizing drop in resistivity.

What soon followed was a worldwide rush to build on B&M’s discovery. As materials with still higher Tc were found, people began to feel that the sky was the limit. Physicists found a new respect for oxide chemists as every conceivable technique was used first to measure the properties of these new compounds, and then to seek applications for them. The result was a blizzard of papers. Yet even after an effort measured in many tens of thousands of working years, practical applications remain technically demanding, we still do not properly understand high-Tc materials and the mechanism of their superconductivity remains controversial.

The ball starts rolling

Although I was initially sceptical, others were more accepting of B&M’s results. By late 1986 Paul Chu’s group at the University of Houston, US, and Shoji Tanaka’s group at the University of Tokyo in Japan had confirmed high-Tc superconductivity in their own Ba–La–Cu–O samples, and B&M had observed the Meissner effect. Things began to move fast: Chu found that by subjecting samples to about 10,000 atmospheres of pressure, he could boost the Tc up to ~50 K, so he also tried “chemical pressure” – replacing the La with the smaller ion yttrium (Y). In early 1987 he and his collaborators discovered superconductivity in a mixed-phase Y–Ba–Cu–O sample at an unprecedented 93 K – well above the psychological barrier of 77 K, the boiling point of liquid nitrogen. The publication of this result at the beginning of March 1987 was preceded by press announcements, and suddenly a bandwagon was rolling: no longer did superconductivity need liquid helium at 4.2 K or liquid hydrogen at 20 K, but instead could be achieved with a coolant that costs less than half the price of milk.

Chu’s new superconducting compound had a rather different structure and composition from the one that B&M had discovered, and the race was on to understand it. Several laboratories in the US, the Netherlands, China and Japan established almost simultaneously that it had the chemical formula YBa2Cu3O7–δ, where the subscript 7–δ indicates a varying oxygen content. Very soon afterwards, its exact crystal structure was determined, and physicists rapidly learned the word “perovskite” to describe it (see “The amazing perovskite family” below). They also adopted two widely used abbreviations, YBCO and 123 (a reference to the ratios of Y, Ba and Cu atoms), for its unwieldy chemical formula.

The competition was intense. When the Dutch researchers learned from a press announcement that Chu’s new material was green, they deduced that the new element he had introduced was yttrium, which can give rise to an insulating green impurity with the chemical formula Y2BaCuO5. They managed to isolate the pure 123 material, which is black in colour, and the European journal Physica got their results into print first. However, a group from Bell Labs was the first to submit a paper, which was published soon afterwards in the US journal Physical Review Letters. This race illustrates an important point: although scientists may high-mindedly and correctly state that their aim and delight is to discover the workings of nature, the desire to be first is often a very strong additional motivation. This is not necessarily for self-advancement, but for the buzz of feeling (perhaps incorrectly in this case) “I’m the only person in the world who knows this!”.

“The Woodstock of physics”

For high-Tc superconductivity, the buzz reached fever pitch at the American Physical Society’s annual “March Meeting”, which in 1987 was held in New York. The week of the March Meeting features about 30 gruelling parallel sessions from dawn till after dusk, where a great many condensed-matter physicists present their latest results, fill postdoc positions, gossip and network. The programme is normally fixed months in advance, but an exception had to be made that year and a “post-deadline” session was rapidly organized for the Wednesday evening in the ballroom of the Hilton Hotel. This space was designed to hold 1100 people, but in the event it was packed with nearly twice that number, and many others observed the proceedings on video monitors outside.

Müller and four other leading researchers gave talks greeted with huge enthusiasm, followed by more than 50 five-minute contributions, going on into the small hours. This meeting gained the full attention of the press and was dubbed “the Woodstock of physics” in recognition of the euphoria it generated – an echo of the famous rock concert held in upstate New York in 1969. The fact that so many research groups were able to produce results in such a short time indicated that the B&M and Chu discoveries were “democratic”, meaning that anyone with access to a small furnace (or even a pottery kiln) and a reasonable understanding of solid-state chemistry could confirm them.

With so many people contributing, the number of papers on superconductivity shot up to nearly 10,000 in 1987 alone. Much information was transmitted informally: it was not unusual to see a scientific paper with “New York Times, 16 February 1987” among the references cited. The B&M paper that began it all has been cited more than 8000 times and is among the top 10 most cited papers of the last 30 years. It is noteworthy that nearly 10% of these citations include misprints, which may be because of the widespread circulation of faxed photocopies of faxes. One particular misprint, an incorrect page number, occurs more than 250 times, continuing to the present century. We can trace this particular “mutant” back to its source: a very early and much-cited paper by a prominent high-Tc theorist. Many authors have clearly copied some of their citations from the list at the end of this paper, rather than going back to the originals. There have also been numerous sightings of “unidentified superconducting objects” (USOs), or claims of extremely high transition temperatures that could not be reproduced. One suspects that some of these may have arisen when a voltage lead became badly connected as a sample was cooled; of course, this would cause the voltage measured across a current-carrying sample to drop to zero.

Meanwhile, back in Birmingham, Chu’s paper was enough to persuade us that high-Tc superconductivity was real. Within the next few weeks, we made our own superconducting sample at the second attempt, and then hurried to measure the flux quantum – the basic unit of magnetic field that can thread a superconducting ring. According to the BCS theory of superconductivity, this flux quantum should have the value h/2e, with the factor 2 representing the pairing of conduction electrons in the superconductor. This was indeed the value we found (figure 2). We were amused that the accompanying picture of our apparatus on the front cover of Nature included the piece of Blu-Tack we used to hold parts of it together – and pleased that when B&M were awarded the 1987 Nobel Prize for Physics (the shortest gap ever between discovery and award), our results were reproduced in Müller’s Nobel lecture.
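For scale, the paired-electron flux quantum that the Birmingham measurement confirmed has the value

\Phi_0 = h/2e \approx 2.07 \times 10^{-15}\ \mathrm{Wb},

whereas unpaired electrons would give a quantum of h/e, twice as large – the alternative that the experiment ruled out.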

Unfinished business

In retrospect, however, our h/2e measurement may have made a negative contribution to the subject, since it could be taken to imply that high-Tc superconductivity is “conventional” (i.e. explained by standard BCS theory), which it certainly is not. Although B&M’s choice of compounds was influenced by BCS theory, most (but not all) theorists today would say that the interaction that led them to pick La–Ba–Cu–O is not the dominant mechanism in high-Tc superconductivity. Some of the evidence supporting this conclusion came from several important experiments performed in around 1993, which together showed that the paired superconducting electrons have l = 2 units of relative angular momentum. The resulting wavefunction has a four-leaf-clover shape, like one of the d-electron states in an atom, so the pairing is said to be “d-wave”. In such l = 2 pairs, “centrifugal force” tends to keep the constituent electrons apart, so this state is favoured if there is a short-distance repulsion between them (which is certainly the case in cuprates). This kind of pairing is also favoured by an anisotropic interaction expected at larger distances, which can take advantage of the clover-leaf wavefunction. In contrast, the original “s-wave” or l = 0 pairing described in BCS theory would be expected if there is a short-range isotropic attraction arising from the electron–lattice interaction.
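For readers who like to see the “four-leaf clover” written down, the d-wave state is commonly parametrized (a standard textbook form, not quoted in the article) by a gap that varies around the Fermi surface as

\Delta(\mathbf{k}) = \Delta_0\,[\cos(k_x a) - \cos(k_y a)],

where a is the spacing of the copper ions in the CuO2 planes. This gap vanishes along the diagonals k_x = \pm k_y (the “nodes”) and changes sign from one lobe to the next – in contrast to the isotropic, everywhere-positive gap of an s-wave BCS superconductor.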

These considerations strongly indicate that the electron–lattice interaction (which in any case appears to be too weak) is not the cause of the high Tc. As for the actual cause, opinion tends towards some form of magnetic attraction playing a role, but agreement on the precise mechanism has proved elusive. This is mainly because the drop in electron energy on entering the superconducting state is less than 0.1% of the total energy (which is about 1 eV), making it extremely difficult to isolate this change.

On the experimental side, the maximum Tc has been obstinately stuck at about halfway to room temperature since the early 1990s. There have, however, been a number of interesting technical developments. One is the discovery of superconductivity at 39 K in magnesium diboride (MgB2), which was made by Jun Akimitsu in 2001. This compound had been available from chemical suppliers for many years, and it is interesting to speculate how history would have been different if its superconductivity had been discovered earlier. It is now thought that MgB2 is the last of the BCS superconductors, and no attempts to modify it to increase the Tc further have been successful. Despite possible applications of this material, it seems to represent a dead end.

In the same period, other interesting families of superconductors have also been discovered, including the organics and the alkali-metal-doped buckyball series. None, however, have raised as much excitement as the development in 2008 (by Hideo Hosono’s group at the Tokyo Institute of Technology) of an iron-based superconductor with Tc above 40 K. Like the cuprate superconductors before them, these materials also have layered structures, typically with iron atoms sandwiched between arsenic layers, and have to be doped to remove antiferromagnetism. However, the electrons in these materials are less strongly interacting than they are in the cuprates, and because of this, theorists believe that they will be an easier nut to crack. A widely accepted model posits that the electron pairing mainly results from a repulsive interaction between two different groups of carriers, rather than attraction between carriers within a group. Even though the Tc in these “iron pnictide” superconductors has so far only reached about 55 K, the discovery of these materials is a most interesting development because it indicates that we have not yet scraped the bottom of the barrel for new mechanisms and materials for superconductivity, and that research on high-Tc superconductors is still a developing field.

A frictionless future?

So what are the prospects for room-temperature superconductivity? One important thing to remember is that even supposing we discover a material with Tc ~ 300 K, it would still not be possible to make snooker tables with levitating frictionless balls, never mind the levitating boulders in the film Avatar. Probably 500 K would be needed, because we observe and expect that as Tc gets higher, the electron pairs become smaller. This means that thermal fluctuations become more important, because they occur in a smaller volume and can more easily lead to a loss of the phase coherence essential to superconductivity. This effect, particularly in high magnetic fields, is already important in current high-Tc materials and has led to a huge improvement in our understanding of how lines of magnetic flux “freeze” in position or “melt” and move – which they usually do near Tc – giving rise to resistive dissipation.

Another limitation, at least for the cuprates, is the difficulty of passing large supercurrents from one crystal to the next in a polycrystalline material. This partly arises from the fact that in such materials, the supercurrents only flow well in the copper-oxide planes. In addition, the coupling between the d-wave pairs in two adjacent crystals is very weak unless the crystals are closely aligned so that the lobes of their wavefunctions overlap. Furthermore, the pairs are small, so that even the narrow boundaries between crystal grains present a barrier to their progress. None of these problems arise in low-Tc materials, which have relatively large isotropic pairs.

For high-Tc materials, the solution, developed in recent years, is to form a multilayered flexible tape in which one layer is an essentially continuous single crystal of 123 (figure 3). Such tapes are, however, expensive because of the multiple hi-tech processes involved and because, unsurprisingly, ceramic oxides cannot be wound around sharp corners. It seems that even in existing high-Tc materials, nature gave with one hand, but took away with the other, by making the materials extremely difficult to use in practical applications.

Nevertheless, some high-Tc applications do exist or are close to market. Superconducting power line “demonstrators” are undergoing tests in the US and Russia, and new cables have also been developed that can carry lossless AC currents of 2000 A at 77 K. Such cables also have much higher current densities than conventional materials when they are used at 4.2 K in high-field magnets. Superconducting pick-up coils already improve the performance of MRI scanners, and superconducting filters are finding applications in mobile-phone base stations and radio astronomy.

In addition to the applications, there are several other positive things that have arisen from the discovery of high-Tc superconductivity, including huge developments in techniques for the microscopic investigation of materials. For example, angle-resolved photo-electron spectroscopy (ARPES) has allowed us to “see” the energies of occupied electron states in ever-finer detail, while neutron scattering is the ideal tool with which to reveal the magnetic properties of copper ions. The advent of high-Tc superconductors has also revealed that the theoretical model of weakly interacting electrons, which works so well in simple metals, needs to be extended. In cuprates and many other materials investigated in the last quarter of a century, we have found that the electrons cannot be treated as a gas of almost independent particles.

The result has been new theoretical approaches and also new “emergent” phenomena that cannot be predicted from first principles, with unconventional superconductivity being just one example. Other products of this research programme include the fractional quantum Hall effect, in which entities made of electrons have a fractional charge; “heavy fermion” metals, where the electrons are effectively 100 times heavier than normal; and “non-Fermi” liquids in which electrons do not behave like independent particles. So is superconductivity growing old after 100 years? In a numerical sense, perhaps – but quantum mechanics is even older if we measure from Planck’s first introduction of his famous constant, yet both are continuing to spring new surprises (and are strongly linked together). Long may this continue!

The BCS theory of superconductivity

Although superconductivity was observed for the first time in 1911, there was no microscopic theory of the phenomenon until 1957, when John Bardeen, Leon Cooper and Robert Schrieffer made a breakthrough. Their “BCS” theory – which describes low-temperature superconductivity, though it requires modification to describe high-Tc – has several components. One is the idea that electrons can be paired up by a weak interaction, a phenomenon now known as Cooper pairing. Another is that the “glue” that holds electron pairs together, despite their Coulomb repulsion, stems from the interaction of electrons with the crystal lattice – as described by Bardeen and another physicist, David Pines, in 1955. A simple way to think of this interaction is that an electron attracts the positively charged lattice and slightly deforms it, thus making a potential well for another electron. This is rather like two sleepers on a soft mattress, who each roll into the depression created by the other. It is this deforming response that caused Bill McMillan to propose in 1968 that there should be a maximum possible Tc: if the electron–lattice interaction is too strong, the crystal may deform to a new structure instead of becoming superconducting.

The third component of BCS theory is the idea that all the pairs of electrons are condensed into the same quantum state as each other – like the photons in a coherent laser beam, or the atoms in a Bose–Einstein condensate. This is possible even though individual electrons are fermions and cannot exist in the same state as each other, as described by the Pauli exclusion principle. This is because pairs of electrons behave somewhat like bosons, to which the exclusion principle does not apply. The wavefunction incorporating this idea was worked out by Schrieffer (then a graduate student) while he was sitting in a New York subway car.

Breaking up one of these electron pairs requires a minimum amount of energy, Δ, per electron. At non-zero temperatures, pairs are constantly being broken up by thermal excitations. The pairs then re-form, but when they do so they can only rejoin the state occupied by the unbroken pairs. Unless the temperature is very close to Tc (or, of course, above it) there is always a macroscopic number of unbroken pairs, and so thermal excitations do not change the quantum state of the condensate. It is this stability that leads to non-decaying supercurrents and to superconductivity. Below Tc, the chances of all pairs getting broken at the same time are about as low as the chances that a lump of solid will jump in the air because all the atoms inside it are, coincidentally, vibrating in the same direction. In this way, the BCS theory successfully accounted for the behaviour of “conventional” low-temperature superconductors such as mercury and tin.
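To put numbers on this, weak-coupling BCS theory ties the zero-temperature pair-breaking energy to the transition temperature through the well-known relation

2\Delta(0) \approx 3.5\,k_B T_c,

so for a conventional superconductor with Tc = 10 K the full gap 2Δ(0) is only about 3 meV – tiny compared with typical electronic energies of order 1 eV.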

It was soon realized that BCS theory can be generalized. For instance, the pairs may be held together by a different interaction than that between electrons and a lattice, and two fermions in a pair may have a mutual angular momentum, so that their wavefunction varies with direction – unlike the spherically symmetric, zero-angular-momentum pairs considered by BCS. Materials with such pairings would be described as “unconventional superconductors”. However, there is one aspect of superconductivity theory that has remained unchanged since BCS: we do not know of any fermion superconductor without pairs of some kind.

The amazing perovskite family

Perovskites are crystals that have long been familiar to inorganic chemists and mineralogists in contexts other than superconductivity. Perovskite materials containing titanium and zirconium, for example, are used as ultrasonic transducers, while others containing manganese exhibit very strong magnetic-field effects on their electrical resistance (“colossal magnetoresistance”). One of the simplest perovskites, strontium titanate (SrTiO3), is shown in the top image (right). In this material, Ti4+ ions (blue) are separated by O2– ions (red) at the corners of an octahedron, with Sr2+ ions (green) filling the gaps and balancing the charge.

Bednorz and Müller (B&M) chose to investigate perovskite-type oxides (a few of which are conducting) because of a phenomenon called the Jahn–Teller effect, which they believed might provide an increased interaction between the electrons and the crystal lattice. In 1937 Hermann Arthur Jahn and Edward Teller predicted that if there is a degenerate partially occupied electron state in a symmetrical environment, then the surroundings (in this case the octahedron of oxygen ions around copper) would spontaneously distort to remove the degeneracy and lower the energy. However, most recent work indicates that the electron–lattice interaction is not the main driver of superconductivity in cuprates – in which case the Jahn–Teller theory was only useful because it led B&M towards these materials!

The most important structural feature of the cuprate perovskites, as far as superconductivity is concerned, is the existence of copper-oxide layers, in which copper ions in a square array are separated by oxygen ions. These layers are the location of the superconducting carriers, which must be created by varying the content of oxygen or of one of the other constituents – “doping” the material. We can see how this works most simply in B&M’s original compound, which was La2CuO4 doped with Ba to give La2–xBaxCuO4 (x ~ 0.15 gives the highest Tc). In ionic compounds lanthanum forms La3+ ions, so in La2CuO4 the ionic charges all balance if the copper and oxygen ions are in their usual Cu2+ (as in the familiar copper sulphate, CuSO4) and O2– states. La2CuO4 is insulating even though each Cu2+ ion has an unpaired electron, because these electrons do not contribute to electrical conductivity: their strong mutual repulsion keeps them localized, one to each copper site, and their spins line up antiparallel in an antiferromagnetic state. If barium is incorporated, it forms Ba2+ ions, so the copper and oxygen ions can no longer all have their usual charges: the material becomes “hole-doped”, the antiferromagnetic ordering is destroyed and the material becomes both a conductor and a superconductor. YBa2Cu3O7–δ or “YBCO” (bottom right) behaves similarly, except that there are two types of copper ion, inside and outside the CuO2 planes, and the doping is carried out by varying the oxygen content. This material contains Y3+ (yellow), Ba2+ (purple), copper (blue) and oxygen (red) ions. When δ ~ 0.03, the hole-doping gives a maximum Tc; when δ is increased above ~0.7, YBCO becomes insulating and antiferromagnetic.
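The hole-counting here is simple arithmetic. In undoped La2CuO4 the ionic charges balance exactly:

2(+3) + (+2) + 4(-2) = 0,

corresponding to La3+, Cu2+ and O2–. Replacing a fraction x of the La3+ ions with Ba2+ in La2–xBaxCuO4 removes x units of positive charge per formula unit, which the CuO2 planes make up by surrendering x electrons – that is, by acquiring x holes. At the optimal composition quoted above, x ~ 0.15, this amounts to roughly 0.15 holes per copper ion.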

Taking the multiverse on faith

The Grand Design begins with a series of questions: “How can we understand the world in which we find ourselves?”, “How does the universe behave?”, “What is the nature of reality?”, “Where did all this come from?” and “Did the universe need a creator?”. As the book’s authors, Stephen Hawking and Leonard Mlodinow, point out, “almost all of us worry about [these questions] some of the time”, and over the millennia, philosophers have worried about them a great deal. Yet after opening their book with an entertaining history of philosophers’ takes on these fundamental questions, Hawking and Mlodinow go on to state provocatively that philosophy is dead: since philosophers have not kept up with the advances of modern science, it is now scientists who must address these large questions.

Much of the rest of the book is therefore devoted to a description of the authors’ own philosophy, an interpretation of the world that they call “model-dependent realism”. They argue that different models of the universe can be constructed using mathematics and tested experimentally, but that no one model can be claimed as a true description of reality. This idea is not new; indeed, the Irish philosopher and bishop George Berkeley hinted at it in the 18th century. However, Hawking and Mlodinow take Berkeley’s idea to extremes by claiming that since many models of nature can exist that describe the experimental data equally well, such models are therefore equally valid.

It is important to the argument of the book – which leads eventually to more exotic models such as M-theory and the multiverse – that readers accept the premise of model-dependent realism. However, the history of science shows that the premise of one model being as good and useful as another is not always correct. Paradigms shift because a new model not only fits the current observational data as well as (or better than) an older model, but also makes predictions that fit new data that cannot be explained by the older model. Hawking and Mlodinow’s assertion that “there is no picture- or theory-independent concept of reality” thus flies in the face of one of the basic tenets of the scientific method.

Consider the Ptolemaic model of the solar system, in which the planets move in circular orbits around the Earth, and the heliocentric model put forward by Copernicus. The authors suggest that the two models can be made to fit the astronomical data equally well, but that the heliocentric model is a simpler and more convenient one to use. Yet this does not make them equivalent. New data differentiated them: Galileo’s observation of the phases of Venus, through his telescope, cannot easily be explained in Ptolemy’s Earth-centred system. Similarly, Einstein’s theory of gravity superseded Newton’s laws of gravitation when its equations correctly described Mercury’s anomalous orbit. One theory, one perception of reality, is not just as good as another, and this can be shown empirically: Einstein’s gravity is even used to make corrections to Newton’s in the Global Positioning System.

It is true, however, that the situation in quantum mechanics has not yet been resolved. Several different models, such as the “many worlds” interpretation of Hugh Everett III, the Copenhagen interpretation and certain Bohmian hidden-variable models, all agree with quantum-mechanical experiments, and as yet none of the interpretations has produced a prediction that would experimentally differentiate them. Based on the history of science, however, we have no reason to assume that in the future there will not be a decisive experiment that will support one model over the others.

A second premise that the reader is expected to accept as The Grand Design moves along is that we can, and should, apply quantum physics to the macroscopic world. To support this premise, Hawking and Mlodinow cite Feynman’s probabilistic interpretation of quantum mechanics, which is based on his “sum over histories” of particles. Basic to this interpretation is the idea that a particle can take every possible path connecting two points. Extrapolating hugely, the authors then apply Feynman’s formulation of quantum mechanics to the whole universe: they announce that the universe does not have a single history, but every possible history, each one with its own probability.

This statement effectively wipes out the widely accepted classical model of the large-scale structure of the universe, beginning with the Big Bang. It also leads to the idea that there are many possible, causally disconnected universes, each with its own different physical laws, and we occupy a special one that is compatible with our existence and our ability to observe it. Thus, in one fell swoop the authors embrace both the “multiverse” and the “anthropic principle” – two controversial notions that are more philosophic than scientific, and likely can never be verified or falsified.

Another key component of The Grand Design is the quest for the so-called theory of everything. When Hawking became Lucasian Professor of Mathematics at Cambridge University – the chair held by, among others, Newton and Paul Dirac – he gave an inaugural speech claiming that we were close to “the end of physics”. Within 20 years, he said, physicists would succeed in unifying the forces of nature, and unifying general relativity with quantum mechanics. He proposed that this would be achieved through supergravity and its relation, string theory. Only technical problems, he stated, meant that we were not yet able to prove that supergravity solved the problem of how to make quantum-gravity calculations finite.

But that was in 1979, and Hawking’s vision of that theory of everything is still in limbo. Underlying his favoured “supergravity” model is the postulate that, in addition to the known elementary particles of particle physics, there exist superpartners, which differ from the known particles by half a unit of quantum spin. None of these superpartners has been detected to date in high-energy accelerator experiments, including those recently carried out at the Large Hadron Collider at CERN. Yet despite this, Hawking has not given up on a theory of everything – or has he?

After an entertaining description of the Standard Model of particle physics and various attempts at unification, Hawking and his co-author conclude that there is indeed a true theory of everything, and its name is “M-theory”. Of course, no-one knows what the “M” in M-theory stands for, although “master”, “miracle” and “mystery” have been suggested. Nor can anyone convincingly describe M-theory, except that it supposedly exists in 11 dimensions and contains string theory in 10 dimensions. A problem from the outset with this incomplete theory is that one must hide, or compactify, the extra seven dimensions in order to yield the three spatial dimensions and one time dimension that we inhabit. There is a possibly infinite number of ways to perform this technical feat. As a result of this, there is a “landscape” of possible solutions to M-theory, 10^500 by one count, which for all practical purposes also approaches infinity.

That near-infinity of solutions might be seen by some as a flaw in M-theory, but Hawking and Mlodinow seize upon this controversial aspect of it to claim that “the physicist’s traditional expectation of a single theory of nature is untenable, and there exists no single formulation”. Even more dramatically, they state that “the original hope of physicists to produce a single theory explaining the apparent laws of our universe as the unique possible consequence of a few simple assumptions has to be abandoned”. Still, the old dream persists, albeit in a modified form. The difference, as Hawking and Mlodinow assert pointedly, is that M-theory is not one theory, but a network of many theories.

Apparently unconcerned that theorists have not yet succeeded in explaining M-theory, and that it has not been possible to test it, the authors conclude by declaring that they have formulated a cosmology based on it and on Hawking’s idea that the early universe is a 4D sphere without a beginning or an end (the “no-boundary theory”). This cosmology is the “grand design” of the title, and one of its predictions is that gravity causes the universe to create itself spontaneously from nothing. This somehow explains why we exist. At this point, Hawking and Mlodinow venture into religious controversy, proclaiming that “it is not necessary to invoke God to light the blue touch paper and set the universe going”.

Near the end of the book, the authors claim that for a theory of quantum gravity to predict finite quantities, it must possess supersymmetry between the forces and matter. They go on to say that since M-theory is the most general supersymmetric theory of gravity, it is the only candidate for a complete theory of the universe. Since there is no other consistent model, then we must be part of the universe described by M-theory. Early in the book, the authors state that an acceptable model of nature must agree with experimental data and make predictions that can be tested. However, none of the claims about their “grand design” – or M-theory or the multiverse – fulfils these demands. This makes the final claim of the book – “If the theory is confirmed by observation, it will be the successful conclusion of a search going back 3000 years” – mere hyperbole. With The Grand Design, Hawking has again, as in his inaugural Lucasian Professor speech, made excessive claims for the future of physics, which as before remain to be substantiated.

Icons of progress

Celebrating 100 years of superconductivity (courtesy: IBM)

By Michael Banks

From the 100th anniversary of Marie Curie’s Nobel Prize for Chemistry to 100 years since Ernest Rutherford proposed his model of the atom, 2011 marks a whole host of centenaries.

This year is also the 100th anniversary of the discovery of superconductivity — the phenomenon where the electrical resistance of a material drops to exactly zero — by experimental physicist Heike Kamerlingh Onnes in 1911.

But that is not superconductivity’s only anniversary. This month also marks 25 years since the discovery of high-temperature superconductivity by physicists Georg Bednorz and Alex Müller, who were then both working at the IBM Research Laboratory in Zurich.

In 1986 Bednorz and Müller discovered that the electrical resistance of a material made from lanthanum, barium, copper and oxygen (LaBaCuO) — known as a cuprate — fell abruptly to zero when cooled below a temperature of 35 K. Their discovery opened the door to potentially higher superconducting temperatures, earning the duo the 1987 Nobel Prize for Physics.

To celebrate that feat, high-temperature superconductors have made it into IBM’s “100 icons of progress” — a list of 100 breakthroughs that have been carried out at IBM’s research centres around the globe. The list celebrates — yep, you guessed it — IBM’s centenary this year, and has already featured 43 “icons” such as the floppy disk and the scanning tunnelling microscope.

Superconductivity was added to the list yesterday to mark 25 years since Bednorz and Müller submitted their paper to Zeitschrift für Physik (the paper was published in June 1986) and IBM will be adding more icons throughout the year.

Yet if you have still not had enough of all things superconductivity, make sure you celebrate the centenary by enjoying a free download of the April 2011 issue of Physics World, which is packed full of articles on the subject.

You may also want to watch a video interview with Laura H Greene of the University of Illinois Urbana-Champaign about what comes next after the cuprates.

Cooling with heat

A quantum system can be cooled with a blast of hot incoherent light. That’s the surprising conclusion of theoretical physicists in Germany who have shown that the rate of cooling can sometimes be increased by putting a system in contact with a hot entity. The scheme – which has not been tested in the lab – could offer a simple way of cooling quantum devices.

Since the 1980s physicists have been cooling gases of atoms using coherent laser light. This method works by having atoms absorb and emit photons in such a way that they gradually lose momentum. The technique only works if the light is coherent – if it isn’t, the light simply heats up the gas.

But now Jens Eisert and Andrea Mari of the Freie Universität Berlin have come up with a way of using incoherent light to cool a quantum system. Their system is a mechanical quantum oscillator coupled to two optical modes – although Eisert stresses that the scheme can be applied to a wide range of three-mode quantum systems.

Hot and cold modes

The process begins with the mechanical oscillator in a high-energy or hot state. One of the optical modes is cold, which means that energy can potentially flow from the oscillator to the cold mode – cooling the oscillator.

The second optical mode is hot, meaning that it contains a large number of incoherent photons and is subject to thermal fluctuations. According to Eisert and Mari’s calculations, this hot mode has two effects on the temperature of the mechanical oscillator. One effect is obvious: the hot mode heats the oscillator. The second, unexpected, effect is that fluctuations in the hot mode increase the rate at which energy is transferred from the oscillator to the cold mode. The key to a practical application of the technique is to ensure that the latter effect is dominant.
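One crude way to see how these two competing effects play out is a toy rate-equation model. The sketch below is purely illustrative – it is not Eisert and Mari’s calculation, and the function names and parameter values are invented – but it encodes the statement above: the oscillator-to-cold-mode transfer rate is assumed to grow with the hot-mode occupation, while the hot mode also heats the oscillator directly.

# Toy rate-equation sketch: NOT the model of Eisert and Mari, just an
# illustration of the competition described above. The transfer rate to the
# cold mode is assumed (hypothetically) to grow with the hot-mode occupation,
# while the hot mode also heats the oscillator directly.
import math

def occupation(t, n0, n_cold, n_hot, gamma_cold=1.0, gamma_hot=0.02, enhancement=1.0):
    """Mean phonon number n_m(t) of the oscillator for the linear toy model
    dn_m/dt = -g_c*(n_m - n_cold) - gamma_hot*(n_m - n_hot),
    with g_c = gamma_cold*(1 + enhancement*n_hot)."""
    g_c = gamma_cold * (1.0 + enhancement * n_hot)      # fluctuation-enhanced cooling rate
    g_tot = g_c + gamma_hot
    n_ss = (g_c * n_cold + gamma_hot * n_hot) / g_tot   # steady-state occupation
    return n_ss + (n0 - n_ss) * math.exp(-g_tot * t)    # exponential relaxation

# Start the oscillator at n_m = 50 and compare a cold auxiliary mode (n_hot = 0)
# with a very hot one (n_hot = 100): the hot case relaxes far faster, at the
# price of a slightly higher final occupation.
for n_hot in (0.0, 100.0):
    print(n_hot, [round(occupation(t, 50.0, 0.1, n_hot), 3) for t in (0.1, 1.0, 5.0)])

With the numbers above, the very hot auxiliary mode drives the oscillator down to about 0.12 quanta in a small fraction of the time the cold-only case needs, at the cost of a slightly higher final occupation – a cartoon of the competition described in the paragraph above, not a substitute for the full quantum calculation.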

Eisert says that the system is similar to a transistor, whereby the application of heat at the hot optical mode results in a proportional increase in the flow of heat from the mechanical oscillator to the cold mode.

According to Eisert several experimental groups are now working on realizing the system in the lab. Possible applications of the effect include cooling quantum devices such as atomic clocks or tiny mechanical resonators using incoherent light from relatively inexpensive LEDs.

The calculations are described in arXiv:1104.0260.

Ultracold atoms simulate magnetic interactions

Magnetic interactions usually seen deep within solid materials have been simulated for the first time using ultracold atoms. By “tilting” a 1D chain of atoms, physicists in the US have created a structure that resembles a quantum antiferromagnet. The researchers say that the technique could be extended to 2D lattices of atoms and could provide important insights into the nature of magnetism.

For the past decade or so, physicists have been using lasers and magnetic fields to trap collections of atoms and cool them to temperatures near absolute zero. By carefully manipulating the laser light and magnetic fields, researchers can control the interactions between atoms – allowing them to simulate interactions that occur between electrons or ions in solid materials. But unlike in solids, the strength of these interactions can be easily adjusted, allowing physicists to test theories of condensed-matter physics in these “quantum simulators”.

In this latest simulation, Markus Greiner and colleagues at Harvard University begin with a cloud of rubidium-87 atoms that they cool to less than 100 pK. Then they fire a laser at a holographic mask to create a 1D optical lattice, which is a string of about 100 alternating bright and dark regions repeating every 680 nm. The atoms arrange themselves at regular distances with each dark region – or “well” – containing exactly one atom. Although an atom could tunnel through the light region to join its neighbour, it is discouraged from doing so by the high energy cost of two atoms occupying one well. This state of matter is known as a Mott insulator and has already been studied extensively using ultracold atoms.

Paramagnetic state

An atom occupying its original well can be thought of as a magnetic moment pointing “up” and an atom that has tunnelled to the neighbouring well as a moment pointing “down”. A Mott insulator, therefore, is an analogue of a paramagnetic state in which an external magnetic field forces all of the magnetic moments in a material to point in the same up direction.

The team then “tilt” the lattice by applying a magnetic field gradient to the atoms, which encourages the atoms to move in one direction along the string. If the tilt energy is greater than the interaction energy, then every second atom tunnels into its neighbour’s well – resulting in wells that alternately contain zero and two atoms. According to Greiner, this can be mapped directly onto a 1D antiferromagnet – an arrangement of spins that point up and down in an alternating pattern.

By adjusting the relative strength of the tilt and interaction energies, the team mapped out the transition from the paramagnetic to antiferromagnetic phases. This occurs via a quantum phase transition when the tilt and tunnelling energies are equal. At the transition point, quantum fluctuations in the positions of the atoms grow rapidly, driving the system from one configuration to another. Such quantum fluctuations correspond to individual spins simultaneously pointing in different directions – a quantum superposition. This situation could never occur in the world of classical physics, but gives rise to fascinating properties of quantum magnets.

Ising on the cake

The effective interaction between up and down moments depends on whether a well’s two nearest neighbours are occupied. As a result, Greiner and his team have simulated the “quantum Ising model” – which provides a simple yet useful explanation of magnetism.
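For reference, the quantum Ising model mentioned here is, in its simplest textbook form, a chain of spins with the Hamiltonian

H = J\sum_i \sigma^z_i \sigma^z_{i+1} - h\sum_i \sigma^x_i,

where J > 0 favours neighbouring moments pointing in opposite directions (the antiferromagnetic order above) and the transverse field h flips individual moments, playing the role of the quantum fluctuations; the experimental mapping involves additional terms and constraints not discussed in this article.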

Greiner told physicsworld.com that the team’s next step is to look at excitations in the system. “We can flip a single moment and see what happens,” he said. Gaining a better understanding of such excitations could provide important insights into real magnetic materials.

The group also plans to simulate antiferromagnetism in 2D by placing ultracold atoms in a 2D optical lattice. By studying a triangular lattice, for example, the team could gain insight into “frustrated” magnetic systems, in which a moment cannot minimize its energy with respect to all of its nearest neighbours.

The research is reported in Nature 10.1038/nature09994.

Mystery of the riderless bike thickens

By James Dacey

The riderless bike is a fairly well known quirk of mechanics. As the name suggests, it refers to the fact that regular bicycles can keep going by themselves for long distances without toppling over. Indeed, the surreal image of a riderless bike inspired this brilliant scene in Jour de Fête, a black and white French comedy from 1949.

But a new bike created by researchers in the US and the Netherlands has cast doubt on our understanding of what causes this effect.

The phenomenon of bicycle self-stability was first described analytically in 1897 by the French mathematician Emmanuel Carvallo, and since then many other scientists have contributed their two pennies’ worth.

While it quickly became clear that the mechanics behind the effect are not as simple as one might think, most researchers agree that the stability is due to two features. First, there is gyroscopic motion, which causes the front wheel to correct itself like a spinning top. Second, there is the “trail” or “caster” effect, which also explains why the wheels of a shopping trolley automatically swing round to trail behind their pivots.

A team including Andy Ruina at Cornell University has created a bike that self-balances without relying on these forces – the first of its kind. The researchers published their findings in this week’s edition of Science.

Ruina told the Science podcast that the balancing must still be related to a mechanical effect that couples the forces involved in bike-leaning to its steering. While the bike currently looks more like a child’s scooter, Ruina sees no reason why it could not be rearranged to appear more like a familiar motorcycle or bike.

To see the bike in action, follow this link.

Brightest bubble bursting yet

Physicists in the US claim to have broken the record for the brightness of light generated by “sonoluminescence”, the imploding of a bubble when it is blasted with sound waves. With a peak power of 100 W, the light is 100 times as bright as seen in previous sonoluminescence experiments, and may help scientists understand how the strange phenomenon works.

Sonoluminescence was discovered in the first half of the 20th century but it was only in the 1990s that physicists began to investigate the phenomenon seriously. Although no-one is sure how it works, the basic idea is that sound waves are fed into a vessel containing one or more bubbles inside a liquid. The sound causes the bubbles to expand momentarily before water pressure takes over, imploding the bubbles in bursts of heat and light.

In many sonoluminescence experiments, the power of the generated light flash is just a few milliwatts. However, in 2004 Alan Walton and other physicists at the University of Cambridge subjected bubbles in a liquid column to vertical vibrations and produced flashes of light that peaked at 1 W. But now, a group led by Seth Putterman at the University of California, Los Angeles, has devised a new variation on the method to break that record and generate light that is 100 times as bright.

Shocking technique

In the California group’s experiment, the researchers fill a steel cylinder with phosphoric acid and position it almost a centimetre above a steel base. Using a needle at the bottom they inject a 1 mm-sized bubble of xenon into the tube and let it float towards the top. When the bubble reaches a height of 11 cm, the researchers let the cylinder drop and the resultant shock collapses the bubble in a brief flash. Analysis shows that this flash has a peak intensity of 100 W and a temperature of 10,200 K.

According to Putterman, this “one shot” method is more controllable than previous methods and should therefore offer a way of producing even hotter, more powerful bubble collapses. It might also be a route to understanding sonoluminescence.

“Why does a diffuse sound field focus its energy density by such large factors to create sonoluminescence? In some set-ups this factor can reach one trillion,” asks Putterman, who believes that nonlinear processes are responsible. “We want to learn about these nonlinear processes and see if they can be generalized to other cases.”

No signs of fusion

In the past, sonoluminescence has proved a controversial subject. In 2002 Rusi Taleyarkhan, then at the Oak Ridge National Laboratory in Tennessee, US, and colleagues claimed to find evidence for nuclear fusion occurring alongside sonoluminescence. Although a few other research groups have since made similar claims, most nuclear scientists believe them to be misguided. Taleyarkhan has since moved to Purdue University, where in 2008 he was reprimanded by the university for “research misconduct” related to a paper on fusion.

Yet Putterman does not rule out the possibility of so-called bubble fusion. “This experiment is an important step in upscaling sonoluminescence with controlled bubble contents, but it has not yet yielded any sign of fusion,” he says. “The interior of the bubble would need to reach solid densities and temperatures greater than 10 million kelvin.”

Walton, at Cambridge’s Cavendish Laboratory, thinks that there is “little real prospect” for fusion in Putterman and colleagues’ experiment, but he does praise the general advance. “It will undoubtedly be of real use in making fundamental studies of the nature of sonoluminescence,” he says.

The research will be published soon in Physical Review E.

Tragic death of US physics student

By James Dacey

I was shocked to hear today about the tragic death of a physics student who was killed earlier this week in a machine shop at Yale University in the US.

Michele Dufault, just 22 years old, died on Tuesday night after her hair got caught in a lathe as she worked late on a project in one of the university’s chemistry laboratories. In a statement issued on Wednesday, Yale’s president, Richard Levin, said that her body was found by other students who had been working in the building.

On Wednesday evening, the university held a memorial for Dufault at which friends and classmates were invited to light candles and offer words of comfort to each other.

The university said that Dufault was pursuing a B.S. in astronomy and physics, and that she intended to undertake work in oceanography after graduation. “By all reports, Michele was an exceptional young woman, an outstanding student and young scientist, a dear friend and a vibrant member of this community,” said Levin.

According to a report in the New York Times, Dufault died while carrying out experimental work for her thesis: investigating the possible use of liquid helium for detecting dark matter particles. A lathe is a machine tool that shapes metal and other hard materials by spinning the workpiece at high speed against a cutting tool.

Levin said that the safety of students is a paramount concern and that the university has programmes to train students before they use power equipment. He also confirmed that he has ordered a thorough review of all the university’s facilities that contain power equipment operated by undergraduates.
