
The planet hunter

In a special audio interview (below), Michael Banks catches up with astronomer Alan Boss from the Carnegie Institution in Washington to talk about the hunt for a second Earth. Indeed, Boss thinks that Kepler may already have spotted, among its 1235 candidates, a planet with a mass similar to Earth's that orbits at a distance from its star where life could flourish. “Kepler has a couple of candidates. But we have to wait three or four years to see enough transits to make it believable,” says Boss. “The enticing thing about Kepler is that we know it has the precision to find Earth-like planets.”

Kepler is also teaching us more about the myriad of stars in the universe, and Boss says that the mission may even spot the first moon outside our solar system. “Finding a moon would be a bit difficult, although one would hope that Kepler could do it,” says the 60-year-old astronomer. “As Kepler can see a lot of gas giant planets, if one of those has an Earth-sized moon to it then it could potentially be seen in the Kepler data.”

Not resting on his laurels, Boss is already planning for the mission after Kepler, which would begin examining our nearest Earth-like planets, and he is calling for funding to start building such a probe. “Once Kepler shows us that Earth-sized planets are common then there will be more pressure from scientists and the public to say ‘why don’t we go out there and study those that are close’,” says Boss. “Scientists know how to do this and they just need the money.”

Planet hunting

Boss is the author of two books – Looking for Earths: The Race to Find New Solar Systems and The Crowded Universe: The Search for Living Planets. He also wrote a feature on extrasolar planets for the March 2009 issue of Physics World, which marked the 2009 International Year of Astronomy.

Once a physicist: Rob Cook

Why did you decide to study physics?

I became interested in it in high school when I read a book on relativity. I thought it was the most fascinating thing around, and I was hooked.

How did you get into computer graphics?

After I graduated from Duke University in 1973, I was not sure what I wanted to do. However, I had learned to program computers as part of a lab course, so I found a job at the Digital Equipment Corporation in Massachusetts. There was one person there who was doing computer graphics, but he was actually more interested in medical databases, so I said I would do graphics instead. After I got into it, I thought "This is great, this is what I want to do", so I went to Cornell University to get a Master's in computer graphics.

How did you get involved in film?

At that time, images that were made using computer graphics looked really artificial, like plastic, and nobody knew why. It turned out that the model they were using for light reflecting off surfaces was just something someone had made up – it was not based on physics at all. So for my thesis, I used a different model that included the physics of how light reflects off surfaces. The results looked really good: I was able to simulate particular types of materials and really get control over the appearance of the surface. That caught the attention of Lucasfilm, which was just setting up a computer-graphics division, and it hired me.

What inspired you to develop RenderMan?

When you look around, you notice that most things are not just made of one material such as bronze or ivory. They are more complex than that: they have multiple materials, they are beaten up, they have scratches. We needed to give artists control over those surface appearances, so I worked on something called programmable shading that uses equations to describe how a surface looks, but also builds a framework over them to allow artists to make really complex, rich surfaces. That is at the heart of what we do with RenderMan, and over the last 16 years, every film nominated for visual effects at the Academy Awards has used it.

How has your training in physics helped you?

Aside from my thesis work, it also helped when we were developing RenderMan. In computer graphics, you have a virtual camera looking at a virtual world, and for special effects you want to match this with live-action footage. But for it to look convincingly real, you have to get the characteristics of your virtual camera to match those of the physical camera. That turns out to be hard for a number of reasons. One is something called "motion blur": when a physical camera takes a picture, it opens the shutter and a certain amount of time goes by before it closes. During that time, things move, and this causes the image to blur. This blur turns out to be really important for making the motion look smooth, so you have to simulate it in the renderer.

Another thing you have to simulate is the aperture of the lens – the light is not entering the camera in one spot, but all over the lens, and that gives you depth of field. You need to simulate both blur and lens effects, but that means that not only are you integrating the scene around each pixel, you also have to integrate that pixel over time and over the lens and over other things. You end up with this incredibly complex integral, and it turns out that there is a technique in physics called Monte Carlo integration that is perfectly suited to dealing with it.
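
(As a rough illustration of the technique Cook describes – a minimal sketch rather than anything resembling RenderMan's actual code – a Monte Carlo renderer estimates each pixel by averaging random samples taken over the pixel area, the shutter interval and the lens aperture. The scene function below is entirely made up.)

import random

def radiance(x, y, t, lens_u, lens_v):
    # Stand-in for the real scene evaluation: returns the light arriving at
    # film position (x, y) at time t through lens point (lens_u, lens_v).
    # A real renderer would trace a ray into the moving scene here.
    return 0.5 + 0.1 * (x * lens_u - y * lens_v) * t   # hypothetical toy function

def estimate_pixel(px, py, shutter_open, shutter_close, n_samples=256):
    # Monte Carlo estimate of one pixel, integrating over the pixel area,
    # the shutter time (motion blur) and the lens aperture (depth of field).
    total = 0.0
    for _ in range(n_samples):
        x = px + random.random()                            # jitter within the pixel
        y = py + random.random()
        t = random.uniform(shutter_open, shutter_close)     # sample the shutter interval
        lens_u, lens_v = random.random(), random.random()   # sample the lens aperture
        total += radiance(x, y, t, lens_u, lens_v)
    return total / n_samples                                # the average estimates the integral

print(estimate_pixel(10, 20, 0.0, 1.0))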

However, none of this stuff was in the undergraduate curriculum – I had to learn it on my own later. What physics really taught me was how to think about things in a creative and rigorous way. It taught me how to think about hard problems.

Are there other ways that physics is used in animation?

At Pixar, we use physics a lot to simulate the motion of complex things like clothes or hair. But you have to remember that in the animated world, animators do wacky things. They exaggerate enormously in order to tell the story, so the physics has to be reworked so that it can apply to this very non-physical cartoon world. For example, in Monsters Inc there is a scene where Mike is training Sulley to scare twins in a bunk bed. As Sulley is popping from the top to the bottom, there are several g's worth of force on his hair! In the animated world, you can do that, because it looks better. These are the sorts of effects that make us love animation.

What would you like to be able to do with computer graphics that you cannot do now?

One thing that remains a challenge is a simulated human. We have come a long way, but it is hard because our brains are wired to perceive other humans. If the simulated person looks cartoony, we are willing to accept it not being perfect, but as things get more realistic, a part of our brain that responds to people gets involved and then we are very picky about what we see: if things look a little off, they look creepy. That is a challenge, but I think we are almost there.

Any advice for today's physics students?

I always advise people to do something they really love because you are likely to be better at it and you are going to spend a lot of time doing it, so it should be something you genuinely enjoy. I think it is a mistake to decide "I'm going to go into this even though I don't really like it that much because I think it's going to be a good career". It is your life, and you want to spend it doing something you love.

Plants like we have never seen them before

Gliese 667 is one of two multiple star systems known to host planets below 10 Earth masses. (Courtesy: ESO/L Calçada)

By Tushna Commissariat

If you have thought about planets with two or more suns ever since you saw the dual suns of Tatooine in the first Star Wars film, it looks like you are on the same wavelength as some astrobiologists. Jack O'Malley-James, a PhD student at the University of St Andrews, Scotland, has been studying what kind of habitats would exist on Earth-like planets orbiting binary or multiple star systems. He shared his results with peers at the RAS National Astronomy Meeting in Llandudno, Wales, on Tuesday 19 April.

O'Malley-James and his team have been running simulations of planets orbiting multiple star systems and trying to understand the kind of vegetation that might flourish there, depending on the type of stars in the system. Energy from photosynthesis is the foundation for the majority of life on Earth, and so it is natural to look for the possibility of photosynthetic processes occurring elsewhere.

With different types of stars occurring in the same system, there would be different spectral sources of light shining on the same planet. Because of this, plants may evolve that photosynthesize using all types of light, or different plants may specialize in specific spectral types. The latter would seem more plausible for plants exposed to one particular star for long periods, say the researchers.

Their simulations suggest that planets in multi-star systems may host exotic forms of the plants we see on Earth. “Plants with dim red dwarf suns, for example, may appear black to our eyes, absorbing across the entire visible wavelength range in order to use as much of the available light as possible,” says O'Malley-James. He also believes the plants may be able to use infrared or ultraviolet radiation to drive photosynthesis.

The team simulated combinations of G-type stars (yellow stars like our Sun) and M-type stars (red-dwarf stars), with a planet identical to Earth, in a stable orbit around the system, within its habitable “Goldilocks zone”. This was because Sun-like stars are known to host exoplanets and red dwarfs are the most common type of star in our galaxy, often found in multi-star systems, and are old and stable enough for life to have evolved.

While the binary systems were not exact copies of any particular observed systems, plenty of M–G star binary systems exist within our own galaxy. O’Malley-James calculated the maximum amount of light per unit area – referred to as the “peak photon flux density” – from each of the stars as seen on the planets for each set of simulations. This was compared to the peak photon flux density on Earth to determine whether Earth-like photosynthesis would occur.
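
(As a rough sketch of the kind of quantity involved – an illustrative estimate, not the team's actual calculation – the photon flux at a planet can be approximated from a star's luminosity, its distance and a typical photon energy, and then compared with the solar value at Earth. The red-dwarf numbers below are hypothetical.)

import math

H = 6.626e-34      # Planck constant (J s)
C = 3.0e8          # speed of light (m/s)

def photon_flux(luminosity_w, distance_m, typical_wavelength_m):
    # Approximate photon flux (photons per m^2 per s) at a given distance,
    # crudely treating all photons as having one typical wavelength.
    photon_energy = H * C / typical_wavelength_m                 # energy per photon (J)
    energy_flux = luminosity_w / (4 * math.pi * distance_m**2)   # W per m^2
    return energy_flux / photon_energy

AU = 1.496e11
sun_flux = photon_flux(3.8e26, AU, 550e-9)              # the Sun as seen from Earth
# Hypothetical red dwarf: ~2% of solar luminosity, typical wavelength ~1000 nm, planet at 0.1 AU
m_dwarf_flux = photon_flux(0.02 * 3.8e26, 0.1 * AU, 1000e-9)

print(f"Flux relative to Earth: {m_dwarf_flux / sun_flux:.2f}")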

Factors like star separation were taken into consideration to give the best possible scenario for photosynthesis. “We kept the stars as close to the planet as we could, so that there would be a useful photon flux from each one [star] on the planet's surface while still maintaining a stable planetary orbit and a habitable surface temperature,” says O’Malley-James.

Predicting crowd behaviour

For event organizers, predicting the highly complex dynamics within large crowds can be an unenviable task. But new computer-modelling research, which treats people as decision-makers rather than passive particles, could help authorities to identify where crowds could become dangerous.

Crowds display a wide variety of behaviours that arise spontaneously from the collective motion of unconnected individuals. For example, people walking in opposite directions along a single passageway tend automatically to divide up into distinct lanes. Then, as the density of pedestrians increases, this smooth motion starts to break down, eventually leading to highly fluctuating motion. On occasion, extreme crowd turbulence has led to fatal crushes, such as the tragic accident at the Love Parade festival in Duisburg, Germany, last year, which left 21 people dead.

To try and understand these behaviours, scientists usually employ a physics-based approach. Pedestrians can be modelled as solid particles that experience an attractive force towards their destination and repulsive forces from other pedestrians and from walls. However, according to Mehdi Moussaïd of the CNRS research centre for animal cognition in Toulouse, France, such physics-based models have a number of shortcomings. These include the ever-increasing complexity of the interaction functions needed to satisfy new data on crowd behaviour, and the limitations imposed by only ever considering interactions between two particles at any one time.

Seeking out empty space

In the new approach, developed by Moussaïd, Guy Theraulaz, also at the CNRS centre in Toulouse, and Dirk Helbing of the Swiss Federal Institute of Technology in Zurich, pedestrians instead alter their motion deliberately, and do so on the basis of what they see. Individuals work out the relative position of surrounding obstacles as they walk and use this information to modify their movement according to two simple principles. The first of these is to walk in the direction that provides the most direct obstacle-free path to the destination, and the second is to walk at a speed that allows a certain minimum time to react to potential collisions. As Moussaïd puts it, "physics-based modelling represents the tendency to move away from others, while our cognitive model represents the tendency to seek out empty space".
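
(The following is a minimal sketch of how such vision-based rules might look in code – loosely inspired by the two heuristics described above, and not the authors' actual implementation. The scoring weights, parameters and disc-shaped obstacles are arbitrary choices for illustration.)

import math

def first_collision_distance(position, direction, obstacles, horizon=10.0):
    # Distance to the nearest obstacle (modelled as discs) along a walking direction.
    nearest = horizon
    for (ox, oy, radius) in obstacles:
        rel = (ox - position[0], oy - position[1])
        along = rel[0] * direction[0] + rel[1] * direction[1]   # projection onto the direction
        if along <= 0:
            continue                                            # obstacle is behind the walker
        perp2 = rel[0]**2 + rel[1]**2 - along**2
        if perp2 < radius**2:
            nearest = min(nearest, along - math.sqrt(radius**2 - perp2))
    return max(nearest, 0.0)

def choose_velocity(position, goal, obstacles, preferred_speed=1.3, reaction_time=0.5):
    # Heuristic 1: pick the direction offering the most direct obstacle-free path to the goal.
    # Heuristic 2: cap the speed so that any collision is at least `reaction_time` away.
    goal_angle = math.atan2(goal[1] - position[1], goal[0] - position[0])
    best_dir, best_score = None, -1e9
    for offset_deg in range(-75, 76, 5):                        # scan the visual field
        angle = goal_angle + math.radians(offset_deg)
        direction = (math.cos(angle), math.sin(angle))
        d = first_collision_distance(position, direction, obstacles)
        score = d - 0.1 * abs(offset_deg)                       # favour clear, goal-directed paths
        if score > best_score:
            best_dir, best_score = direction, score
    d_ahead = first_collision_distance(position, best_dir, obstacles)
    speed = min(preferred_speed, d_ahead / reaction_time)
    return (best_dir[0] * speed, best_dir[1] * speed)

# Example: a walker at the origin heading towards (10, 0), with one obstacle in the way
print(choose_velocity((0.0, 0.0), (10.0, 0.0), [(3.0, 0.2, 0.5)]))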

To see how well their model stood up to empirical data, the researchers first tested it against laboratory observations of how two individual pedestrians avoid one another. They found that the predictions and data were in good agreement, both for the case in which the two pedestrians were moving in opposite directions and when one was stationary. Next, they tested the model against collective phenomena, and found that it correctly predicted the spontaneous dividing up of opposing flows. They also found that as they increased the density of pedestrians in their model, the resulting decline in the average speed of walkers was in line with real-life observations, carried out in Toulouse.

Increasing the density still further, Moussaïd and colleagues found that the model predicted the distinctive transitions to more disordered behaviour – stop-and-go waves and then turbulence. But because at these high densities people are in close contact with their neighbours and can be pushed and pulled about against their will, the researchers added a purely physical interaction term to their equations that kicked in once the density was high enough. With this extra term in place, the model was able to predict the extent of crushing around bottlenecks in passageways and the resulting stress releases that, say the researchers, can produce "earthquake-like" movements of many individuals in multiple directions. In particular, the researchers found that their results agreed closely with images of crowd turbulence that happened to be captured by a surveillance camera during the 2006 hajj pilgrimage to Mecca.

Foreseeing bottlenecks

According to Moussaïd, the new model could have a number of practical uses. One might be in the layout of sites for mass events, with the model predicting which bottlenecks, such as those around entrances and exits, could prove the most dangerous. Also, using the model to analyse real-time footage of crowd movements could give organizers vital minutes to try and restore order before the situation deteriorates. The researchers also point out that the visual basis of their model makes it particularly well suited to studying how people can be evacuated when there is reduced visibility, as would be the case in a smoke-filled room.

László Barabasi of Northeastern University in the US believes that the Franco-Swiss researchers "offer a compelling argument" that combining physical and cognitive elements within a single model "is an excellent new avenue to both individual and crowd modelling". He adds that it would be "particularly interesting to see if this paradigm can be extended to other aspects of human dynamics – from the timing of human interactions to large-scale travel patterns."

The research has been published online on the website of the Proceedings of the National Academy of Sciences.

Solar power without solar cells

Physicists in the US believe that it is possible to generate solar power without solar cells. Their "optical battery" idea, which would involve performing the energy conversion inside insulators rather than semiconductors, could make for a far cheaper alternative energy source than existing solar-cell technologies.

In conventional solar cells, electricity is generated by simple charge separation. The semiconductor absorbs a photon of sunlight, knocking a negative electron into the material's conduction energy band and leaving a positive hole in its place. With these two charges separated, a voltage is produced from which power can be drawn.

But solar power need not be generated in this way, according to Stephen Rand and William Fisher of the University of Michigan. Rand and Fisher have performed calculations to predict that voltages can be generated in insulating materials, using what they say is a previously overlooked aspect of light's magnetic field. "You could stare at the equations of motion all day and you will not see this possibility," says Rand.

Overlooked magnetic field

Light is an electromagnetic wave, which means that it has two components – an electric field and a magnetic field. In free space, the magnetic field is some eight orders of magnitude weaker than the electric field, almost so weak as to be negligible. When light enters a material, therefore, it is the electric field that accelerates the charges – electrons – along its direction. Physicists had thought that the magnetic field would affect the dynamics of the electrons only when they approach very high "relativistic" speeds, close to the speed of light.

But Rand and Fisher have calculated that when electrons are bound to their nuclei, as they are in insulators, the electric and magnetic dynamics of the electron become linked, allowing energy to pass from one to the other. The result is that when light shines on an insulator, the magnetic field alone can shift electrons in the direction of the light, creating a polarization of charge. This acts much like an optical capacitor, which can be tapped for electricity – perhaps at efficiencies of around 10%.

"This method needs only simple dielectrics such as glass rather than highly processed semiconductors found in photovoltaic cells," Fisher told physicsworld.com, adding that insulators in the shape of fibres would enhance the effect. "Glass is simpler and less expensive to manufacture; miles of glass fibre is pulled every day for fibre optics already."

Not yet practical

Currently, Rand and Fisher's work is mostly theoretical, and they think that the light would have to be focused to very high intensities of at least 10 million watts per square centimetre. However, they say that further experiments might reveal materials that work at lower intensities.

James Heyman, a semiconductor physicist at Macalester College in Minnesota, US, calls the work "interesting" but notes several potential drawbacks. "I don't see evidence yet that the phenomena they study also occur at intensities corresponding to focused sunlight," he says. And the projected efficiency already lags behind crystalline solar cells, he adds, which sell for about one dollar per watt in the US.

Still, the Michigan researchers have high hopes, even if, according to Fisher, it will be "many years" until optical batteries are on sale. "If we can develop a material that exhibits a strong effect at lower intensity and still be pulled as a fibre, the industrial capability already exists," he says.

The research is published in the Journal of Applied Physics.

Superconductivity: a far-reaching theory

Superconductivity theory has a history of stretching beyond its traditional boundaries into other areas of physics. For instance, some even believe that the whole cosmos can be conceptualized as a giant superconductor to help explain interactions involving the weak force.

In this interview with physicsworld.com, Frank Wilczek of the Massachusetts Institute of Technology discusses superconductivity and how its impact is felt across seemingly disparate areas of physics.

"It's a rich mix that the theory of superconductivity has given us," he says, referring to concepts such as pairing and symmetry breaking as applied to topology. "All those ideas really have their deep roots in work on superconductivity and they've become dominant tools for fundamental physics."

Wilczek, who shared the 2004 Nobel Prize for Physics for the discovery of asymptotic freedom in the theory of the strong interaction, also discusses more recent physics to benefit from the insights of superconductivity. He describes how the burgeoning field of topological insulators represents "a marvellous embodiment of concepts" from superconductivity and quantum field theory.

You can read more about the discoveries, theories and impacts of superconductivity in the April issue of Physics World. This special issue celebrates the centenary of superconductivity and it can be downloaded free of charge.

Sticky fingers no more

Chocolate that doesn't melt until temperatures as high as 50 °C could be a boon for chocoholics in the tropics. (Courtesy: iStockphoto.com/alle12)

By Tushna Commissariat

"Look, there's no metaphysics on Earth like chocolates" – Fernando Pessoa, Portuguese poet

It’s Easter again and shops in many countries are full of chocolate eggs and other gooey, chocolate-based treats. But why is it that certain tropical countries like Nigeria consume only small amounts of chocolate, despite producing most of the world's cocoa? Indeed, nearly 70% of cocoa is grown in West Africa and the rest in Central and South America and Asia.

One of the main reasons is that the high tropical temperatures make chocolate lose its form while being transported within these areas. The chocolate can also undergo a “bloom formation” – a mouldy-looking white coating that forms on the surface resulting from an increase in temperature, which makes storing chocolate a problem.

But, once again, physics is providing a solution. Scientists have been looking at ways to create a “thermo-resistant” chocolate that holds its form and still tastes just right. And it looks like O Ogunwolu and C O Jayeola, food scientists at the Cocoa Research Institute of Nigeria, have finally managed it – and just in time for Easter too. They found that adding varying amounts of cornstarch and gelatin raised the chocolate's melting point to about 40–50 °C, from the normal 25–33 °C. And the best bit is that, by all accounts, it still looks and tastes like normal chocolate!

The chocolate industry has been looking into ways of perfecting heat-resistant chocolate for a long time. For example, the US company Hershey’s developed a chocolate bar that was heat resistant and could be used as part of emergency rations for American troops during the Second World War. The downside was that, according to troop reports, the chocolate tasted "a little better than a boiled potato". While Hershey’s did try other recipes and even managed to make a bar that melted only at 60 °C, reactions to the taste were mixed. So it is hoped that this new recipe will make chocolate more available to everyone, the world over, as it should be.

In other chocolate-related news, take a look at the slew of videos on YouTube that show researchers at the University of Nottingham conducting “Eggsperiments” with Cadbury's Creme Eggs. My favourite one has chemists making quite a mess in their labs when they try to deconstruct the eggs.

Mapping orbits within black holes

The words "black hole" generally bring to mind destruction and an end to all ends. No-one – in fact or fiction – has considered the possibility of stable habitats existing within black holes. But that is precisely what physicist Vyacheslav I Dokuchaev of the Russian Academy of Sciences, Moscow, is suggesting in his new paper, "Is there life in black holes?". In the paper, published in the Journal of Cosmology and Astroparticle Physics, Dokuchaev suggests that certain types of black hole contain stable orbits for photons within their interior, and that these might even allow planets to survive there.

Essentially, a black hole is a place where gravitational forces are so extreme that everything – including light – is sucked in. Black holes have outer boundaries, known as event horizons, beyond which nothing can escape because matter starts moving at faster-than-light speeds. But charged, rotating black holes – known as "Kerr–Newman black holes" – exhibit an unexpected twist. They have not only an outer event horizon but also an inner horizon, called a "Cauchy" horizon. At this Cauchy horizon, because of the centrifugal forces involved, particles slow back down to the speed of light.
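
(For reference, the two horizons of a Kerr–Newman black hole sit at r = M ± sqrt(M² − a² − Q²) in geometric units, where a is the spin parameter and Q the charge – a standard textbook result, not something taken from Dokuchaev's paper. The snippet below simply evaluates it.)

import math

def kerr_newman_horizons(mass, spin, charge):
    # Outer (event) and inner (Cauchy) horizon radii in geometric units (G = c = 1)
    # for a black hole of mass `mass`, spin parameter a = J/M and charge Q.
    # Returns None if a**2 + Q**2 > M**2: no horizons, i.e. a naked singularity.
    discriminant = mass**2 - spin**2 - charge**2
    if discriminant < 0:
        return None
    root = math.sqrt(discriminant)
    return mass + root, mass - root   # (outer event horizon, inner Cauchy horizon)

# Example: a near-extremal hole, where the two horizons approach one another
print(kerr_newman_horizons(1.0, 0.9, 0.3))   # -> (~1.316, ~0.684)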

The final frontier

Since the 1960s, researchers have determined stable orbits for photons inside these charged, rotating black holes. In his new paper, Dokuchaev has looked at stable circular orbits as well as spherical, non-equatorial orbits for photons at the inner boundary. He concludes that there is no reason that larger bodies, such as planets, could not do the same. He even suggests that entire advanced civilizations could live inside this particular subset of black holes, on planets that orbit stably inside the hole – using the naked singularity as a source of energy. They would forever be shielded from the outside and not sucked into the singularity itself, he says.

In theory it should be possible to use the singularity as an energy source, explains Andrew Hamilton, an astrophysicist at the University of Colorado in the US who has also calculated the orbits at the inner horizon inside these black holes. "A rotating black hole acts like a giant flywheel. A civilization can tap the rotational energy of the black hole by playing clever games of orbital billiards, something first pointed out by Roger Penrose," he says.

However, Hamilton believes that, in reality, the situation is implausible. Inflation at the inner horizon would cause space–time to collapse, not to mention disturbances created by the high energy density at such a location, from massive amounts of matter falling into the black hole. On the whole, none of these circumstances would make for habitable conditions. Dokuchaev himself acknowledges these problems in his paper, but does not provide a solution.

Paradoxes and information losses

Even if a planet and then a civilization were to form inside these black holes, it would be almost impossible to discover them because all information is lost going into or coming out of a black hole. Although new theories state that information from the interior of black holes is encoded in the Hawking radiation emitted from them, this information could quite possibly be scrambled.

Arthur I Miller, a physicist and author of several popular-science books, believes that it is pointless to look at any possibility of life inside black holes, stable orbits notwithstanding. "It is, indeed, extreme science fiction to imagine the existence of worlds in them. Surely it would be a 'crushing experience' living inside a black hole?" he says.

So, while most scientists will agree that looking for life inside black holes is a futile venture, the sad truth is that we will never know if the real-estate market is missing out on a great new platform.

Resistance is futile

My involvement with high-temperature superconductors began in the autumn of 1986, when a student in my final-year course on condensed-matter physics at the University of Birmingham asked me what I thought about press reports concerning a new superconductor. According to the reports, two scientists working in Zurich, Switzerland – J Georg Bednorz and K Alex Müller – had discovered a material with a transition temperature, Tc, of 35 K – 50% higher than the previous highest value of 23 K, which had been achieved more than a decade earlier in Nb3Ge.

In those days, following this up required a walk to the university library to borrow a paper copy of the appropriate issue of the journal Zeitschrift für Physik B. I reported back to the students that I was not convinced by the data, since the lowest resistivity that Bednorz and Müller (referred to hereafter as "B&M") had observed might just be comparable with that of copper, rather than zero. In any case, the material only achieved zero resistivity at ~10 K, even though the drop began at the much higher temperature of 35 K (figure 1).

In addition, the authors had not, at the time they submitted the paper in April 1986, established the composition or crystal structure of the compound they believed to be superconducting. All they knew was that their sample was a mixture of different phases containing barium (Ba), lanthanum (La), copper (Cu) and oxygen (O). They also lacked the equipment to test whether the sample expelled a magnetic field, which is a more fundamental property of superconductors than zero resistance, and is termed the Meissner effect. No wonder B&M had carefully titled their paper "Possible high Tc superconductivity in the Ba–La–Cu–O system" (my italics).

My doubt, and that of many physicists, was caused by two things. One was a prediction made in 1968 by the well-respected theorist Bill McMillan, who proposed that there was a natural upper limit to the possible Tc for superconductivity – and that we were probably close to it. The other was the publication in 1969 of Superconductivity, a two-volume compendium of articles by all the leading experts in the field. As one of them remarked, this book would represent "the last nail in the coffin of superconductivity", and so it seemed: many people left the subject after that, feeling that everything important had already been done in the 58 years since its discovery.

In defying this conventional wisdom, B&M based their approach on the conviction that superconductivity in conducting oxides had been insufficiently exploited. They hypothesized that such materials might harbour a stronger electron–lattice interaction, which would raise the Tc according to the theory of superconductivity put forward by John Bardeen, Leon Cooper and Robert Schrieffer (BCS) in 1957 (see "The BCS theory of superconductivity" below). For two years B&M worked without success on oxides that contained nickel and other elements. Then they turned to oxides containing copper – cuprates – and the results were as the Zeitschrift für Physik B paper indicated: a tantalizing drop in resistivity.

What soon followed was a worldwide rush to build on B&M's discovery. As materials with still higher Tc were found, people began to feel that the sky was the limit. Physicists found a new respect for oxide chemists as every conceivable technique was used first to measure the properties of these new compounds, and then to seek applications for them. The result was a blizzard of papers. Yet even after an effort measured in many tens of thousands of working years, practical applications remain technically demanding, we still do not properly understand high-Tc materials and the mechanism of their superconductivity remains controversial.

The ball starts rolling

Although I was initially sceptical, others were more accepting of B&M's results. By late 1986 Paul Chu's group at the University of Houston, US, and Shoji Tanaka's group at Tokyo University in Japan had confirmed high-Tc superconductivity in their own Ba–La–Cu–O samples, and B&M had observed the Meissner effect. Things began to move fast: Chu found that by subjecting samples to about 10,000 atmospheres of pressure, he could boost the Tc up to ~50 K, so he also tried "chemical pressure" – replacing the La with the smaller ion yttrium (Y). In early 1987 he and his collaborators discovered superconductivity in a mixed-phase Y–Ba–Cu–O sample at an unprecedented 93 K – well above the psychological barrier of 77 K, the boiling point of liquid nitrogen. The publication of this result at the beginning of March 1987 was preceded by press announcements, and suddenly a bandwagon was rolling: no longer did superconductivity need liquid helium at 4.2 K or liquid hydrogen at 20 K, but instead could be achieved with a coolant that costs less than half the price of milk.

Chu's new superconducting compound had a rather different structure and composition than the one that B&M had discovered, and the race was on to understand it. Several laboratories in the US, the Netherlands, China and Japan established almost simultaneously that it had the chemical formula YBa2Cu3O7–d, where the subscript 7–d indicates a varying content of oxygen. Very soon afterwards, its exact crystal structure was determined, and physicists rapidly learned the word "perovskite" to describe it (see "The amazing perovskite family" below). They also adopted two widely used abbreviations, YBCO and 123 (a reference to the ratios of Y, Ba and Cu atoms) for its unwieldy chemical formula.

The competition was intense. When the Dutch researchers learned from a press announcement that Chu's new material was green, they deduced that the new element he had introduced was yttrium, which can give rise to an insulating green impurity with the chemical formula Y2BaCuO5. They managed to isolate the pure 123 material, which is black in colour, and the European journal Physica got their results into print first. However, a group from Bell Labs was the first to submit a paper, which was published soon afterwards in the US journal Physical Review Letters. This race illustrates an important point: although scientists may high-mindedly and correctly state that their aim and delight is to discover the workings of nature, the desire to be first is often a very strong additional motivation. This is not necessarily for self-advancement, but for the buzz of feeling (perhaps incorrectly in this case) "I'm the only person in the world who knows this!".

"The Woodstock of physics"

For high-Tc superconductivity, the buzz reached fever pitch at the American Physical Society's annual "March Meeting", which in 1987 was held in New York. The week of the March Meeting features about 30 gruelling parallel sessions from dawn till after dusk, where a great many condensed-matter physicists present their latest results, fill postdoc positions, gossip and network. The programme is normally fixed months in advance, but an exception had to be made that year and a "post-deadline" session was rapidly organized for the Wednesday evening in the ballroom of the Hilton Hotel. This space was designed to hold 1100 people, but in the event it was packed with nearly twice that number, and many others observed the proceedings on video monitors outside.

Müller and four other leading researchers gave talks greeted with huge enthusiasm, followed by more than 50 five-minute contributions, going on into the small hours. This meeting gained the full attention of the press and was dubbed "the Woodstock of physics" in recognition of the euphoria it generated – an echo of the famous rock concert held in upstate New York in 1969. The fact that so many research groups were able to produce results in such a short time indicated that the B&M and Chu discoveries were "democratic", meaning that anyone with access to a small furnace (or even a pottery kiln) and a reasonable understanding of solid-state chemistry could confirm them.

With so many people contributing, the number of papers on superconductivity shot up to nearly 10,000 in 1987 alone. Much information was transmitted informally: it was not unusual to see a scientific paper with "New York Times, 16 February 1987" among the references cited. The B&M paper that began it all has been cited more than 8000 times and is among the top 10 most cited papers of the last 30 years. It is noteworthy that nearly 10% of these citations include misprints, which may be because of the widespread circulation of faxed photocopies of faxes. One particular misprint, an incorrect page number, occurs more than 250 times, continuing to the present century. We can trace this particular "mutant" back to its source: a very early and much-cited paper by a prominent high-Tc theorist. Many authors have clearly copied some of their citations from the list at the end of this paper, rather than going back to the originals. There have also been numerous sightings of "unidentified superconducting objects" (USOs), or claims of extremely high transition temperatures that could not be reproduced. One suspects that some of these may have arisen when a voltage lead became badly connected as a sample was cooled; of course, this would cause the voltage measured across a current-carrying sample to drop to zero.

Meanwhile, back in Birmingham, Chu's paper was enough to persuade us that high-Tc superconductivity was real. Within the next few weeks, we made our own superconducting sample at the second attempt, and then hurried to measure the flux quantum – the basic unit of magnetic flux that can thread a superconducting ring. According to the BCS theory of superconductivity, this flux quantum should have the value h/2e, with the factor 2 representing the pairing of conduction electrons in the superconductor. This was indeed the value we found (figure 2). We were amused that the accompanying picture of our apparatus on the front cover of Nature included the piece of Blu-Tack we used to hold parts of it together – and pleased that when B&M were awarded the 1987 Nobel Prize for Physics (the shortest gap ever between discovery and award), our results were reproduced in Müller's Nobel lecture.
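
(For the record, the flux quantum h/2e is a standard constant, easily evaluated – the short calculation below is included purely for illustration.)

# Magnetic flux quantum: Phi_0 = h / (2e), the value measured in the ring experiment
h = 6.62607015e-34      # Planck constant (J s)
e = 1.602176634e-19     # elementary charge (C)

phi_0 = h / (2 * e)
print(f"Flux quantum h/2e = {phi_0:.3e} Wb")   # ~2.068e-15 Wb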

Unfinished business

In retrospect, however, our h/2e measurement may have made a negative contribution to the subject, since it could be taken to imply that high-Tc superconductivity is "conventional" (i.e. explained by standard BCS theory), which it certainly is not. Although B&M's choice of compounds was influenced by BCS theory, most (but not all) theorists today would say that the interaction that led them to pick La–Ba–Cu–O is not the dominant mechanism in high-Tc superconductivity. Some of the evidence supporting this conclusion came from several important experiments performed in around 1993, which together showed that the paired superconducting electrons have l = 2 units of relative angular momentum. The resulting wavefunction has a four-leaf-clover shape, like one of the d-electron states in an atom, so the pairing is said to be "d-wave". In such l = 2 pairs, "centrifugal force" tends to keep the constituent electrons apart, so this state is favoured if there is a short-distance repulsion between them (which is certainly the case in cuprates). This kind of pairing is also favoured by an anisotropic interaction expected at larger distances, which can take advantage of the clover-leaf wavefunction. In contrast, the original "s-wave" or l = 0 pairing described in BCS theory would be expected if there is a short-range isotropic attraction arising from the electron–lattice interaction.

These considerations strongly indicate that the electron–lattice interaction (which in any case appears to be too weak) is not the cause of the high Tc. As for the actual cause, opinion tends towards some form of magnetic attraction playing a role, but agreement on the precise mechanism has proved elusive. This is mainly because the drop in electron energy on entering the superconducting state is less than 0.1% of the total energy (which is about 1 eV), making it extremely difficult to isolate this change.

On the experimental side, the maximum Tc has been obstinately stuck at about halfway to room temperature since the early 1990s. There have, however, been a number of interesting technical developments. One is the discovery of superconductivity at 39 K in magnesium diboride (MgB2), which was made by Jun Akimitsu in 2001. This compound had been available from chemical suppliers for many years, and it is interesting to speculate how history would have been different if its superconductivity had been discovered earlier. It is now thought that MgB2 is the last of the BCS superconductors, and no attempts to modify it to increase the Tc further have been successful. Despite possible applications of this material, it seems to represent a dead end.

In the same period, other interesting families of superconductors have also been discovered, including the organics and the alkali-metal-doped buckyball series. None, however, have raised as much excitement as the development in 2008 (by Hideo Hosono's group at the Tokyo Institute of Technology) of an iron-based superconductor with Tc above 40 K. Like the cuprate superconductors before them, these materials also have layered structures, typically with iron atoms sandwiched between arsenic layers, and have to be doped to remove antiferromagnetism. However, the electrons in these materials are less strongly interacting than they are in the cuprates, and because of this, theorists believe that they will be an easier nut to crack. A widely accepted model posits that the electron pairing mainly results from a repulsive interaction between two different groups of carriers, rather than attraction between carriers within a group. Even though the Tc in these "iron pnictide" superconductors has so far only reached about 55 K, the discovery of these materials is a most interesting development because it indicates that we have not yet scraped the bottom of the barrel for new mechanisms and materials for superconductivity, and that research on high-Tc superconductors is still a developing field.

A frictionless future?

So what are the prospects for room-temperature superconductivity? One important thing to remember is that even supposing we discover a material with Tc ~ 300 K, it would still not be possible to make snooker tables with levitating frictionless balls, never mind the levitating boulders in the film Avatar. Probably 500 K would be needed, because we observe and expect that as Tc gets higher, the electron pairs become smaller. This means that thermal fluctuations become more important, because they occur in a smaller volume and can more easily lead to a loss of the phase coherence essential to superconductivity. This effect, particularly in high magnetic fields, is already important in current high-Tc materials and has led to a huge improvement in our understanding of how lines of magnetic flux "freeze" in position or "melt" and move, which they usually do near Tc, giving rise to resistive dissipation.

Another limitation, at least for the cuprates, is the difficulty of passing large supercurrents from one crystal to the next in a polycrystalline material. This partly arises from the fact that in such materials, the supercurrents only flow well in the copper-oxide planes. In addition, the coupling between the d-wave pairs in two adjacent crystals is very weak unless the crystals are closely aligned so that the lobes of their wavefunctions overlap. Furthermore, the pairs are small, so that even the narrow boundaries between crystal grains present a barrier to their progress. None of these problems arise in low-Tc materials, which have relatively large isotropic pairs.

For high-Tc materials, the solution, developed in recent years, is to form a multilayered flexible tape in which one layer is an essentially continuous single crystal of 123 (figure 3). Such tapes are, however, expensive because of the multiple hi-tech processes involved and because, unsurprisingly, ceramic oxides cannot be wound around sharp corners. It seems that even in existing high-Tc materials, nature gave with one hand, but took away with the other, by making the materials extremely difficult to use in practical applications.

Nevertheless, some high-Tc applications do exist or are close to market. Superconducting power line "demonstrators" are undergoing tests in the US and Russia, and new cables have also been developed that can carry lossless AC currents of 2000 A at 77 K. Such cables also have much higher current densities than conventional materials when they are used at 4.2 K in high-field magnets. Superconducting pick-up coils already improve the performance of MRI scanners, and superconducting filters are finding applications in mobile-phone base stations and radio astronomy.

In addition to the applications, there are several other positive things that have arisen from the discovery of high-Tc superconductivity, including huge developments in techniques for the microscopic investigation of materials. For example, angle-resolved photo-electron spectroscopy (ARPES) has allowed us to "see" the energies of occupied electron states in ever-finer detail, while neutron scattering is the ideal tool with which to reveal the magnetic properties of copper ions. The advent of high-Tc superconductors has also revealed that the theoretical model of weakly interacting electrons, which works so well in simple metals, needs to be extended. In cuprates and many other materials investigated in the last quarter of a century, we have found that the electrons cannot be treated as a gas of almost independent particles.

The result has been new theoretical approaches and also new "emergent" phenomena that cannot be predicted from first principles, with unconventional superconductivity being just one example. Other products of this research programme include the fractional quantum Hall effect, in which entities made of electrons have a fractional charge; "heavy fermion" metals, where the electrons are effectively 100 times heavier than normal; and "non-Fermi" liquids in which electrons do not behave like independent particles. So is superconductivity growing old after 100 years? In a numerical sense, perhaps – but quantum mechanics is even older if we measure from Planck's first introduction of his famous constant, yet both are continuing to spring new surprises (and are strongly linked together). Long may this continue!

The BCS theory of superconductivity

Although superconductivity was observed for the first time in 1911, there was no microscopic theory of the phenomenon until 1957, when John Bardeen, Leon Cooper and Robert Schrieffer made a breakthrough. Their "BCS" theory – which describes low-temperature superconductivity, though it requires modification to describe high-Tc – has several components. One is the idea that electrons can be paired up by a weak interaction, a phenomenon now known as Cooper pairing. Another is that the "glue" that holds electron pairs together, despite their Coulomb repulsion, stems from the interaction of electrons with the crystal lattice – as described by Bardeen and another physicist, David Pines, in 1955. A simple way to think of this interaction is that an electron attracts the positively charged lattice and slightly deforms it, thus making a potential well for another electron. This is rather like two sleepers on a soft mattress, who each roll into the depression created by the other. It is this deforming response that caused Bill McMillan to propose in 1968 that there should be a maximum possible Tc: if the electron–lattice interaction is too strong, the crystal may deform to a new structure instead of becoming superconducting.

The third component of BCS theory is the idea that all the pairs of electrons are condensed into the same quantum state as each other – like the photons in a coherent laser beam, or the atoms in a Bose–Einstein condensate. This is possible even though individual electrons are fermions and cannot exist in the same state as each other, as described by the Pauli exclusion principle. This is because pairs of electrons behave somewhat like bosons, to which the exclusion principle does not apply. The wavefunction incorporating this idea was worked out by Schrieffer (then a graduate student) while he was sitting in a New York subway car.

Breaking up one of these electron pairs requires a minimum amount of energy, Δ, per electron. At non-zero temperatures, pairs are constantly being broken up by thermal excitations. The pairs then re-form, but when they do so they can only rejoin the state occupied by the unbroken pairs. Unless the temperature is very close to Tc (or, of course, above it) there is always a macroscopic number of unbroken pairs, and so thermal excitations do not change the quantum state of the condensate. It is this stability that leads to non-decaying supercurrents and to superconductivity. Below Tc, the chances of all pairs getting broken at the same time are about as low as the chances that a lump of solid will jump in the air because all the atoms inside it are, coincidentally, vibrating in the same direction. In this way, the BCS theory successfully accounted for the behaviour of "conventional" low-temperature superconductors such as mercury and tin.
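
(For conventional weak-coupling BCS superconductors, the zero-temperature gap is tied to Tc by the textbook estimate 2Δ(0) ≈ 3.5 kBTc – included here only as an illustration of the size of Δ, not as a claim about any particular material discussed in this article.)

# BCS weak-coupling estimate of the zero-temperature gap: 2*Delta(0) ~ 3.5 k_B Tc
K_B = 1.380649e-23      # Boltzmann constant (J/K)
EV = 1.602176634e-19    # joules per electron-volt

def bcs_gap_mev(tc_kelvin):
    # Approximate single-electron gap Delta(0) in meV for a BCS superconductor
    return 0.5 * 3.5 * K_B * tc_kelvin / EV * 1000

for tc in (4.2, 23, 39):     # illustrative transition temperatures (roughly Hg, Nb3Ge, MgB2)
    print(f"Tc = {tc} K  ->  Delta ~ {bcs_gap_mev(tc):.2f} meV")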

It was soon realized that BCS theory can be generalized. For instance, the pairs may be held together by a different interaction than that between electrons and a lattice, and two fermions in a pair may have a mutual angular momentum, so that their wavefunction varies with direction – unlike the spherically symmetric, zero-angular-momentum pairs considered by BCS. Materials with such pairings would be described as "unconventional superconductors". However, there is one aspect of superconductivity theory that has remained unchanged since BCS: we do not know of any fermion superconductor without pairs of some kind.

The amazing perovskite family

Perovskites are crystals that have long been familiar to inorganic chemists and mineralogists in contexts other than superconductivity. Perovskite materials containing titanium and zirconium, for example, are used as ultrasonic transducers, while others containing manganese exhibit very strong magnetic-field effects on their electrical resistance ("colossal magnetoresistance"). One of the simplest perovskites, strontium titanate (SrTiO3), is shown in the top image (right). In this material, Ti4+ ions (blue) are separated by O2– ions (red) at the corners of an octahedron, with Sr2+ ions (green) filling the gaps and balancing the charge.

Bednorz and Müller (B&M) chose to investigate perovskite-type oxides (a few of which are conducting) because of a phenomenon called the Jahn–Teller effect, which they believed might provide an increased interaction between the electrons and the crystal lattice. In 1937 Hermann Arthur Jahn and Edward Teller predicted that if there is a degenerate partially occupied electron state in a symmetrical environment, then the surroundings (in this case the octahedron of oxygen ions around copper) would spontaneously distort to remove the degeneracy and lower the energy. However, most recent work indicates that the electron–lattice interaction is not the main driver of superconductivity in cuprates – in which case the Jahn–Teller theory was only useful because it led B&M towards these materials!

The most important structural feature of the cuprate perovskites, as far as superconductivity is concerned, is the existence of copper-oxide layers, where copper ions in a square array are separated by oxygen ions. These layers are the location of the superconducting carriers, and they must be created by varying the content of oxygen or one of the other constituents – "doping" the material. We can see how this works most simply in B&M's original compound, which was La2CuO4 doped with Ba to give La2–xBaxCuO4 (x ~ 0.15 gives the highest Tc). In ionic compounds, lanthanum forms La3+ ions, so in La2CuO4 the ionic charges all balance if the copper and oxygen ions are in their usual Cu2+ (as in the familiar copper sulphate, CuSO4) and O2– states. La2CuO4 is insulating even though each Cu2+ ion has an unpaired electron, as these electrons do not contribute to electrical conductivity because of their strong mutual repulsion. Instead, they are localized, one to each copper site, and their spins line up antiparallel in an antiferromagnetic state. If barium is incorporated, it forms Ba2+ ions, so that the copper and oxygen ions can no longer have their usual charges, thus the material becomes "hole-doped", the antiferromagnetic ordering is destroyed and the material becomes both a conductor and a superconductor. YBa2Cu3O7–d or "YBCO" (bottom right) behaves similarly, except that there are two types of copper ions, inside and outside the CuO2 planes, and the doping is carried out by varying the oxygen content. This material contains Y3+ (yellow) and Ba2+ (purple) ions, copper (blue) and oxygen (red) ions. When d ~0.03, the hole-doping gives a maximum Tc; when d is increased above ~0.7, YBCO becomes insulating and antiferromagnetic.
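
(The hole-doping argument can be checked with some simple charge bookkeeping – an illustrative calculation, not taken from the original papers – by adding up the nominal ionic charges in La2–xBaxCuO4.)

def holes_per_formula_unit(x):
    # Charge bookkeeping for La(2-x)Ba(x)CuO4, assuming the usual ionic charges
    # La3+, Ba2+, Cu2+ and O2-. Any shortfall in positive charge must be made up
    # by holes in the copper-oxide planes.
    cation_charge = 3 * (2 - x) + 2 * x + 2     # La + Ba + Cu
    anion_charge = 2 * 4                        # four O2- ions
    return anion_charge - cation_charge         # holes needed for neutrality

print(holes_per_formula_unit(0.0))    # 0.0  -> undoped, insulating La2CuO4
print(holes_per_formula_unit(0.15))   # 0.15 -> roughly optimal doping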

Taking the multiverse on faith

The Grand Design begins with a series of questions: "How can we understand the world in which we find ourselves?", "How does the universe behave?", "What is the nature of reality?", "Where did all this come from?" and "Did the universe need a creator?". As the book's authors, Stephen Hawking and Leonard Mlodinow, point out, "almost all of us worry about [these questions] some of the time", and over the millennia, philosophers have worried about them a great deal. Yet after opening their book with an entertaining history of philosophers' takes on these fundamental questions, Hawking and Mlodinow go on to state provocatively that philosophy is dead: since philosophers have not kept up with the advances of modern science, it is now scientists who must address these large questions.

Much of the rest of the book is therefore devoted to a description of the authors' own philosophy, an interpretation of the world that they call "model-dependent realism". They argue that different models of the universe can be constructed using mathematics and tested experimentally, but that no one model can be claimed as a true description of reality. This idea is not new; indeed, the Irish philosopher and bishop George Berkeley hinted at it in the 18th century. However, Hawking and Mlodinow take Berkeley's idea to extremes by claiming that since many models of nature can exist that describe the experimental data equally well, such models are therefore equally valid.

It is important to the argument of the book – which leads eventually to more exotic models such as M-theory and the multiverse – that readers accept the premise of model-dependent realism. However, the history of science shows that the premise of one model being as good and useful as another is not always correct. Paradigms shift because a new model not only fits the current observational data as well as (or better than) an older model, but also makes predictions that fit new data that cannot be explained by the older model. Hawking and Mlodinow's assertion that "there is no picture- or theory-independent concept of reality" thus flies in the face of one of the basic tenets of the scientific method.

Consider the Ptolemaic model of the solar system, in which the planets move in circular orbits around the Earth, and the heliocentric model put forward by Copernicus. The authors suggest that the two models can be made to fit the astronomical data equally well, but that the heliocentric model is a simpler and more convenient one to use. Yet this does not make them equivalent. New data differentiated them: Galileo's observation of the phases of Venus, through his telescope, cannot easily be explained in Ptolemy's Earth-centred system. Similarly, Einstein's theory of gravity superseded Newton's laws of gravitation when its equations correctly described Mercury's anomalous orbit. One theory, one perception of reality, is not just as good as another, and this can be shown empirically: Einstein's gravity is even used to make corrections to Newton's in the Global Positioning System.

It is true, however, that the situation in quantum mechanics has not yet been resolved. Several different models, such as the "many worlds" interpretation of Hugh Everett III, the Copenhagen interpretation and certain Bohmian hidden-variable models, all agree with quantum-mechanical experiments, and as yet none of the interpretations has produced a prediction that would experimentally differentiate them. Based on the history of science, however, we have no reason to assume that in the future there will not be a decisive experiment that will support one model over the others.

A second premise that the reader is expected to accept as The Grand Design moves along is that we can, and should, apply quantum physics to the macroscopic world. To support this premise, Hawking and Mlodinow cite Feynman's probabilistic interpretation of quantum mechanics, which is based on his "sum over histories" of particles. Basic to this interpretation is the idea that a particle can take every possible path connecting two points. Extrapolating hugely, the authors then apply Feynman's formulation of quantum mechanics to the whole universe: they announce that the universe does not have a single history, but every possible history, each one with its own probability.

This statement effectively wipes out the widely accepted classical model of the large-scale structure of the universe, beginning with the Big Bang. It also leads to the idea that there are many possible, causally disconnected universes, each with its own different physical laws, and we occupy a special one that is compatible with our existence and our ability to observe it. Thus, in one fell swoop the authors embrace both the "multiverse" and the "anthropic principle" – two controversial notions that are more philosophic than scientific, and likely can never be verified or falsified.

Another key component of The Grand Design is the quest for the so-called theory of everything. When Hawking became Lucasian Professor of Mathematics at Cambridge University – the chair held by, among others, Newton and Paul Dirac – he gave an inaugural speech claiming that we were close to "the end of physics". Within 20 years, he said, physicists would succeed in unifying the forces of nature, and unifying general relativity with quantum mechanics. He proposed that this would be achieved through supergravity and its relation, string theory. Only technical problems, he stated, meant that we were not yet able to prove that supergravity solved the problem of how to make quantum-gravity calculations finite.

But that was in 1979, and Hawking's vision of that theory of everything is still in limbo. Underlying his favoured "supergravity" model is the postulate that, in addition to the known observable elementary particles in particle physics, there exist superpartners, which differ from the known particles by a one-half unit of quantum spin. None of these particles has been detected to date in high-energy accelerator experiments, including those recently carried out at the Large Hadron Collider at CERN. Yet despite this, Hawking has not given up on a theory of everything – or has he?

After an entertaining description of the Standard Model of particle physics and various attempts at unification, Hawking and his co-author conclude that there is indeed a true theory of everything, and its name is "M-theory". Of course, no-one knows what the "M" in M-theory stands for, although "master", "miracle" and "mystery" have been suggested. Nor can anyone convincingly describe M-theory, except that it supposedly exists in 11 dimensions and contains string theory in 10 dimensions. A problem from the outset with this incomplete theory is that one must hide, or compactify, the extra seven dimensions in order to yield the three spatial dimensions and one time dimension that we inhabit. There is a possibly infinite number of ways to perform this technical feat. As a result of this, there is a "landscape" of possible solutions to M-theory, 10^500 by one count, which for all practical purposes also approaches infinity.

That near-infinity of solutions might be seen by some as a flaw in M-theory, but Hawking and Mlodinow seize upon this controversial aspect of it to claim that "the physicist's traditional expectation of a single theory of nature is untenable, and there exists no single formulation". Even more dramatically, they state that "the original hope of physicists to produce a single theory explaining the apparent laws of our universe as the unique possible consequence of a few simple assumptions has to be abandoned". Still, the old dream persists, albeit in a modified form. The difference, as Hawking and Mlodinow assert pointedly, is that M-theory is not one theory, but a network of many theories.

Apparently unconcerned that theorists have not yet succeeded in explaining M-theory, and that it has not been possible to test it, the authors conclude by declaring that they have formulated a cosmology based on it and on Hawking's idea that the early universe is a 4D sphere without a beginning or an end (the "no-boundary theory"). This cosmology is the "grand design" of the title, and one of its predictions is that gravity causes the universe to create itself spontaneously from nothing. This somehow explains why we exist. At this point, Hawking and Mlodinow venture into religious controversy, proclaiming that "it is not necessary to invoke God to light the blue touch paper and set the universe going".

Near the end of the book, the authors claim that for a theory of quantum gravity to predict finite quantities, it must possess supersymmetry between the forces and matter. They go on to say that since M-theory is the most general supersymmetric theory of gravity, it is the only candidate for a complete theory of the universe. Since there is no other consistent model, then we must be part of the universe described by M-theory. Early in the book, the authors state that an acceptable model of nature must agree with experimental data and make predictions that can be tested. However, none of the claims about their "grand design" – or M-theory or the multiverse – fulfils these demands. This makes the final claim of the book – "If the theory is confirmed by observation, it will be the successful conclusion of a search going back 3000 years" – mere hyperbole. With The Grand Design, Hawking has again, as in his inaugural Lucasian Professor speech, made excessive claims for the future of physics, which as before remain to be substantiated.
