Changes spotted in fundamental constant

Billions of years ago the strength of the electromagnetic interaction was different at opposite ends of the universe. That’s the surprising conclusion of a group of physicists in Australia, who have studied light from ancient quasars. The researchers found that the fine-structure constant, known as α, has changed in both space and time since the Big Bang.

The discovery – described by one physicist not involved in the work as the “physics news of the year” – is further evidence that α may not be constant after all. If correct, the conclusion would violate a fundamental tenet of Einstein’s general theory of relativity. The nature of the asymmetry in α – dubbed the “Australian dipole” – could also point scientists towards a single unified theory of physics and shed further light on the nature of the universe.

A constant that varies?

The fine-structure constant, about 1/137, is a measure of the strength of the electromagnetic interaction and quantifies how electrons bind within atoms and molecules. It is a dimensionless number, which makes it even more fundamental than other constants such as the strength of gravity, the speed of light or the charge on the electron.

Despite its name, there are good theoretical reasons why α might vary in space or time. A changing α could, for example, help solve the biggest mystery of physics – how to formulate a single unified theory that describes the four fundamental forces: gravity, electromagnetism, and the strong and weak nuclear forces. The leading contender for a unified theory requires extra spatial dimensions beyond our familiar three – and the existence of those extra dimensions could be inferred from changes in α.

In 1998 John Webb, Victor Flambaum and colleagues at the University of New South Wales began looking for evidence of variations in α by studying light coming from distant quasars. Radiation from these extremely bright objects has travelled for billions of years before reaching Earth and will have passed through ancient clouds of gas along the way. Some of the light is absorbed at specific wavelengths that reveal the chemical composition of the cloud. Within the absorption spectrum is the eponymous “fine structure” from which the value of α can be extracted.
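
As a rough illustration of how a value of α can be teased out of such spectra, here is a toy version of the classic alkali-doublet method, in which the fractional splitting of a fine-structure doublet scales approximately as α². Webb’s team actually uses the more sensitive “many-multiplet” technique, and every wavelength below is invented:

```python
# Toy illustration of the alkali-doublet method: the fractional
# splitting of a fine-structure doublet scales roughly as alpha^2, so
# comparing the splitting in a redshifted absorber with its laboratory
# value estimates any change in alpha. (The team's real analysis uses
# the more sensitive many-multiplet method; wavelengths are invented.)
lab_pair = (154.8204, 155.0781)   # hypothetical rest wavelengths, nm
obs_pair = (464.4663, 465.2385)   # the same doublet seen at redshift z ~ 2

def rel_splitting(w1, w2):
    return (w2 - w1) / ((w1 + w2) / 2)

r_lab = rel_splitting(*lab_pair)
r_obs = rel_splitting(*obs_pair)

# for small changes, d(alpha)/alpha ~ 0.5 * (r_obs/r_lab - 1)
print(f"d(alpha)/alpha = {0.5 * (r_obs / r_lab - 1):.1e}")
```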

The team has so far studied hundreds of quasars in the northern sky and concluded that billions of years ago α was about one part in 100,000 smaller than it is today. This, however, remains a controversial result that is not accepted by all physicists.

Surprise in the southern sky

Now, Webb and colleagues have analysed 153 additional quasars in the southern sky using the Very Large Telescope (VLT) in Chile and have made an even more startling discovery. They found that in the southern sky, α was about one part in 100,000 larger 10 billion years ago than it is today. The value in the northern sky was still smaller, as found before.

This asymmetry in the two hemispheres – dubbed the “Australian dipole” by the researchers – has a statistical significance of about four sigma. This means that there is only a one in 15,000 chance that it is a random event.
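
For readers who want to check the arithmetic, the conversion from a sigma level to odds follows from the tails of the normal distribution; a minimal sketch (the exact odds depend on whether a one- or two-tailed definition is used):

```python
import math

# Two-tailed probability that a Gaussian variable fluctuates by at
# least 4 sigma: p = erfc(4/sqrt(2)). Its reciprocal gives the odds.
sigma = 4.0
p = math.erfc(sigma / math.sqrt(2))
print(f"p = {p:.1e}, i.e. about 1 in {1 / p:,.0f}")  # ~1 in 16,000
```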

This spatial variation in α is further evidence that the electromagnetic interaction violates Einstein’s equivalence principle – one of the cornerstones of relativity that says that α must be the same wherever and whenever it is measured. Such a violation is good news for those seeking unification because many leading theories also go against the equivalence principle.

Big breakthrough?

Wim Ubachs, a spectroscopist at the Free University of Amsterdam in the Netherlands, described the finding as “the news of the year in physics”, adding that the result both backs up previous findings and gives “a new twist to the problem”.

The fine-structure constant and other fundamental constants determine the masses and binding energies of elementary particles – including dark matter. If these constants vary, the relative abundances of normal matter, dark matter and dark energy could be different in different parts of the universe. This could be seen as an additional anisotropy in the cosmic microwave background or as an asymmetry in the rate of expansion of the universe.

Perhaps the most intriguing aspect of the finding concerns the anthropic principle, which points out that we owe our very existence to the fact that the fundamental constants have values that allow matter and energy to form stars, planets and ultimately our own bodies. If α varies throughout space and time, it is possible that we owe our existence to a special place and time in the universe.

A paper describing the results has been submitted to Physical Review Letters.

In a separate preprint, Flambaum and UNSW colleague Julian Berengut argue that the Australian dipole is consistent with other measurements of the variation of α. In 2008, for example, studies with an atomic clock at the National Institute of Standards and Technology in the US suggested that α is constant to within about one part in 10^17 in the course of a year. During that time, Earth moved a certain distance along the dipole, and Flambaum and Berengut calculate that this should have changed α by about one part in 10^18 – well within the NIST limit.
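
The arithmetic behind that comparison is simple enough to sketch, using round numbers rather than the paper’s fitted dipole:

```python
# Back-of-envelope consistency check (round numbers, not the paper's
# fitted dipole): if alpha varies by ~1 part in 1e5 across ~1e10
# light-years, Earth's motion of ~370 km/s (about 1.2e-3 light-years
# per year) along the dipole should change alpha by only ~1e-18 a year.
gradient = 1e-5 / 1e10    # fractional change in alpha per light-year
ly_per_year = 1.2e-3      # distance travelled along the dipole, ly/yr

print(f"expected annual drift: ~{gradient * ly_per_year:.0e}")
```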

The Sun’s magnetic field warps its environment

The Sun’s extended magnetic field provides a vital shield for astronauts; without it they would be left exposed to potentially deadly cosmic rays entering from outside the solar system. Now, a group of researchers in the UK and the US offers an explanation of how this protective field is generated and sustained by violent processes at the surface of the Sun. The findings provide another insight into the solar magnetic field – an incredibly complicated physical system.

As in the Earth, turbulent motion in the Sun’s interior generates a large-scale magnetic field whose main component is a dipole. But whereas the Earth’s dipole field reverses its polarity roughly once every million years, the Sun’s field is far more dynamic, with its north and south poles flipping roughly every 11 years.

The presence of the Sun’s magnetic field also creates the heliosphere, an immense bubble-like structure surrounding the Sun. The heliosphere is controlled and maintained by the solar wind, which emerges as a constant stream of charged particles from the Sun’s upper atmosphere. Magnetic flux is also dragged into the heliosphere with the solar wind, creating what astrophysicists refer to as the Sun’s “open” magnetic field.

Pattern-searching

For more than 50 years spacecraft have been able to directly observe the open magnetic field, enabling solar physicists to search for patterns in its variation. Researchers have been looking, in particular, for a link between the changing magnetic flux and the 11-year solar cycle. Over the course of the solar cycle, the amount of radiation emitted by the Sun varies from a quiet period to a spell of increased activity, at the height of which the Sun’s magnetic field is observed to reverse its polarity.

Now, a team led by Mathew Owens at the University of Reading in the UK has taken a step towards this goal by establishing a link between the emerging magnetic flux and the prevailing conditions at the surface of the Sun. They approached the problem by combining a model of the corona with ground- and space-based observations of the heliosphere collected over the past solar cycle by missions such as the Solar and Heliospheric Observatory (SOHO).

Owens’ team discovered that the rate at which flux is lost from the corona seems to be regulated by how “clean” the magnetic divide is between the north and south sides of the heliosphere. Where the divide becomes warped, more flux is dragged out into the heliosphere. “Most novel in this paper is that they are taking into consideration how the three-dimensional global morphology of the solar wind structure affects variation of solar wind magnetic field strength,” says Sarah Gibson, a researcher at the National Center for Atmospheric Research (NCAR) in Colorado.

No clear link with sunspots

The research does not, however, link conditions in the heliosphere with sunspots, which are regions on the Sun’s surface where magnetic field has emerged in large bundles. Sunspots are most common at the peak of the Sun’s active period, and they often lead to solar flares that can pose a hazard to communications on Earth. There is still much debate in the scientific community about why the recent quiet spell in solar activity, which ended in the past year or so, was roughly two years longer than usual.

Owens believes that we are headed for a generally quieter Sun over the next solar cycle with fewer magnetic storms, reducing the hazard to communications. On the downside, however, there will be less magnetic flux available to replenish the heliosphere, giving astronauts and space-based equipment a reduced shield from galactic cosmic rays. “So while there will probably be fewer large solar-driven events, there will likely be a higher constant ‘dose’ of radiation from outside,” Owens tells physicsworld.com.

In the short term, Owens’ group will look at more examples of previous solar cycles, which will require some reconstruction of historic datasets. “Ultimately, the goal is to figure out how the internal plasma circulations, the photospheric features and the upper solar atmosphere observations all fit together over the huge range of spatial and temporal time scales involved,” says Owens. “That should keep us busy for some time.”

This research has been submitted to the Journal of Geophysical Research.

God and the god particle

By Hamish Johnston

“Can we see the reflection of God in the laws of physics?”

That was one of the questions put to three physicists and a comedian by Ernie Rea in his radio programme Beyond Belief, which aired earlier this week on BBC Radio 4.

Rea gathered Middlesex University physicist and imam Usama Hasan; Durham University theologian, Methodist minister and former astrophysicist David Wilkinson; and University of Manchester Higgs hunter Jeff Forshaw.

Representing atheists was the comedian Robin Ince, who has presented several programmes about science.

Rea himself is a Presbyterian minister from Belfast – but definitely not of the fire-and-brimstone variety. Indeed, his soothing brogue and gentle interviewing style are perfect for getting to the bottom of the subtle religious topics he covers every week.

“Does the Big Bang origin of the universe leave room for a religious view of creation?” asks Rea, who also wonders whether physics has replaced God in some people’s lives.

Rea’s final question is “What is the one discovery that [the Large Hadron Collider] might make that would alter your perception of the universe?”.

You can listen to the programme here – the editing isn’t the greatest, so you have to wait a minute or so for the previous show to end.

In other religious news, Stephen Hawking has declared in his new book: “It is not necessary to invoke God to light the blue touch paper and set the universe going.”

The book is called The Grand Design and is co-written by Caltech physicist Leonard Mlodinow.

The book will be published next week and you can read an excerpt in The Times – but you will have to pay.

Would Einstein be ruined by Twitter?

By James Dacey

I must admit that after long days spent in front of the computer screen researching stories, jumping from website to website, checking e-mails, etc, I do sometimes find it hard to settle down in the evening and become fully absorbed in a good book. That’s a real shame, because this has always been one of my favourite pastimes and a great way to relax.

This was part of my motivation for going along to a talk last night about how the internet may be changing the way we read and think. The speaker was US writer Nicholas Carr, a long-time critic of technological utopianism who caused a stir in 2008 with his article in The Atlantic, “Is Google Making Us Stupid?” Carr has since developed the arguments into his new book “The Shallows: How the Internet is changing the way we think, read and remember”, which he was describing last night at the Festival of Ideas in Bristol.

Nicholas Carr: web sceptic

Carr’s main argument is that with the ever-increasing presence of the Internet in our daily lives we are losing the ability to think deeply and creatively, and to store things in our long-term memory. He believes that the control imposed by search engines and the constant availability of hyperlinks to whisk us away to other websites mean that the internet is starting to rewire our brains. “We have become obsessed with the medium, and the net is remaking people in its own image,” he argued.

It was certainly a fascinating talk and Carr argued his case well, particularly concerning the “plastic not elastic” nature of the brain and how the Internet sits within a rich history of “tools of the mind” that includes maps, the clock and the printing press. But too often he simply glossed over the positive aspects of the Web, such as the opportunities for online collaboration and the liberation of media now that anyone can comment or blog about their opinions. Then, of course, there is the obvious irony in the fact that many of us found out about his gig through the Festival of Ideas website.

Carr seemed to be harking back to a golden age when all of the great thinkers worked in isolation to formulate their brilliant ideas, aided only by their prolific reading of books. And it got me thinking about whether this was really true for the great scientific thinkers of the last century. Naturally my mind went to the greatest of them all, Albert Einstein, leading me to the question of whether Einstein would have let himself become distracted by the fruits of Web 2.0 such as Facebook and Twitter, and whether this would have had a negative effect on his brilliant mind.

Three-year extension recommended for Tevatron

Fermilab’s Tevatron collider looks set for a new lease on life following a campaign to keep the facility running beyond the end of 2011. A report by Fermilab’s Physics Advisory Committee (PAC) due to be released later today is expected to recommend that collisions at the Tevatron continue until 2014 – long enough, say advocates, to search the entire range of likely masses for the Higgs boson, and perhaps even find the first evidence of its existence.

Pressure to keep the Tevatron running has been building for months thanks to a combination of factors, including the 15-month shutdown of the rival Large Hadron Collider (LHC) at CERN for repairs, as well as a continued stream of results from the Tevatron’s two main detector experiments, CDF and D0. In late July, a group of 38 US physicists unaffiliated with Fermilab sent a letter to Energy Secretary Steven Chu, asking the Department of Energy (DOE) to support an extension. “We feel there’s a lot to be gained from running the Tevatron for a few more years,” says Harvard University theorist Lisa Randall, one of the letter’s signatories. “It is running incredibly well, and from a pure physics standpoint, it would be such a loss to stop now.”

Independent teams working at CDF and D0 have already collected enough data to be 95% confident that the Higgs mass does not lie in the range 158–175 GeV – a range near what CDF co-spokesperson Rob Roser termed the collider’s “energy sweet spot”. A three-year extension would produce enough data to broaden this search to the rest of the 114–185 GeV mass range that theorists deem most likely to contain the Higgs, and provide at least a 3-sigma level of confidence in any signal observed. Although not enough to qualify as a “discovery”, a 3-sigma signal would still constitute important “evidence” for the elusive Higgs, Roser says.

Keeping the Tevatron going would not be without costs, however. In addition to the estimated $60m annual cost of running the collider, an extension would also delay other Fermilab projects. One of the hardest hit would be the NOvA neutrino experiment, which is due to start in 2013. NOvA is designed to study neutrinos produced when protons from Fermilab’s main injector ring slam into a graphite target, but continuing operations at the Tevatron would leave fewer protons available for the NOvA beam. According to NOvA co-spokesperson Gary Feldman, the combined effects of reduced beam power and inconvenient shutdown periods associated with a three-year Tevatron extension would halve the amount of data NOvA could collect in its first three years of operation.

The recommendations of the PAC, which is composed of 14 external scientists and one Fermilab employee, are not binding and it will be up to the lab’s director, Pier Oddone, to determine an official lab response. One possible compromise would be to extend Tevatron operations for just one year. This would only delay NOvA by six months, thanks to overlapping upgrade and shutdown schedules, Fermilab deputy director Young-Kee Kim told physicsworld.com ahead of the PAC meeting on 27 August. A yearlong extension could also meet with DOE approval relatively quickly, she added, noting that the Tevatron received a similar one-year reprieve less than a year ago. “The whole community would like to see this extension happen, but we have to consider the impact on other projects,” Kim told physicsworld.com. “We will make somebody unhappy whatever we decide.”

Hubble’s greatest hits

What is your favourite image from the Hubble Space Telescope? The Hubble Deep Field? The Eagle Nebula? The Whirlpool Galaxy? Maybe a portrait of one of the planets? Undoubtedly, you will have seen dozens of Hubble images, because they are everywhere – newspapers, magazines, TV shows, the backgrounds of webpages, even on greetings cards. This proliferation of gorgeous pictures has propelled Hubble to an iconic status that no other scientific instrument can match. Just ask someone on the street to name a telescope. The answer will almost certainly be the Hubble Space Telescope, even though many other instruments have also helped make the last few decades a golden age of astronomy.

Hubble’s images are so captivating largely because of the telescope’s location above the Earth’s atmosphere. Placed in orbit 20 years ago by the Space Shuttle Discovery, Hubble circles the Earth once every 96 min at an altitude of about 560 km – not all that far above our heads. Compared with the distances of its cosmic targets, this means that the telescope is virtually scraping the Earth’s surface.

However, this is still high enough to overcome the greatest obstacle to precise astronomical observations: Earth’s problematic atmosphere, which blocks most forms of electromagnetic radiation that astronomers would like to observe. Even visible light does not pass through unaltered. The view of space provided by a large ground-based telescope is somewhat blurry because atmospheric turbulence subtly refracts and deflects the paths of light beams by ever-shifting amounts. If uncorrected, these deflections smear out the images of distant stars, which would otherwise appear nearly point-like, so that they have an angular size of about an arcsecond, depending on atmospheric conditions. Aside from the effect we know as twinkling, the human eye cannot detect this blurring, but it seriously compromises astronomical studies that require the utmost sharpness of vision. From its perch in space, Hubble has a great advantage. It does not have to contend with the blurring effects of the atmosphere and so can consistently deliver magnificently detailed images.
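
The size of that advantage is easy to estimate from the standard diffraction formula θ ≈ 1.22λ/D; a quick sketch with illustrative numbers compares Hubble’s theoretical resolution with typical atmospheric seeing:

```python
import math

# Standard diffraction limit theta ~ 1.22*lambda/D for Hubble's 2.4 m
# mirror at visible wavelengths, versus ~1 arcsec atmospheric seeing.
# Illustrative numbers only.
wavelength = 550e-9     # m, visible light
mirror_diameter = 2.4   # m, Hubble's primary

theta_rad = 1.22 * wavelength / mirror_diameter
theta_arcsec = math.degrees(theta_rad) * 3600
print(f"Hubble diffraction limit: {theta_arcsec:.3f} arcsec")  # ~0.06
print("typical ground-based seeing: ~1 arcsec")
```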

Beyond the beauty, Hubble’s celebrity status has received an added boost from its dramatic history. In the years leading up to its launch in 1990, the Hubble Space Telescope was heavily hyped as the greatest advance in astronomy since Galileo first pointed a telescope at the heavens. To take advantage of the pristine view from space, NASA commissioned one of the most precisely ground mirrors ever fashioned to serve as Hubble’s primary optical element for gathering and focusing light. Immensely high expectations then gave way to deep despair after launch when the telescope could not be properly focused and the primary mirror was found to be at fault. The mirror’s surface was amazingly smooth and precise, but its shape was incorrect because the testing equipment used in the grinding process had been improperly assembled. As one US senator put it, the billions of dollars spent on Hubble had produced a “techno-turkey”.

But one special feature of Hubble’s design offered hope. Unlike other telescopes in space, Hubble was built to be serviced by visiting astronauts. It was always part of NASA’s plan to upgrade Hubble periodically, adding newer, more capable instruments every few years during the course of the mission. And in 1993, astronauts on board the Space Shuttle Endeavour delivered and installed corrective optics that precisely compensated for the primary mirror’s disfigurement, thereby restoring the superior vision that has now produced an impressive gallery of images and reams of scientific discoveries.

Each subsequent servicing visit has left a newer, even more improved version of the telescope. The improvements have been so significant that it makes sense to group this retrospective of Hubble’s accomplishments into five different periods, each punctuated by a servicing mission that installed new instruments. Through this string of scientific smash hits the telescope has earned a prominent place in astronomical history, but at the beginning of the mission its success was not at all assured.

Struggling to survive (1990–1993)

If the first servicing mission had failed to correct Hubble’s vision, then the entire programme might have been cancelled. Yet despite the telescope’s abysmal popular reputation during its first three years, it was still managing to do valuable scientific work. Even with a flawed primary mirror, Hubble remained the finest telescope ever built for observing short-wavelength ultraviolet light, which cannot penetrate the Earth’s atmosphere. One of the key projects identified for Hubble prior to launch did not require excellent image quality, because it made use not of its imaging cameras, but of its spectrographs.

That key project was to study the gas between galaxies. Most of the gas in the universe is outside of galaxies, where it is extremely difficult to study. Intergalactic gas emits virtually no light, and most of what we know about it comes from spectroscopic studies of distant quasars. As light from a quasar journeys to the Earth it passes through numerous intergalactic gas clouds. Each cloud absorbs a little bit of light at a few specific wavelengths that depend on the composition and ionization state of the cloud. So when the quasar spectrum arrives at the Earth, it carries information similar to that in a geological core sample, telling us about the conditions in intergalactic space all the way along the pencil-thin line of sight back to the quasar.

Much of the information is encoded in the ultraviolet part of the spectrum because that is where atoms interact most strongly with electromagnetic radiation. Prior to the Hubble mission, studies of intergalactic gas were restricted to very distant clouds whose cosmological redshifts were great enough to shift the ultraviolet features of interest into the visible band, thus making them observable with ground-based telescopes. Hubble’s ultraviolet observations of quasars throughout its mission have therefore helped fill a giant gap in our understanding of intergalactic space, showing us how the conditions between galaxies have changed over the last few billion years of cosmic time.

Golden oldies (1993–1997)

Once the December 1993 servicing mission had replaced Hubble’s main camera and fitted the other instruments with corrective optics, the telescope was finally ready to make history. Its first big splash came a few months later, with an unusual comet known as Shoemaker–Levy 9 that was due to collide with Jupiter in July 1994. Tidal forces during a previous close encounter with Jupiter had already torn the comet to pieces, leaving a string of fragments that would plunge one by one into the planet’s cloud tops. A collision of this magnitude happens in our solar system only once every few hundred years, and Hubble was repaired just in time to watch it unfold, providing the most detailed pictures available of the fragments’ impact sites (see “Comet impact site”). This event coincided with a rapid increase in public use of the Internet, and the impact images were among the earliest to be swapped on the Web.

Another landmark Hubble image released the following year showed the now-famous Eagle Nebula (see “Star formation”), its bulbous columns of glowing gas highlighting for the public the intricate detail with which the Hubble telescope can capture the beauty of space. In this star-forming region of our galaxy, gravitational forces within the densest, darkest parts of the cloudy columns are drawing gases together to produce new stars, and ultraviolet light from young, nearby stars is eroding the columns and causing them to glow. This is just one of the many popular Hubble images that illustrate the lifecycles of stars. For astronomers, seeing such images has been like looking up answers to their stellar-evolution homework in the back of the book. Many of Hubble’s findings about stars were anticipated by earlier models but received stunning confirmation with the vividly detailed images the telescope provides.

Perhaps the most historic image from this period is the Hubble Deep Field (see “Hubble Deep Field”), released in January 1996. At the time, it was the deepest view of space ever recorded. The location in the sky was chosen to be completely ordinary, so that it would be representative of the universe at large. The telescope pointed at this spot for 10 days solid, gathering as much light as possible and trying to detect the faintest and farthest galaxies it could possibly discern. Measuring the redshifts of these galaxies shows that light from the most distant ones in the image has taken more than 10 billion years to reach us. Most of cosmic history is therefore represented in this image, and it has been crucial to establishing how the overall formation rate of stars and galaxies in the universe has changed over time.

The youngest, most distant galaxies in the Hubble Deep Field image look smaller and far less settled than present-day galaxies. This indicates that galaxy collisions were common early in the history of the universe and supports the idea that galaxies grew hierarchically, as smaller objects merged to form bigger ones. This active period of merging and galaxy construction gradually tailed off as the universe continued to expand, and the slowdown in galaxy construction coincides with a slowdown in star birth. The rate of star formation in the universe appears to have topped out between two and four billion years after the Big Bang and now creeps along at less than a 10th of its maximum value.

Classic hits (1997–2002)

Instant discoveries were possible during the first few years after Hubble’s initial repair, as each new image revealed never-before-seen details in the sky. A second repair mission in 1997 further expanded Hubble’s capabilities with a new infrared camera and a new ultraviolet spectrograph. But some of the telescope’s more lasting scientific accomplishments were made with earlier equipment and simply required several years of observation and analysis to bear fruit.

One such study was another of the key projects identified for the mission before launch: a large programme to measure the universe’s expansion rate with unprecedented precision. The telescope’s namesake, Edwin Hubble, discovered the expansion of the universe in 1929 by demonstrating a proportional relationship between the distances of galaxies outside our Local Group and the speeds at which those galaxies are moving away from us. The quantity expressing the expansion rate, H0 = (recession speed)/(galaxy distance), has been dubbed the Hubble constant in his honour. Knowing its value is critical because it tells us the age of the universe (approximately 1/H0). It can also be used to derive the distances of galaxies (which are otherwise difficult to measure) from their recession speeds, which are easy to determine from the redshift of each galaxy’s spectrum.
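
The step from an expansion rate to an age is a one-line calculation once the units are sorted out; here is a sketch using an illustrative value of H0 close to the key project’s eventual answer:

```python
# Age estimate t ~ 1/H0, with the unit bookkeeping spelled out. H0 here
# is an illustrative value close to the key project's 2001 result.
H0 = 72.0                          # km/s per Mpc
km_per_Mpc = 3.086e19
seconds_per_year = 3.156e7

H0_per_second = H0 / km_per_Mpc    # convert to 1/s
age_years = 1.0 / H0_per_second / seconds_per_year
print(f"1/H0 = {age_years:.1e} years")   # ~1.4e10, about 14 billion
```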

However, in order to measure H0 precisely, one must somehow measure distances to a sample of galaxies without using this convenient relationship. The key project to measure H0 applied the same Cepheid-variable technique that Edwin Hubble used to make his original distance measurements to gauge galaxy distances up to 100 million light-years away. Cepheid variables are stars that pulsate in size and brightness at rates depending on their masses. They are ideal for measuring galactic distances because they are among the most luminous of stars and the variation period of a Cepheid is directly related to its total light output. Measuring both the apparent brightness and variation period of a Cepheid therefore provides enough information to determine the distance to the galaxy in which it resides. One simply infers the total light output of the Cepheid star from its pulsation period and calculates the distance at which a star with that light output would have the observed level of apparent brightness. Hubble is particularly suited to the job because its ultra-sharp vision allows astronomers to identify and measure individual Cepheid variables in galaxies that would be hopelessly blurry in ground-based images (see “The universe’s age and expansion rate”). In 2001 the key project team announced a measurement of H0 that was good to within 10%, pegging the universe’s age at about 14 billion years.
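
In outline, the Cepheid method reduces to two steps – a period–luminosity relation and the inverse-square law – sketched below in the astronomer’s magnitude form (the coefficients are schematic placeholders, not the key project’s calibration):

```python
import math

# Sketch of the Cepheid distance ladder: a schematic period-luminosity
# relation gives the star's absolute magnitude M, and the inverse-square
# law in magnitude form, m - M = 5*log10(d / 10 pc), gives the distance.
# The coefficients below are placeholders, not the project's calibration.
def absolute_magnitude(period_days):
    return -2.8 * math.log10(period_days) - 1.4

def distance_pc(apparent_mag, absolute_mag):
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

M = absolute_magnitude(30.0)   # a 30-day Cepheid
d = distance_pc(26.0, M)       # observed at apparent magnitude 26
print(f"M = {M:.2f}, distance = {d / 1e6:.0f} Mpc")   # ~20 Mpc
```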

Meanwhile, astronomers taking advantage of Hubble’s distance-measuring capabilities to survey still greater distances made one of the most astonishing scientific discoveries of the 20th century. The discovery relied on observations of supernovae produced by exploding white-dwarf stars, which conveniently all have approximately the same peak light output. This property enables astronomers to determine a supernova’s distance by observing the maximum apparent brightnesses of white-dwarf supernovae. As with Cepheids, one knows the total light output and can calculate the distance at which such an object would have the observed apparent brightness. Using such distance measurements, in 1998 two separate teams of researchers showed that the expansion rate of the universe has been accelerating during the last several billion years, a result that shocked astronomers, who had expected the attractive force of gravity to be slowing the expansion rate. Some unknown form of energy, dubbed “dark energy”, is thought to be the impetus for the accelerating expansion, and efforts to measure dark energy and its equation of state are now a major theme of modern astronomy and cosmology.

Hubble’s signature contribution to measurements of the accelerating expansion was again its superior vision. Most of the supernovae used in these studies were discovered with ground-based telescopes. However, in a ground-based image it is difficult to distinguish the light output of a distant supernova from the light output of the galaxy that contains it. With Hubble, subtracting the galaxy’s light to determine the peak brightness of the supernova alone is far easier (see “Evidence for dark energy”), thus enabling much more accurate distance measurements. Furthermore, one of the new instruments installed in 1997 – the infrared camera (called NICMOS) – was able to find highly redshifted supernovae even more distant than those discovered from the ground, providing distance measurements that cemented the finding of accelerating expansion.

The other instrument installed in 1997, the STIS spectrograph, is most notable for helping to identify and measure the masses of supermassive black holes at the centres of nearby galaxies. Black holes themselves are invisible, so to prove that these exotic objects really exist, astronomers must measure their gravitational effects on stars and gas clouds in orbit around them. Orbital speeds are best measured spectroscopically, using the Doppler effect, and so from above the Earth’s atmosphere Hubble’s spectrographs were able to map out the orbital patterns at the centres of galaxies in more detail than ground-based telescopes. Measuring black-hole masses in this way has shown that almost all galaxies have black holes at their centres, and that the mass of a central black hole is directly related to the total mass of the stars making up the galaxy’s central bulge.
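
The underlying estimate is essentially Keplerian: material orbiting at speed v at radius r implies an enclosed mass M ≈ v²r/G. A sketch with invented but typical numbers gives the right order of magnitude:

```python
# Keplerian mass estimate M ~ v^2 * r / G from a Doppler-measured
# orbital speed at a known radius; the numbers are invented but typical.
G = 6.674e-11       # m^3 kg^-1 s^-2
M_sun = 1.989e30    # kg
parsec = 3.086e16   # m

v = 5.0e5           # gas orbiting at 500 km/s...
r = 10 * parsec     # ...ten parsecs from the galactic centre

mass = v**2 * r / G
print(f"central mass ~ {mass / M_sun:.0e} solar masses")   # ~6e8
```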

Staying power (2002–2009)

Hubble’s next major upgrade in 2002 added a more sensitive camera, the Advanced Camera for Surveys (ACS), which greatly expanded the telescope’s capacity for gathering deep images of the universe. One of its early tasks was observing the Hubble Ultra Deep Field, an even deeper look into the universe than the Hubble Deep Field from 1996. The Advanced Camera was also the workhorse of the Cosmic Evolution Survey (COSMOS), the largest sky survey done by Hubble, which captured images of more than two million galaxies in a two-square-degree swathe of sky.

Among the most important accomplishments of this survey was the mapping of dark matter associated with those galaxies. Most of the matter in the universe emits no light and is thought to be made of particles different from those found in normal atoms. The best method for mapping this dark matter relies on gravitational lensing – the bending of light beams as they pass near large clumps of matter. Hubble images of giant galaxy clusters, which can contain up to 10^15 times the mass of the Sun, beautifully illustrate how the presence of a large mass distorts our view of the galaxies lying behind it (see “Gravitational lensing”). Lensing studies with Hubble, including the COSMOS survey, have helped to show that the distribution of dark matter in the universe closely agrees with predictions of structure-formation models that assume dark matter to be made of as-yet-undiscovered subatomic particles, which interact only through gravity and the weak force.
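
The characteristic angular scale of such lensing is the Einstein radius, θE = √(4GM/c² × Dls/(DlDs)). A rough sketch with illustrative distances (and ignoring cosmological subtleties in the distance measures) shows why clusters of this mass distort background galaxies over tens of arcseconds:

```python
import math

# Einstein radius theta_E = sqrt(4GM/c^2 * D_ls/(D_l*D_s)) for a
# 1e15-solar-mass cluster. Distances are illustrative, and D_ls is
# approximated crudely, ignoring cosmological corrections.
G, c = 6.674e-11, 2.998e8
M_sun, Mpc = 1.989e30, 3.086e22

M = 1e15 * M_sun                  # cluster mass, as quoted in the text
D_l, D_s = 1000 * Mpc, 2000 * Mpc # lens and source distances
D_ls = D_s - D_l                  # crude flat approximation

theta_E = math.sqrt(4 * G * M / c**2 * D_ls / (D_l * D_s))
print(f"theta_E ~ {math.degrees(theta_E) * 3600:.0f} arcsec")
```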

Closer to home, Hubble’s Advanced Camera has also been useful for observing planets around other stars. Monitoring of the bright star Fomalhaut, just 25 light-years away, has produced the first visible-light snapshot of a planet orbiting another star (see “Extrasolar planets”). Obtaining direct images of such extrasolar planets is a daunting challenge because a star generally outshines its planets by many orders of magnitude. Some of Hubble’s cameras are therefore equipped with coronagraphs that can block the light from a bright central star, much as one might use a hand to block the glare of the Sun, to allow us to see planets or disks of gaseous debris that might be in orbit around it. The composite image shown here reveals a planet orbiting Fomalhaut at 10 times Saturn’s orbital distance, and tracks the planet’s motion from 2004 to 2006.

During this same time period on Earth, enormous uncertainties plagued the Hubble mission. Many components on the spacecraft were aging beyond their design lifetimes, and the gyroscopes crucial for steadying the telescope’s orientation in space were failing. One final servicing mission had been planned to ensure continued operation of the telescope through 2010, but it was cancelled in 2004 by NASA chief Sean O’Keefe in response to the loss of the Space Shuttle Columbia. In this case, Hubble’s celebrity status was its salvation. Vocal public response to the cancellation was decidedly negative, and in 2006 one of the early actions of O’Keefe’s successor, Michael Griffin, was to reinstate Hubble’s final servicing mission.

The farewell tour (2009 onwards)

The final servicing mission ultimately happened in May 2009. By that time Hubble was severely crippled. Three of its main instruments – ACS, STIS and NICMOS – were dormant, and the telescope desperately needed new gyroscopes, batteries and a computer for science operations. However, astronauts on the Space Shuttle Atlantis were able to replace all the required components, resuscitate ACS and STIS, and install two new instruments. One of the new instruments, the Cosmic Origins Spectrograph, is Hubble’s most sensitive spectrograph yet and will expand the ongoing programme of quasar spectroscopy to provide many more core samples of intergalactic gas. The other new instrument, the Wide Field Camera 3, is the telescope’s most sensitive camera and has already identified galaxies more distant than any previously found in Hubble’s deepest fields. And, of course, the newest camera is also providing more spectacular images to decorate the front pages of newspapers and magazines (see “Dying stars”).

If all goes well, the Hubble Space Telescope will continue to make landmark discoveries until its successor comes online. That should happen in 2014, when the James Webb Space Telescope finally makes it into orbit. With a primary mirror more than seven times greater in collecting area than Hubble’s, it should be able to detect galaxies even more distant than those found by Hubble and may even show some of the very first galaxies in the process of formation. Expectations are high, but the new telescope has a hard act to follow. Hubble’s extended string of smash hits will be difficult to beat.

Body talk

Earlier this summer I witnessed a showdown between two advanced optical-imaging technologies. It took place in Haverhill, Massachusetts, where the famous US clothing chain Brooks Brothers operates its Southwick manufacturing plant. The event was organized by Joseph Antista, Southwick’s director of training, who had two 3D-body-scanning companies set up their latest models. Being a fan of physics in the marketplace, I decided to attend.

The rivals were the Tailored Clothing Technology Corporation or [TC]2, which is the US market leader in body-scanning, and Human Solutions, a German firm that is the world market leader. The [TC]2 model stood to the left, a black, closet-like booth about 2 m high and about 1.2 m by 1.5 m. To the right was its challenger, consisting of three 3 m towers positioned in a triangle about 2 m on each side. Each device had been designed by a team that included several programmers and engineers and, as it happened, a physicist. Each required an hour to set up, but could be plugged straight into an electrical outlet. Each needed but a few seconds to scan a customer.

Fitting is a dying art, Antista told me, and hand measuring is time-consuming and expensive, which is why he was testing the two devices for possible use in Brooks Brothers’ 100 or so stores and numerous outlets. (Cyberware, a distant competitor based in Hollywood, caters to a more hi-tech niche.)

While Antista studied the data, I ran my own independent test.

Body image

I tried [TC]2 first. Its technology, dubbed “Structured Light Assisted Stereo 3-D Sensing”, uses 16 pairs of cameras that triangulate the positions of points of light projected onto the body. I undressed in a changing room, put on a neutral-coloured garment called “scanwear” and stepped inside.

A pleasant recorded female voice instructed me to stand in place, grip the handholds and push a button. Lights flashed, projecting patterns on my body while the camera gains were optimized for my skin colour. Then the image-acquisition process began, with striped patterns used to “structure” the imaging to facilitate triangulation. While stereo sensing with white light is a well-known range-finding technique – it was used, for instance, on the Mars Rover, where a pair of cameras gauged distance – a lot of computing power is needed to calculate the locations of hundreds of thousands of positions on the body in a few seconds. Only recently have computers become fast and cheap enough to be used in body-scanning.
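
Stripped of the engineering, the triangulation at the heart of such scanners is the pinhole-stereo relation z = fB/d: depth from focal length, camera baseline and image disparity. A minimal sketch with invented numbers:

```python
# Pinhole-stereo triangulation: a surface point seen by two cameras a
# baseline B apart shifts by a disparity d between the two images, and
# its depth is z = f*B/d. All numbers below are invented.
f = 0.008            # m, camera focal length
B = 0.30             # m, baseline between the camera pair
disparity = 0.0006   # m, shift of the same point between the images

z = f * B / disparity
print(f"distance to body point: {z:.1f} m")   # 4.0 m here
```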

The experience was “quick, easy, safe, private”, just as the [TC]2 vice-president for technology development David Bruner assured me. The device has no moving parts. To use white light, Bruner explained, the technology was optimized for scanning human beings in upright poses. [TC]2‘s scanners are already found at numerous US universities and clothing stores, and at several UK institutions, including University College London Hospital, where it helps researchers to study obesity and children’s health issues.

Then I tried the Human Solutions device, which uses lasers rather than white light. Roy Wang, a Human Solutions representative who studied physics at the University of Toronto, explained that triangulation is easier with lasers because the measurements are more precise – though some markets, such as the US, unreasonably fear lasers for alleged safety reasons. Each of the three towers in the device has a laser and camera on a moveable mount. As I was scanned, the mounts slowly descended, projecting a horizontal red line that the cameras used to acquire my body’s cross-section.

Both technologies produced a cloud of hundreds of thousands of data points, assembled it into a 3D model, extracted key measurements and produced an “avatar” – an aesthetic, skin-covered visualization that could be used to simulate how clothes would look on the wearer. Both machines had comparable measurement accuracy. The [TC]2 machine is somewhat cheaper and optimized for clothing. The Human Solutions model is optimized for research purposes and can also measure bodies in a sitting position. It has found applications in ergonomic research, such as at NASA’s Houston facility, where it is used to map the bodies of astronauts in various positions.

Both machines can simulate moving avatars. “Women, being curvier than men, may want to see how belts and appliqués [decorations] will look in different positions while in motion,” Wang told me. “It’s not trivial!” He then put a belt and an appliqué on a simulated female avatar, and varied the properties of her clothing, such as the fabric’s elongation, compression and bending rigidity. The look changed dramatically. “Now watch this!” Wang tweaked the dials so that the simulated woman’s clothes caught on a tree branch, and we watched how that particular fabric reacted when snagged.

The critical point

In each case, the one negative moment of my experience was seeing a pitilessly precise 3D model of my near-naked body. My reaction was not untypical, Antista explained. “The technology’s too good!” he laughed. So to avoid the embarrassment of looking at one’s unclothed body, customers will be shown their avatars already dressed in the desired clothing.

I asked Antista if the 3D measurements were inferior to those of traditional fitters. “Their skill was not in the measuring,” he replied. “It was in knowing what would happen when you clothe a person. A measurement simply helps a fitter decide, ‘What am I going to put on her?’ A person is not a piece of wood. A person is alive.”

Antista picked up part of a suit that a seamstress was preparing. “See the fabric? It’s alive, too. It has seven layers of different fabric that react differently when it moves, and changes when it’s cleaned. A suit is also not a piece of wood.”

Whatever consumers may think of 3D body-imaging, one thing is clear: its entry into custom-made tailoring is another great example of how massive, cheap computing power is making hitherto hi-tech applications of physics more routine.

Confronting fraud in science

Scientists have a macabre fascination with fraud. Some of the most famous recent cases – including those of the physicists Victor Ninov and Jan Hendrik Schön, who falsified results in nuclear physics and nanotechnology, respectively – have remained hot topics for years, continually generating investigations, articles and invited talks at scientific conferences. Part of the reason for this fascination is the damage created by fraud, which not only pollutes the scientific sea, but also causes other scientists, including graduate students, to pursue research in erroneous directions. The cost in time, funding and hampered or even destroyed careers is incalculable.

Another part of the fascination is that the typical scientist, when confronted with clear fraud, often remains in denial. As scientists, we train ourselves to detect what Irving Langmuir called “pathological science”, in which practitioners park their scientific method outside their laboratories and replace it with wishful thinking. We have learned to review articles and listen to presentations while carefully considering any possible over-interpretations, including those that stem from ignoring data points that do not fit the theory the author or speaker “believes” to be correct. Being human, we have all been guilty of pathological science to some degree; being scientists, we rely on colleagues to guide our way to new knowledge and understanding through discussions, reviews and reproducibility. Actual premeditated lying – such as pulling data points out of thin air – is, however, in a different realm and is more consistent with sociopathic behaviour. We generally have no defence against this, and as history has shown, we are slow to detect it.

David Goodstein’s book On Fact and Fraud: Cautionary Tales from the Front Lines of Science offers a short and engaging education for those who want to know more about understanding and detecting true fraud. The author is well known in the field of condensed-matter physics, being recognized both for his research and for his outstanding, indeed seminal, 1975 textbook States of Matter. In 1988 he became the vice-provost of the California Institute of Technology (Caltech), where he pursued issues of science and society, with a focus on scientific misconduct. Not only does he have vast experience, but he also co-developed and taught a course on scientific ethics for more than 10 years. Perhaps most importantly, while at Caltech he drafted one of the first formal policies on scientific misconduct. His thoughtful and groundbreaking procedures have since guided many of our great universities in dealing with such difficult matters.

After discussing how to detect pathological science, Goodstein presents a set of 15 precepts on how science works, including the charge that scientists must “bend over backwards” to report any way in which they might be wrong. He then succinctly describes our scientific “reward system” – a mechanism for promoting star performers that is closely linked to the scientific “authority structure”. Goodstein defines this structure as an 11-step ladder that starts with being admitted to a prestigious college and culminates in prizes such as named professorships, membership in the National Academy and a Nobel prize.

The recipe for success becomes less well defined at each level, as the gatekeepers, who are generally a few rungs higher, gain ever more power over the rewards. As Goodstein explains, this system has its roots in the 17th century with the first experimental physicist (Galileo) and the first scientific research laboratory (created by Robert Boyle). Although this reward system has many virtues, one of its faults is that it tends to benefit those who are already successful and necessarily lends itself to self-promotion, including the over-interpretation of results. This is clearly a less serious violation than fraud, but it nevertheless gets in the way of good science.

A particularly ingenious way that Goodstein educates us about detecting misconduct is by describing a case where accusations of fraud have turned out to be false. Robert Millikan was the first to determine the charge of a single electron with his ingenious oil-drop experiments, which won him the 1923 Nobel Prize for Physics. The accuracy of Millikan’s results was not doubted until 1984, when the scientific research society Sigma Xi published a booklet entitled Honor in Science. This booklet (compiled by Sigma Xi executive director C Ian Jackson) calls Millikan’s work “one of the best-known cases of cooking” and claims that Millikan selected the droplets that gave the “right answer”. Goodstein takes us through Millikan’s published paper and notebooks, and based on their evidence, he exonerates him of scientific misconduct and other distasteful behaviour. In doing so, Goodstein reminds us that any inquiry into scientific misconduct must investigate all sides of the story, and must itself apply the scientific method.

Goodstein also describes unambiguous fraud cases in some detail. Of these, the Schön case is the obvious poster child. As eloquently described in Eugenie Reich’s book Plastic Fantastic (see Physics World May 2009 pp24–29, print edition only), Schön did not extrapolate results such as single-molecule transistors as if he were a victim of pathological science, but instead clearly fabricated data. Goodstein asks “Did he believe he knew the right answer?”, but in fact, it is so obvious that Schön committed premeditated fraud that the answer to that question is irrelevant.

Cold fusion presents a less clear-cut case. Although Stanley Pons and Martin Fleischmann’s 1989 breakthrough-that-wasn’t is often believed to be fraudulent, Goodstein puts forth arguments that both men were victims of extreme pathological science. He cites the Mössbauer effect as another implausible nuclear phenomenon; if that can exist, he argues, why should it be “obvious” that cold fusion cannot? In fact, although the Mössbauer effect (in which atoms bound in a solid absorb and emit gamma-ray photons without any recoil) was considered surprising at the time of its discovery in 1957, it was reproduced quickly, explained theoretically and remains an important tool in condensed-matter physics today. In contrast, cold fusion has been vigorously investigated for more than 20 years but has not been reproduced, nor explained theoretically. As a 1989 report from the US Department of Energy stated, “Nuclear fusion of the type postulated would be inconsistent with current understanding and if verified, would require theory to be extended in an unexpected way.” Still, Goodstein provides a very interesting historical account, noting that there were many mistakes on all sides. Evidently, this case remains under debate.

In contrast to cold fusion, the phenomenon of high-temperature superconductivity (HTS) is described as something that seemed too good to be true but was not. From the discovery of superconductivity in 1911 until 1986, the critical temperature for its onset, Tc, increased only incrementally, from about 4 K to just less than 30 K. During that period there were many claims of HTS, all of which were easily shown to be erroneous; in cases where the “proof” was not as simple as demonstrating a measurement error, other scientists’ inability to reproduce the supposed effect was certainly enough.

Then, in 1986, Georg Bednorz and Alex Müller reported that they had measured a Tc of about 40 K in a LaBaCuO compound. Their discovery was first made known to most of the community at a Materials Research Society meeting in Boston, where two other independent and eminent scientists, Ching-Wu (Paul) Chu from Houston and Koichi Kitazawa from Tokyo, reported similar findings. Many of those present were convinced enough to repeat the experiments. Within weeks, the results were being reproduced in dozens of laboratories worldwide. The following January, a new compound with Tc ~ 90 K was announced; it too was widely reproduced within a month. Today, any new claim of HTS is met with well-equipped and capable laboratory scientists all over the world, so if the claim is not reproduced broadly and quickly, then it is not taken seriously. We have learned from this field that the most important diagnostic for determining scientific fact is reproducibility.

Goodstein concludes his book with a clear summary of what we have learned. Perhaps most importantly, he includes, as an appendix, Caltech’s pioneering policy on misconduct. This is the policy foundation of how to deal with scientific misconduct, already adopted by many universities, and it should be required reading for scientific researchers and administrators.

Several excellent books and articles have been written on the broad subject of fraud in science. In addition to Langmuir’s seminal articles and Reich’s book mentioned earlier, the books Flim-Flam by James Randi, Voodoo Science by Bob Park and the short article “How to judge flawed science” (2005 MRS Bulletin 30 75) by Ivan Schuller and Yvan Bruynseraede have been particularly helpful in educating our community. Since scientific fraud is not going away, we need greater understanding and education to help us detect and deal with it. David Goodstein’s book fulfils an important need. This is a valuable book and one not to be missed.

Knowledge from a vast effort

In climate science as in quantum mechanics, the observer is part of the system being observed. We can no more take ourselves out of the picture when gathering data and developing models of the Earth’s climate than we can eliminate the detector from a double-slit experiment. The fact that knowledge cannot exist on its own – we are all part of our own knowledge – means that sometimes, knowing how we know the things we know is as important as the knowledge itself. Such is the deeper message of Paul N Edwards’ new book A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming.

The range of subjects covered by A Vast Machine is indeed vast, involving fields as diverse as philosophy, history, computer systems, networks, climate models, politics and globalization. I found it refreshing to see how such a broad range of topics relate to each other, and the sheer scope of the book ensures that most physicists will find something to interest them. Edwards takes a novel view in explaining how various types of infrastructure – including systems for collecting data and sharing knowledge, which involve both technical and social networks – are essential for the design of climate models. As an information scientist at the University of Michigan in the US, he also exhibits a deep understanding of computers, programming, climate models, data and physics. His explanation of how satellite-borne instruments measure the atmosphere by interpreting measured radiances through vertical emission profiles is useful and clear, as is the discussion of how weather models incorporate observations to derive the best description of the current weather.
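
The flavour of that interpretation step can be captured in a toy forward model: each satellite channel records a weighted average of the atmospheric profile, and retrieval means inverting that map. Everything below is invented for illustration:

```python
import numpy as np

# Toy forward model of satellite sounding: each channel's radiance is
# a weighted integral of the atmospheric profile, y = K x. All numbers
# here are invented for illustration.
levels = np.linspace(0.0, 30.0, 7)                   # altitude grid, km
x_true = 280.0 - 6.5 * np.clip(levels, 0.0, 12.0)    # schematic temperatures, K

# Gaussian weighting functions: each channel "sees" a different band
K = np.array([np.exp(-0.5 * ((levels - h) / 4.0) ** 2)
              for h in (2, 8, 15, 22)])
K /= K.sum(axis=1, keepdims=True)

y = K @ x_true   # simulated channel radiances
print(np.round(y, 1))
# Recovering x from y is an ill-posed inverse problem (four numbers
# cannot pin down seven), which is why retrievals lean on model priors.
```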

I particularly liked the way that Edwards deliberately breaks down old barriers by blurring the difference between data and models. One common misconception about climate science is that conclusions drawn from “data” are more accurate than predictions based on “models”. In fact, most observations involve some kind of model for interpreting measured signals in terms of atmospheric variables; so without models, there will be no data. However, Edwards also points out that most models involve data: the truth is that these two forms of knowledge are inseparable, and both play a vital role in our understanding of climate. Indeed, reading about the relationship between the two reminded me of a problem that appeared in one of my old physics exams, where we were asked to define a model equation for a thermistor based on data that described its response to different temperatures. This interconnected aspect of science is often forgotten, and I found it reassuring to read that such thinking is not dead.

The book is rich with details of the history of climate and meteorology. I found it fascinating to learn about legal issues with satellites that cropped up during the Cold War. For instance, after an American U-2 spy plane was shot down over Sverdlovsk in May 1960, the Soviet Union argued that photography from space – including weather photography such as that taken by the first weather satellite, TIROS – was illegal. The sheer vastness of the effort involved in developing weather forecasts, satellites, the observational network and other infrastructure should make most people humble, and Edwards’ description of it serves as an important reminder that our weather forecasts and knowledge of climate have not come without effort.

In the wake of recent allegations against the Intergovernmental Panel on Climate Change (IPCC) and the climate-research community, A Vast Machine is also quite a topical book, and reading it helps to put such allegations into their proper context. Although some climate scientists have been criticized for being unwilling to share data with their critics (as exposed in the – misnamed – “Climategate” controversy), in fact both the meteorology and climate-research communities have long traditions of openness and data sharing. Moreover, the often acrimonious public debate around climate change has produced at least one good result: climate science has been thoroughly scrutinized by various interest groups and is now one of the most carefully examined scientific disciplines.

One of the most recent developments in this tradition of openness has come from the World Meteorological Organization, which in 2009 launched an initiative for “Global climate services” that aims to provide “climate information for everyone”. According to the book, similar climate initiatives have appeared several times over the last century in various forms, which left me with a question: will they make further progress this time? Reading about some of the previous initiatives was a revelation to me, and it gave me some insight into why there is sometimes less action and progress than one would expect – thanks to various types of interpersonal and inter-organizational “friction”, as well as diplomatic and political obstacles.

Edwards has clearly spent a fair bit of time within the climate-research community, and while this lends valuable authority to his story, it is also apparent that he has adopted some of the community’s misconceptions. One statement that I have encountered several times is the notion that the climate models are “physically consistent”. This phrase appears to be meaningless, as to date I have never seen it defined. It is also well known that all climate models are, strictly speaking, not physically consistent: water and energy budgets sometimes do not add up, while depending on the choice of parametrization packages, one can easily get different models with mutually inconsistent solutions. The parametrization schemes, also known as “model physics”, are really just statistical descriptions of unresolved small-scale processes such as clouds, of which we often do not have a clear understanding. Basically, models are not the real world, and since Edwards spends a great deal of space discussing this in an articulate way, it seems a bit ironic that he still cites this statement.

Aside from this, there are not many criticisms I can make of Edwards’ book, although there are some minor issues. The acronyms he uses are not always explicitly defined, for example, and the story he tells involves several convoluted and intertwined threads. To account for all these details, the book jumps back and forth in time, resulting in a chronology that is sometimes confounded by many parallel events and may seem a little repetitive in parts. However, these faults are almost inevitable for a book that attempts to assemble a giant historical mosaic of events. So although A Vast Machine’s enormous breadth and richness sometimes verge on excessive detail and technicality, for the most part the level of detail is the book’s main strength because it allows Edwards to summarize meticulous research in an organized way. As a result, I would recommend this book to everybody interested in weather forecasting or climate change. It would make a valuable addition to the syllabus of any course on climatology or meteorology, while the general physics community will also find it useful for understanding the context of the public debate around climate change.

Quantum computing…with a twist

In 1867 Lord Kelvin (then known as William Thomson) witnessed a demonstration of a machine that could produce smoke rings. Built by his friend and fellow physicist Peter Tait, the machine was much more than an interesting novelty. At the time, smoke rings were a hot topic in physics, thanks to work by Hermann von Helmholtz and, later, Kelvin himself. Helmholtz had shown that in a perfectly dissipationless fluid, the “lines of vorticity” around which fluid flows are conserved quantities. Hence, in such a fluid the vortex loop configurations, such as smoke rings, should persist for all time. Since scientists believed that the entire universe was filled with just such a perfect dissipationless fluid, known as the “luminiferous aether”, Kelvin suggested that different knotting configurations of vortex lines in the aether might correspond to different atoms (figure 1).

This theory of “vortex atoms” was appealing because it offered an explanation for why atoms are discrete and immutable. For several years the theory was quite popular, attracting the interest of other great scientists such as Maxwell. However, after further research and failed attempts to extract predictions from it, the idea lost popularity. The theory of the vortex atom was finally killed altogether when Michelson and Morley (and later Einstein) showed that the aether does not exist.

Nonetheless, some of the theory’s proponents remained enthusiastic about it for quite some time, and perhaps its greatest supporter was Tait himself. Although initially quite sceptical, Tait eventually came to believe that by building a table of all possible knots, he would gain some insight into the periodic table of the elements. In a groundbreaking series of papers, he constructed a catalogue of all knots with up to seven crossings. (Note that mathematicians make a distinction between “knots” made from a single strand and “links” made of multiple strands. We will be sloppy and call them all knots.) Although the vortex-atom theory came to nothing, these studies made Tait the father of the mathematical theory of knots – which has since been a rich field of study in mathematics.

More recently, a remarkable connection between knot theory and certain quantum systems has emerged. This connection is now being explored both theoretically and experimentally, in part because of the promise it holds for quantum information processing. It turns out that although we cannot make atoms out of knotted strands of aether, it may be possible to make a quantum computer by dragging particles around each other to form particular types of space–time knots. And while such a “topological quantum computer” would be difficult to construct, it would offer some advantages over more conventional schemes for quantum information processing.

An invariably knotty problem

To understand how a topological quantum computer might work, we must first explore a mathematical concept called a “knot invariant”. During his attempt to build a “periodic table of knots”, Tait posed what has become perhaps the fundamental question in mathematical knot theory: how do you know if two knots are topologically equivalent or topologically different? In other words, can two knots be smoothly deformed into each other without cutting any of their strands? Although this is still considered to be a difficult mathematical problem, the knot invariant is a powerful tool that can help in solving it.

A knot invariant is defined as a mathematical algorithm that connects a picture of a knot – the input – to some output via a set of rules. The rules are chosen such that if two knots that are input are topologically equivalent, then applying the rules will always give the same output. Hence, if two outputs are different, one knows immediately that the two input knots were not topologically equivalent.

One of the simplest types of knot invariant is known as the Kauffman invariant, and it is defined by just two rules (figure 2a). Whenever we see two strands cross in our picture of a knot, we can use the first rule to replace our picture with a “sum” of two pictures, each with one fewer crossing than the original picture, and with a parameter A that acts as a kind of bookkeeping tool to keep track of how many right- versus left-handed crossings we have replaced. By repeatedly using this first rule, we can eventually reduce our picture to a sum of diagrams that have no crossings at all – they are just closed loops. We then use the second rule to replace each closed loop with the value d = –A² – A⁻², yielding a polynomial in A and A⁻¹. An example of evaluating the Kauffman invariant is shown in figure 2b, where we find (after using some algebra) that the Kauffman invariant of the double figure-of-eight picture we started with is in fact the same as that of a simple loop – which is what we should expect, given that they are topologically equivalent.

Figure 2c shows a slightly more complicated example, which reveals that the Kauffman invariant of a piece of twisted string is not the same as the invariant of the untwisted string – in fact, their invariants differ by a factor of –A³. This may seem to contradict the description above that the Kauffman invariant should be the same for any two knots that can be smoothly deformed into each other (since it appears we can smoothly deform the twisted string into the straight one). However, we should think of the strands not as being infinitely thin, but rather as having some width (figure 2d). In this case, it is easy to see that if we try to smoothly remove the twist by pulling the string straight, we actually still end up with a twist in our strand.
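
This twist factor can be checked symbolically in a few lines. In the sketch below – an illustrative calculation of my own, not code from the article – resolving the single crossing of a kinked strand via the first rule leaves one crossing-free diagram with an extra closed loop (weight d) and one that is just the plain strand, and the bookkeeping indeed yields –A³.

    # Symbolic check of the Kauffman twist factor (illustrative sketch)
    import sympy as sp

    A = sp.symbols('A')
    d = -A**2 - A**(-2)       # rule 2: each closed loop contributes d

    # Rule 1 resolves the kink's single crossing into two crossing-free
    # pictures: one with an extra closed loop (weight d), one that is just
    # the plain strand (weight 1). For the opposite-handed kink the roles
    # of A and 1/A swap, giving -A**(-3) instead.
    kink = A * d + A**(-1)

    print(sp.expand(kink))    # -> -A**3, the factor quoted above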

Different paths, same outcome

To understand the connection between knot invariants and physics, we need to think about quantum mechanics in the manner pioneered by Richard Feynman in his path-integral approach to calculating the probability of a particular quantum event. Using Feynman’s method, the probability of getting from an initial to a final configuration can be calculated by summing up the probability amplitudes of all possible processes (or “paths”) that can happen between the initial and final states.

For some very special types of quantum systems, the amplitude of a particular process will depend only on the topology of that process, not on any precise details of the process such as how fast particles move or how far apart they are. Roughly speaking, this means that particles in the system can move around each other in many different ways – they can even be created out of the vacuum as particle–hole or particle–antiparticle pairs – but if the paths they trace out in space–time are topologically equivalent, then those paths will be equally probable. Systems that obey this rule are known as topological quantum systems, and the theories describing their behaviour are called topological quantum field theories (TQFTs).

This brings us to a rather remarkable conclusion: in a topological quantum system, the amplitude for a particular process is a knot invariant of the space–time paths traced out by the particles during that process. Do not worry if this connection seems less than obvious. The mathematical physicist Ed Witten, who was the first to make it, won the highest honour in mathematics – the Fields Medal – for his achievement. The essence of his insight is that a knot invariant is defined as an output that depends only on the topology of the knot input, and the amplitudes in a TQFT also depend only on the topology of the knot formed by the particles’ paths through space–time. So, the amplitudes must be knot invariants.

Before we proceed, it is worth emphasizing that when we talk about particle paths, we are actually considering world-lines – paths in both space and time. For example, if we have a 2D (flat) physical system, we must view time as the third dimension. Hence, a particle at rest will trace out a straight space–time world-line in two spatial dimensions and one time dimension, while two particles orbiting around each other in two dimensions will form a double helix or two-stranded braid in (2 + 1) dimensions. All the TQFTs that we will be interested in are indeed 2D (flat) systems where we can think of the third dimension as time.

To see an example of how the correspondence between amplitudes and knot invariants works in practice, let us return for a moment to figure 2d. In the language of the TQFT, we see that a particle that rotates around itself, creating a twisted path in space–time, accumulates a factor of –A³ compared with a particle that does not rotate around itself. This additional factor obtained by evaluating the Kauffman invariant is actually something we should expect if we think about amplitudes for world-lines of particles in a quantum-mechanical system. We know that when a particle rotates in quantum mechanics, it will generally pick up a phase due to its spin – and indeed in all physically realizable TQFTs, –A³ is just such a phase.

Multiparticle systems

Perhaps the most important property of these TQFTs occurs when there are multiple particles and holes created. In topological quantum systems composed of multiple particles, there are generally many different (orthogonal) wavefunctions with the same energy describing the state of the system. This is quite unusual since, for most quantum systems, once all locally measurable quantum numbers (position, spin and so on) are specified, then the wavefunction is uniquely defined. But for TQFTs there are additional hidden “topological” degrees of freedom – so there can be several wavefunctions that have exactly the same energy, and look exactly the same to all local measurements, but still represent different quantum states.

To see how this occurs, consider a system containing two identical particles and two identical holes (figure 3a). The crucial thing to realize is that this system can be prepared with (at least) two topologically distinct space–time histories. We will call these two (ket) states |1〉 and |2〉. Their corresponding (bra) states 〈1| and 〈2| are the same as |1〉 and |2〉 but with time reversed.

The next question we need to resolve is whether |1〉 and |2〉 are in fact different quantum states. To demonstrate that they are, we need to calculate their overlap amplitudes, such as 〈1|1〉 and 〈1|2〉, and check that these are different. In other words, we need to show that |1〉 and |2〉 are linearly independent (figure 3b). The procedure for doing this is straightforward: we simply bring together the two corresponding space–time pictures to form a closed knot and then evaluate the Kauffman invariant of the result. Bringing together 〈1| with |1〉, or 〈2| with |2〉, generates two loops, resulting in a Kauffman invariant of d² (in other words, 〈1|1〉 = 〈2|2〉 = d²). However, bringing together 〈1| with |2〉 generates only one loop, giving 〈1|2〉 = d. This immediately tells us that |1〉 and |2〉 must be different quantum states as long as |d| ≠ 1. It is also possible to show that any other, more complicated, way of preparing the configuration of two particles and two holes must be some linear combination of |1〉 and |2〉. We therefore conclude that our two-particle, two-hole system is precisely a two-state quantum system, or a single quantum bit (qubit).
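
The linear-independence claim is easy to verify explicitly. The short sketch below – again my own illustrative calculation, using Python’s sympy – assembles the Gram matrix of the overlaps just quoted and confirms that its determinant only vanishes when d is 0 or ±1.

    # Linear independence of |1> and |2> from their overlap amplitudes
    import sympy as sp

    d = sp.symbols('d')

    # Gram matrix of overlaps from the Kauffman invariant:
    # <1|1> = <2|2> = d**2 (two loops); <1|2> = <2|1> = d (one loop)
    G = sp.Matrix([[d**2, d],
                   [d, d**2]])

    # The two states are linearly independent exactly when det(G) != 0
    print(sp.factor(G.det()))   # -> d**2*(d - 1)*(d + 1),
                                #    non-zero whenever |d| is neither 0 nor 1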

We can use the same technique to calculate, for example, 〈1|braid|1〉 by evaluating the Kauffman invariant of a braided knot (figure 3c). In this case we conclude that the process of braiding particles around each other in space–time generally performs (unitary) transformations on our two-state quantum systems. In other words, we are manipulating the quantum state of our qubit by braiding particles around each other.
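
What these braiding unitaries look like depends on the particular TQFT. As a concrete illustration, the sketch below uses the two elementary braid matrices of the so-called Fibonacci-anyon model, in the standard form given by Bonesteel et al. (see “More about” below). It is an illustrative numerical check of my own, not code from the article, and it verifies that both generators – and hence any braid word built from them – act unitarily on the qubit.

    # Braid generators for a Fibonacci-anyon qubit (after Bonesteel et al.)
    import numpy as np

    phi = (1 + np.sqrt(5)) / 2                  # golden ratio

    # Exchange of the first pair of particles: a diagonal phase
    sigma1 = np.diag([np.exp(-4j * np.pi / 5), np.exp(3j * np.pi / 5)])

    # The basis-change "F matrix"; exchanging the second pair is the same
    # phase viewed in the rotated basis (F is its own inverse)
    F = np.array([[1 / phi, 1 / np.sqrt(phi)],
                  [1 / np.sqrt(phi), -1 / phi]])
    sigma2 = F @ sigma1 @ F

    for name, U in (("sigma1", sigma1), ("sigma2", sigma2)):
        print(name, "unitary:", np.allclose(U.conj().T @ U, np.eye(2)))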

A quantum computer in theory…

Being able to use particle paths to perform mathematical operations raises the interesting possibility of using topological quantum systems as a quantum computer. This idea, known as “topological quantum computation”, is generally credited to Michael Freedman (another Fields medallist) and Alexei Kitaev. One implementation of the general scheme is illustrated in figure 4. In this three-qubit scheme, quartets of particles are pulled from the vacuum, with each quartet representing a single qubit of information. In the parlance of quantum computation, this is called the “initialization” phase. A quantum computation is achieved by dragging the particles around each other to form a particular knot, with different types of braids corresponding to different quantum computations. Finally, one makes a measurement at the end by, say, attempting to annihilate pairs of particles. Those particles that do not annihilate (because their amplitude for annihilation, as calculated by the Kauffman invariant, is zero) make up the “readout” of the computation. Of course, one could predict the outcome of this experiment by calculating the Kauffman invariant of the knot using the rules laid out in figure 2a. However, applying these recursive rules quickly becomes impossibly complicated for all but the simplest knots, whereas a physical topological quantum system performs this calculation automatically, even for very complicated knots.
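
Staying with the hypothetical Fibonacci-anyon qubit from the earlier sketch, this initialize–braid–read-out sequence can be mimicked in a few lines. This is a toy illustration under the same assumptions as before, not the multi-qubit scheme of figure 4: the qubit starts in the vacuum-pair state |0〉, a braid word is applied as a product of generator matrices, and the read-out is the probability that the pairs annihilate back to the vacuum.

    # Toy initialize-braid-measure sequence for one Fibonacci-anyon qubit
    import numpy as np

    phi = (1 + np.sqrt(5)) / 2
    sigma1 = np.diag([np.exp(-4j * np.pi / 5), np.exp(3j * np.pi / 5)])
    F = np.array([[1 / phi, 1 / np.sqrt(phi)],
                  [1 / np.sqrt(phi), -1 / phi]])
    sigma2 = F @ sigma1 @ F

    state = np.array([1.0, 0.0], dtype=complex)   # initialization: |0>

    # The "program" is a braid word, applied left to right
    for U in (sigma1, sigma2, sigma2, sigma1):
        state = U @ state

    # Readout: probability that the pairs annihilate back to the vacuum
    print("P(annihilate) =", round(abs(state[0])**2, 3))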

While topological quantum computation may sound like an extremely complicated way to achieve the already difficult goal of building a quantum computer, if one starts with the right topological quantum system then it is, at least in principle, just as capable of performing standard quantum-computational tasks (such as Shor’s algorithm for finding the prime factors of large integers) as any other approach. Indeed, it has one very important advantage – again, in principle – over the other schemes that have been proposed.

The advantage of a topological quantum computer stems from the fact that one of the main difficulties in building any quantum computer is protecting it from small errors, particularly those caused by noise and other couplings to the environment. In a topological computer, if noise hits the device in the middle of a computation, a particle may be shaken around a bit (figure 4). However, as long as the overall topology of the braid is unchanged, the computation being performed is also unchanged and no error occurs. In this way, a topological quantum computer is naturally protected from errors – a rather substantial advantage.

…and in practice?

This advantage would, however, be pointless if there were no real, physical systems that obey the rules of TQFTs. But while quantum systems that are described by knot invariants in this way may sound rather exotic, a few such systems are believed to exist. One class comprises 2D p-wave superfluids, including Sr2RuO4 superconducting films, helium-3A superfluid films and the so-called ν = 5/2 and ν = 7/2 quantum Hall systems. Closely related are various (yet to be realized) proposals for creating superconducting structures on the surface of topological insulators and other systems with strong spin–orbit coupling. The experimentally observed ν = 12/5 quantum Hall system is another example, thought to belong to a particularly interesting class of topological quantum systems. Finally, there have been many proposals to realize such systems with ultra-cold atoms in lattices or with rotating ultra-cold bosonic gases.

Although TQFTs may end up being applicable to any one of a number of physical systems, the strongest candidates for a real-life topological quantum system are the above-mentioned ν = 5/2 and 7/2 quantum Hall states. Quantum Hall states can form when electrons are confined to two dimensions, exposed to large magnetic fields and cooled to very low temperatures. Under such conditions, the ratio of the density of electrons to the density of magnetic flux quanta – the filling factor ν – takes certain simple values such as 1, 2, 1/3, 2/5 and so forth. When the ratio is a fraction, the effect is known as the fractional quantum Hall effect. Electrons in quantum Hall states (fractional or otherwise) can flow with no dissipation – a situation analogous to the flow in superfluids and superconductors. Crucially, when these quantum Hall states form, the individual electrons form a uniform-density quantum fluid and one need only keep track of the low-energy “quasiparticles” – quantized lumps of charge density in an otherwise uniform fluid. Since all charges have a tendency to move in orbits in a magnetic field, the quasiparticles can be thought of as vortices in a perfectly dissipationless fluid – which, just as Kelvin predicted, are stable configurations of fluid flow – and it is these quasiparticles that are expected to obey the braiding physics of a TQFT.
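
In symbols, the filling factor is ν = nh/eB, where n is the electron density and B the magnetic field, and the corresponding Hall resistance (discussed below) is quantized at R_xy = h/νe². The sketch below evaluates both for sample parameters of my own choosing – the density and field are illustrative assumptions, not values from the article.

    # Filling factor and quantized Hall resistance (illustrative numbers)
    h = 6.62607015e-34       # Planck constant (J s)
    e = 1.602176634e-19      # electron charge (C)

    n = 3.02e15              # assumed 2D electron density (per m^2)
    B = 5.0                  # assumed perpendicular magnetic field (T)

    nu = n * h / (e * B)     # electrons per magnetic flux quantum
    R_xy = h / (nu * e**2)   # Hall resistance, independent of geometry

    print(f"nu   = {nu:.2f}")          # close to the interesting 5/2 state
    print(f"R_xy = {R_xy:.0f} ohm")    # about (2/5) x h/e^2 = 10,325 ohm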

One reason to be optimistic about these systems is that they have an amazing property: electrical measurements made on them do not depend on certain details of the experiment. In particular, their longitudinal resistance is always zero, while their Hall resistance (figure 5) is always independent of the shape of the sample and of how the electrical contacts are positioned. This independence of detailed geometry is a strong hint that these systems are described by a TQFT. Some preliminary (albeit currently controversial) experimental evidence that these states of matter really are non-trivial TQFTs has recently been reported by Robert Willett and colleagues (arXiv:0911.0345). If their work is convincingly verified, it would show that the physics of knot theory really is realized in quantum systems. This would be an amazing development from the perspective of fundamental physics – but it would also open the door to applying the physics of knots to building a topological quantum computer.

At a glance: Topological quantum computing

  • The first major mathematical studies of knots began after 19th-century physicists suggested that atoms could be made from knotted strands of vortices in the “luminiferous aether”
  • Knot invariants can help determine whether two knots are “topologically equivalent”, meaning they can be deformed into each other smoothly and without cutting any strands
  • In some very special quantum systems, the probability amplitude for a particular process depends only on the topology of that process. In such cases, the amplitude is equal to a knot invariant of the space–time paths traced out by particles during the process
  • It is possible to manipulate the quantum state of such systems by dragging particles around each other to create certain types of knotted patterns in space–time
  • Computations performed in this way would be less vulnerable to noise than other types of quantum computer because the result of a computation depends only on the topology of the braid, not on the precise paths of the particles that formed it

More about: Topological quantum computing

N E Bonesteel et al. 2005 Braid topologies for quantum computation Phys. Rev. Lett. 95 140503
M H Freedman, M J Larsen and Z Wang 2002 A modular functor which is universal for quantum computation Commun. Math. Phys. 227 605
L H Kauffman 2001 Knots and Physics 3rd edn (New York, World Scientific)
A Kitaev 2003 Fault-tolerant quantum computation by anyons Ann. Phys., NY 303 2
A Kitaev 2006 Anyons in an exactly solved model and beyond Ann. Phys., NY 321 2
C Nayak et al. 2008 Non-Abelian anyons and topological quantum computation Rev. Mod. Phys. 80 1083
