GLAST goes for blast-off

The universe appears remarkably varied when viewed at different wavelengths. The eye’s sensitivity to the visible region of the electromagnetic spectrum allows us to resolve thousands of stars, individual planets moving slowly across the sky, and even the faint diffuse glow from our own galaxy. Almost all of these photons are produced by hot, glowing objects, and by tuning instruments to higher-frequency electromagnetic radiation such as ultraviolet light and X-rays, we can explore hotter regions of the universe.

But at the highest electromagnetic frequencies — corresponding to gamma rays — the universe starts to look more bizarre. This is because gamma rays are produced not by thermal processes but by collisions between relativistic charged particles and matter or light. Such collisions accelerate or disrupt the particles, thereby causing them to emit high-frequency photons that provide a glimpse of the most extreme astrophysical processes known.

Our understanding of these extreme environments is about to take a giant step forward. This month, NASA is set to launch its Gamma-ray Large Area Space Telescope (GLAST), a four-tonne observatory packed with state-of-the-art particle detectors that will reveal the gamma-ray universe in much richer detail. There are good reasons to expect great things.

The gamma-ray sky

Gamma-ray astronomy can be traced back 50 years to a paper by the late US theoretical astrophysicist Philip Morrison, who argued that most of the visible light upon which traditional astronomy had been based is actually “secondary” emission. For example, the Sun is powered by fusion reactions deep in its core, but the optical radiation that we see is produced not by the primary nuclear reactions themselves but by material that has been heated by them. Because these primary processes tend to occur at energies above a few mega-electron-volts, gamma-ray emission can provide a more direct indication of the underlying astrophysical processes.

It proved harder than anticipated to observe gamma rays; but in the past two decades, gamma-ray astrophysics has produced some amazing results — including the discovery of distant quasars that radiate most of their energy in the gamma-ray range. Gamma rays have a high probability of interacting with matter, which allows us to build instruments that can detect them, but these same instruments are also sensitive to large numbers of charged particles, which adds substantial background noise. Making the observational task even more difficult, gamma rays have such high energies and therefore short wavelengths that we cannot collect and focus them in the way a conventional telescope does with optical radiation. Instead, we require large and heavy instruments in order to contain the energy of the gamma rays.

The first astrophysical gamma rays were detected in the late 1960s by the Orbiting Solar Observatory (OSO-3), which observed a strong emission from the galactic plane in addition to a diffuse signal that filled the sky. In the 1970s the SAS-2 and COS-B satellite instruments, which exploited the same detection technique used in spark chambers, detected dozens of point sources of high-energy gamma rays. Then, in the 1990s, NASA’s EGRET experiment, which was operational from 1991 to 2000 on board the Compton Gamma-Ray Observatory (CGRO), revolutionized “GeV astrophysics” with the detection of several hundred gamma-ray sources.

We now know that the universe contains a rich variety of gamma-ray emitters, including pulsars, supernova remnants, gamma-ray bursts and supermassive black holes that are 10⁶–10¹⁰ times more massive than the Sun. Even the Sun produces gamma rays by accelerating charged particles in solar flares and coronal mass ejections, while our galaxy glows brilliantly with gamma rays due to interactions of high-energy cosmic rays with interstellar gas. One of the key reasons to extend our observations of celestial gamma rays is to look for signatures of as-yet-unknown fundamental physical processes.

As gamma rays cannot penetrate even a few kilometres through the upper atmosphere, GLAST will be ideally placed to reveal the information carried by cosmic gamma rays. However, it is possible to do gamma-ray astronomy from the ground, too. When high-energy gamma rays strike matter in the atmosphere, they turn into electron–positron pairs that lose energy by emitting secondary gamma rays. This process quickly produces a “shower” of electromagnetic particles, and, provided that the energy of the initial cosmic gamma ray is greater than about 100 GeV, generates so much light that ground-based detectors — such as MAGIC in the Canaries, HESS in Namibia, VERITAS in Arizona, CANGAROO in Australia and Milagro in New Mexico — can record this light as well as the remnants of the showers.
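
The growth of such a cascade can be captured, very crudely, with a Heitler-style toy model in which the number of particles doubles every interaction length until the energy per particle falls below a critical value. The sketch below is only an illustration of that scaling argument, with a rough critical energy of about 85 MeV assumed for air; it is not how air-shower experiments actually model their events.

```python
# Toy Heitler-style model of an electromagnetic shower (illustrative only).
# Assumes the particle count doubles each generation and the cascade stops
# once the energy per particle drops below a nominal critical energy in air.

def toy_shower(primary_energy_gev, critical_energy_gev=0.085):
    """Return (number of doubling generations, particle count at shower maximum)."""
    n_particles = 1
    energy_per_particle = primary_energy_gev
    generations = 0
    while energy_per_particle / 2.0 > critical_energy_gev:
        n_particles *= 2              # each particle pair-produces or radiates
        energy_per_particle /= 2.0
        generations += 1
    return generations, n_particles

for e in (10, 100, 1000):  # primary energies in GeV
    gen, n = toy_shower(e)
    print(f"{e:5d} GeV primary -> ~{gen} generations, ~{n} particles at maximum")
```

Even this cartoon shows why only the most energetic primaries produce showers large enough to be picked up from the ground.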

Below energies of about 100 GeV, however, detectors must be placed above the atmosphere — and it is these energies that are of particular interest to GLAST scientists. Since the universe is transparent to gamma rays with energies below about 10 GeV, these photons allow us to study the universe at enormous distances.

In 1992 researchers at the Stanford Linear Accelerator Center and Stanford University in the US realized that advances in solid-state technology, coupled with the industry capability that had developed in anticipation of the Superconducting Super Collider, meant that they could produce an outstanding high-energy gamma-ray telescope. Following a successful NASA study showing that a large silicon-strip detector could be built for spaceflight, the mission — backed by strong support from the astrophysics community — soon became a high priority. Then, in 2000, GLAST was ranked as the highest priority mid-sized mission in a 10-year review carried out by the US National Academy of Sciences — a recommendation that boosted the telescope to the front of NASA’s queue (see Physics World April 2000 p7 print version only).

GLAST’s primary instrument, the Large Area Telescope (LAT), will cover the energy band between 20 MeV and at least 300 GeV. A second instrument, the GLAST Burst Monitor (GBM), will detect transient sources such as gamma-ray bursts and solar flares down to energies of just 8 keV. In particular, GLAST will probe gamma rays in the 10–100 GeV range — a region of the electromagnetic spectrum that is largely unexplored due to limitations in the sensitivity of previous space-based gamma-ray observatories. Furthermore, the energy coverage of ground-based and space-based telescopes will overlap, thus allowing these two types of observatories to work directly together for the first time in order to cover the entire high-energy gamma-ray spectrum.

A laboratory for cosmic science

GLAST should help us to determine how much energy extreme astrophysical sources produce, and therefore tell us about the acceleration mechanisms that generate such high-energy particles. For example, pulsars, which are fast-rotating magnetic neutron stars that emit beams of radio waves from their poles, also emit lots of gamma rays. GLAST will reveal the distribution of gamma-ray energies from these ultra-dense objects, which will tell us about the geometry of the magnetic fields present and about the location of the acceleration sites. Since the large magnetic field near the surface of a pulsar can cause gamma-ray photons to be converted into electron–positron pairs, the pulsar spectra from GLAST will also tell us what the magnetic fields are like in the regions where the gamma rays are emitted.

Gamma rays will also tell us about novel particle-acceleration mechanisms that are far more powerful than anything seen on Earth. Observations of supernova remnants suggest that particles can be accelerated to enormous energies by shocks produced as the blast from an exploding star ploughs into the interstellar medium. While the existence of such shocks is well established, the way in which the particles are accelerated to extreme relativistic energies is much less well understood.

GLAST’s two instruments will also provide us with the complete high-energy spectra of gamma-ray bursts, from a few kilo-electron-volts to hundreds of giga-electron-volts. These bright but distant flashes of gamma rays, which occur at a rate of about one per day, briefly shine as the brightest objects in the universe — yet the total energy released and the nature of the high-energy spectrum have never before been measured. Similarly extreme energies are produced in active galaxies when matter is accelerated to relativistic energies in a jet powered by a supermassive black hole, emitting gamma rays with a power equivalent to that of all the stars in an entire galaxy over all wavelengths. Until now, gamma-ray detectors have not been able to measure these highly variable emissions in any detail over long timescales, but GLAST will allow us to see into these jets, thus revealing their contents and dynamics.

In addition to studying these astrophysical processes, the extreme distances and energies probed by GLAST will allow us to investigate several areas in fundamental physics. One such opportunity is provided by the diffuse gamma-ray cosmic background — a haze of giga-electron-volt gamma rays that theorists currently attribute to cascades from distant tera-electron-volt gamma-ray sources, ultrahigh-energy cosmic rays, and even Hawking radiation emitted by primordial black holes, among other speculative ideas (in addition to the more prosaic theory that they originate from unresolved populations of giga-electron-volt gamma-ray sources). Thanks to its sensitivity to gamma rays with energies less than about 10 GeV, which travel across the universe unhindered because they do not lose energy by interacting with the infrared, optical and ultraviolet photons emitted by stars, GLAST will allow theorists to constrain explanations of the gamma-ray background.

The second area of basic physics that GLAST will let us study concerns one of the most fundamental questions in cosmology: the origin and distribution of dark matter. An important class of theories predicts the existence of weakly interacting massive particles, or WIMPs. In most models, WIMPs may annihilate in pairs, thus producing high-energy particles, including gamma rays. GLAST could be capable of detecting this radiation from annihilation events in the galactic halo, providing unique information about dark matter.

High expectations

Our expectations for GLAST are extremely high, based on what we have learned from previous space missions in gamma-ray astronomy. The predecessor to GLAST — NASA’s EGRET experiment — detected 271 gamma-ray sources, including over 70 active galaxies and six pulsars, thereby greatly advancing our understanding of the high-energy gamma-ray sky. But fewer than half of the EGRET sources have been definitively associated with known objects, which leaves many intriguing puzzles for GLAST to try to solve.

GLAST is designed to operate for five years and may last at least 10, complementing a new generation of ground-based detectors that have come online in the past few years. With so much new information about the universe to be revealed, and in keeping with NASA policy, all GLAST data will be publicly available. The analysis tools will also be made public, thus helping to maximize the scientific return of the mission.

We expect GLAST to have a large impact on many areas of astrophysics. Some of these have been outlined here, but what is most exciting are the surprises: with any luck, the greatest GLAST science has not even been thought of yet.

A detector in space

GLAST, which was designed and built by an international collaboration of high-energy physicists and astrophysicists, comprises two detectors: the Large Area Telescope (LAT) and the GLAST Burst Monitor (GBM). In the former, which is a solid-state improvement on the spark-chamber technology employed by the previous EGRET detector on board NASA’s Compton Gamma-ray Observatory, gamma rays will slam into a layer of high-density material and produce an electron–positron pair. The direction of the ray can be determined by tracking the trajectories of the electron and positron using high-precision silicon-strip detectors. A separate detector, the calorimeter, counts the particles produced in the subsequent electromagnetic shower, thereby providing a measure of the initial gamma-ray energy. Surrounding the tracking detector is a layer of charged-particle sensors, the signals from which will enable gamma rays to be distinguished from the much larger “rain” of charged particles encountered in orbit. In addition to being able to detect gamma rays in the range 20 MeV–300 GeV, the LAT also has a very wide field of view: it captures about 20% of the gamma-ray sky at any time, and can survey the whole sky every three hours (two orbits). This is particularly important for observing the highly variable gamma-ray universe.
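
The logic of that event selection can be summarized in a few lines of code. The sketch below is purely illustrative, with hypothetical field names and thresholds rather than anything from the LAT’s real reconstruction software, but it follows the sequence described above: reject events that fire the charged-particle shield, take the direction from the silicon-strip tracks and the energy from the calorimeter.

```python
# Illustrative event-selection logic for a pair-conversion telescope
# (hypothetical names and thresholds, not the LAT's actual software).

from dataclasses import dataclass

@dataclass
class Event:
    veto_hit: bool           # did the surrounding charged-particle shield fire?
    track_direction: tuple   # unit vector fitted from the silicon-strip hits
    calorimeter_mev: float   # energy deposited by the electromagnetic shower

def classify(event: Event):
    """Return the reconstructed gamma ray, or None if the event is rejected."""
    if event.veto_hit:
        # The anticoincidence shield fired: almost certainly a charged
        # cosmic ray rather than a gamma ray, so discard the event.
        return None
    if event.calorimeter_mev < 20.0:
        return None  # below the nominal 20 MeV threshold quoted in the text
    return {"direction": event.track_direction,
            "energy_mev": event.calorimeter_mev}

print(classify(Event(False, (0.0, 0.0, 1.0), 500.0)))  # accepted gamma ray
print(classify(Event(True, (0.0, 0.0, 1.0), 500.0)))   # vetoed charged particle
```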

GLAST’s other main instrument, the GBM, has such a large field of view that it will be able to detect gamma-ray bursts from over two-thirds of the sky at any one time, so identifying locations for follow-up observations by the LAT. The GBM is composed of two sets of detectors: a dozen sodium-iodide sensors and two cylindrical bismuth-germanate sensors. When gamma rays interact with these crystalline detectors, they produce flashes of visible light, and the relative counting rates in the separate detectors can be used to infer the location of gamma-ray bursts across the sky. The GBM works at a lower energy range than the LAT — it can detect gamma rays down to about 8 keV — which means that together the two instruments provide one of the widest energy-detection ranges of any satellite ever built.
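
How do relative counting rates pin down a direction? Very roughly, each scintillator sees a rate proportional to the cosine of the angle between its viewing axis and the burst, so comparing rates across differently oriented detectors singles out a patch of sky. The sketch below is a toy demonstration of that idea, with invented detector orientations and a brute-force grid search; it is not the GBM’s real localization algorithm.

```python
# Toy burst localization from relative detector rates (illustrative only).
import math

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

# Hypothetical orientations for five of the scintillator detectors.
normals = [unit(v) for v in
           [(1, 0, 0.3), (0, 1, 0.3), (-1, 0, 0.3), (0, -1, 0.3), (0, 0, 1)]]

def expected_rates(source):
    """Relative rate in each detector for a source in the given direction."""
    return [max(0.0, sum(n_i * s_i for n_i, s_i in zip(n, source))) for n in normals]

true_source = unit((0.6, 0.8, 0.2))
observed = expected_rates(true_source)

# Brute-force search over the sky for the direction whose predicted
# relative rates best match the observed ones (least squares).
best, best_err = None, float("inf")
for az_deg in range(0, 360, 2):
    for alt_deg in range(0, 90, 2):
        az, alt = math.radians(az_deg), math.radians(alt_deg)
        s = (math.cos(alt) * math.cos(az), math.cos(alt) * math.sin(az), math.sin(alt))
        err = sum((o - e) ** 2 for o, e in zip(observed, expected_rates(s)))
        if err < best_err:
            best, best_err = s, err

print("true direction  :", tuple(round(x, 2) for x in true_source))
print("fitted direction:", tuple(round(x, 2) for x in best))
```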

Mind the hack

Late in the afternoon of Monday 19 June 2000, the US news website NASA Watch reported details of a classified briefing from NASA to the White House that concerned a potential landmark discovery in planetary science. Within 24 hours the news had spread onto other small websites, and after two days several of the world’s biggest media outlets, including the BBC and USA Today, were running the story on their homepages. Although the details were sketchy, the bottom line was clear: pictures from the Mars Global Surveyor spacecraft implied that the red planet had water on its surface.

The problem was that the stories appeared a little over a week early. In the eyes of the journal Science, in which the research was to be published, and NASA, which was to hold a press conference to announce the findings, NASA Watch had broken what is known as a science embargo — a system that aims to coordinate when the media report on new research.

As a result of the broken embargo, NASA decided to bring forward the press conference, while Science swiftly published the paper online to clarify the “varying degrees of accuracy” of the premature stories. But are the public and other scientists entitled to know about discoveries like this as soon as possible? Or is it right that research is embargoed, so that all journalists have an equal chance to report on it accurately?

Age-old system

Science embargoes date back to the emergence of dedicated science journalism in the 1920s. Around this time, the science editors of newspapers and news agencies began to demand advance copies of research papers in order to get to grips with difficult concepts and so prepare accurate stories. Publishers relented, and by the 1950s most elite journals were distributing preprints under embargo; in other words, stipulating that the editors should not run stories before the research papers were actually published.

Today, the two biggest journals employing embargo systems are Nature, published by Macmillan Publishers in the UK, and Science, published by the American Association for the Advancement of Science (AAAS). Both journals send preprints and summary press releases of upcoming research papers to journalists a little under a week in advance of publication; Science, for example, e-mails about 5400 journalists worldwide every week. In addition, media agencies such as EurekAlert!, also run by the AAAS, and AlphaGalileo, its European competitor, flood journalists’ inboxes daily with embargoed press releases from other journals and research institutions.

There is no legal obligation for journalists to adhere to embargoes. For this reason, the system is often referred to as a “gentleman’s agreement” in that it frees journalists from the pressure of tight deadlines while effectively guaranteeing journal publishers a spread of well-crafted news articles bearing the journal’s name. “I think [the embargo system] helps the media with complicated research,” says Ruth Francis, head of press relations at Nature. “It allows them time to gather information, speak to the researchers and know that they’ve got the story right.”

Not everyone agrees with this view. Some insist that the arbitrary timing of news coverage as dictated by embargoes gives the public the false impression that science is a progression of breakthroughs. Others object to the fact that embargoes ensure maximum coverage for a journal, because they allow slower reporters the time to catch up with those who are more able. “I mostly hate them,” says Dennis Overbye, science editor at the New York Times. “I think they serve the purpose of the journals very well. I’d just be happy to see them abolished.”

Media dissent

Given that embargoes are merely a gentleman’s agreement, where are the journalists who want to evade them? One journalist who prefers not to be the thrall of press officers is David Whitehouse, a freelance science writer who used to be science editor of BBC News Online. In fact, while Whitehouse was at the BBC he wrote one of the embargo-breaking news stories about water on Mars. “Yeah, we got in a bit of trouble about that,” he recalls.

Like other journalists who got wind of the Mars discovery early, Whitehouse was not able to put together an entirely accurate article. For example, he wrote that the images revealed “what appears to be brackish water seeping from beneath the Martian surface”, when in fact the images only showed the remnants of spring activity that could have been two million years old. (To this day there is still no consensus on the existence of surface water on Mars.) This error could be seen as a reason to favour embargoes, but it more likely highlights an awkward caveat of many embargo systems known as the Ingelfinger rule.

The rule, devised in 1969 by Franz Ingelfinger, then editor of the New England Journal of Medicine, states that scientists with papers pending publication must not discuss their work with journalists — essentially to prevent other scientists learning about the research from prior news stories rather than the journal. Nature and Science both employ variants of the Ingelfinger rule and only permit scientists to talk openly with journalists once an advance press release has been issued. This means that, even though Whitehouse and others sourced the information about the Mars discovery without the help of press officers — and so technically were free from the authority of Science’s embargo — they were unable to check facts with the scientists involved.

But Whitehouse says that sometimes he considered breaking embargoes even after he had received the Nature or Science preprint and press release. “You wouldn’t normally break an embargo that you’d agreed to,” he explains. “But if it was a humdinger of a story, you would, and you’d argue about it later. You’d say that the BBC was more important than Nature, [and] if you want to stop sending us your embargoed releases, then it’ll hurt you more than it’ll hurt us.”

Breaking embargoes is nothing new, although major cases are rare. In 1989 the University of Utah in the US invited journalists to a press conference to unveil a new type of room-temperature or “cold” fusion energy generation. But one of the scientists, Martin Fleischmann, unwittingly revealed details beforehand to a reporter from the Financial Times, which led to the UK newspaper running the story a day early. And way back in 1961, newspapers jumped the gun on new results from astrophysicist Martin Ryle that went against Fred Hoyle’s steady-state theory of the universe, in which new matter is supposed to be created endlessly to compensate for its expansion. As Hoyle’s theory was already unpopular with Christians, this led to the decisive headline from the Evening News: “The Bible was right”.

For Whitehouse, the main irritation of embargoes is that they “dragoon” everybody to report science news at the same time, which can only suit the “vested interests” of the journals. “They know that on a certain day, in certain newspapers, in certain outlets, their name is going to be mentioned,” he says. “And that is good for their advertising…I don’t see why research that is being funded by the taxpayer should be manipulated by a private company for its commercial benefit.”

Flawed articles

Naturally, press officers disagree. Ginger Pinholster, director of public programmes at Science, points out that the AAAS is a non-profit organization. She also believes that when one media outlet breaks an embargo, others must “scramble to catch up with the crowd”, sometimes producing flawed articles. “Distortion of research findings can jeopardize [both] public confidence in scientific discovery, and in funding for research, thus hindering advances that benefit society,” she says.

Most scientists contacted by Physics World were also at odds with Whitehouse’s argument. “[Embargoes] do not stop communication between scientists,” insists Andre Geim, a condensed-matter physicist from the University of Manchester in the UK. “They stop scientists making hype before [their work is published].”

Geim is correct to stress that, in general, scientists are not held back by embargo systems: the Ingelfinger rule adopted by Nature and Science still permits scientists to talk to each other, to speak at conferences and to upload preprints of their work to servers such as arXiv (though many are unaware that they can do the last). But the question is whether the public — and indeed scientists in unrelated fields — should also be privy to this circle of communication. In his 2006 book Embargoed Science, US journalist Vincent Kiernan suggests that shielding the public from scientists’ dialogue has the knock-on effect that they misunderstand the “essence” of scientific research, thereby leaving them open to pseudoscience. “The embargo,” he writes, “by promoting an unending stream of coverage of the ‘latest’ research findings, diverts journalists from covering the process of science, with all of its controversies and murkiness.”

Whitehouse also takes this view. “It’s a journalist’s job to reflect what’s going on, not to act as a cheerleader for science.” He adds that none of the reasons given by journal publishers for using embargoes has benefited him at all. “One of the reasons that the embargo was first put in place was to give everyone a level playing field. What journalist wants everybody to have the same chance? It’s ridiculous. Journalists by their nature are competitive and want to get their story out there first.”

No advantages

There is no doubt that the “level playing field” stymies the work of quick journalists by forcing them to wait for the embargo period to tick over. But some journalists contend that it is a small price to pay in order to communicate science accurately worldwide.

“In a lot of developing countries, journalists — even staff journalists — will be paid according to the number of words they get into a newspaper,” explains David Dickson, editor of the website SciDev.Net, which promotes effective science communication in the developing world. “And if the main stories that the news editor is interested in are the political or the sports or the crime stories, then those are the jobs that will get well paid…The cards are stacked against science reporters. If you think of all the hurdles that science journalists in developed countries face, just double them all.”

Dickson goes so far as to encourage trainee press officers to implement embargo systems. “If you’re frantic to be the first to publish, then frankly what you will come up with is not a very well researched story,” he says. As for the argument that the Ingelfinger rule inhibits reporting before publication, Dickson is not swayed. “When do you report on scientific research?” he asks. “When it’s in progress it’s actually rather boring. Why not wait until there’s something to report? That’s far more interesting.”

The truth is that even if the mainstream media were freer to cover more diverse research and incremental advances, they probably would not have the personnel to do so. For example, the Guardian, a UK daily newspaper, has nine science journalists to cover all disciplines. The New York Times has 17. BBC News has six, spread over various TV and radio programmes. In fact, such modest resources might be a symptom of present-day journalism in general. According to the recent book Flat Earth News by Nick Davies, a UK investigative journalist, on average just 20% of stories in quality UK newspapers are generated without the help of press releases or wire services. Davies has even coined a name for the modern practice: “churnalism”.

It is possible that if embargoes were removed, slower sections of the media would have to bow out of the race and switch their attention to the more overlooked journals such as Physical Review Letters (published by the American Physical Society), which has been devoid of anything resembling an embargo system for more than 30 years. Then again, they might simply give up on science reporting altogether. But whether or not embargoes are good for science communication, neither Science nor Nature has any plans to abandon them, so it looks as though they are here to stay. “They serve a lot of purposes,” Dickson continues. “They’re not written in stone. Everybody knows the rules. That’s the way journalism works.”

Mission impossible

Achieving the impossible might turn out to actually be impossible, but in the process of trying we can redefine the possible. It is the difference between ambition and complacency. Again and again — especially towards the end of the 19th century — complacent scientists have made future fools of themselves by proclaiming the impossibility of things such as determining the composition of stars or discovering the ultimate structure of matter. Ambitious authors, on the other hand, write books about “impossible” science. John Horgan tried with The End of Science, and John Barrow with Impossibility: The Limits of Science and the Science of Limits. Now the physicist and science communicator Michio Kaku offers us Physics of the Impossible.

Kaku has already given us one crystal ball with his book Visions: How Science Will Revolutionize the 21st Century and Beyond — aptly published in 1999 — which assessed the immediate future of computing, quantum physics and biomolecular science. Having explored these important frontiers, Kaku now focuses his sights much further ahead. Taking various examples from contemporary science fiction as a guide, he charts the possible future course of scientific and technological exploration.

The book, however, misses a good example of when science was actually influenced by fiction in this way. In Britain in the 1930s, while complacent politicians tried to appease the Nazi regime, imaginative scientists looked at the possibility of a high-energy electromagnetic-radiation weapon to zap enemy aircraft. They quickly found out that a radio wave “death ray” was impossible, but instead discovered an effective way of detecting planes using a technique that would eventually become known as radar.

Physics of the Impossible begins with force fields, which are firmly based in physics, and concludes with the similarly weighty topics of parallel universes, precognition and perpetual-motion machines. What comes in between, however, consists mainly of a menu of geeky technology — ray guns, starships, robots and so on — as well as a dose of telepathy and psychokinesis. So although physics is up there in the title, most of the physics content is confined to the final 100 of the book’s 350 pages.

A book about the impossible should point out that the quantum world already features plenty of “impossible” things, including time travel. Like embarrassing medical conditions, other puzzles and paradoxes of quantum physics used to be banished from view, but the graceful emergence of quantum entanglement from the fog of the Einstein–Podolsky–Rosen paradox, and the implications of the Aharonov–Bohm effect for nanoscience, have changed this. On the other hand, Schrödinger’s disreputable cat is still confined to a conceptual closet. Kaku lets it out and explores the possibility of reaching all kinds of “parallel universes”, where unfamiliar things happen. Elvis still lives in one such universe.

Quantum physics also points the way to achieving teleportation, which also once belonged purely to science fiction. Now photons — and even entire atoms — can be teleported across hundreds of metres. But Kaku remains ambivalent about whether we will ever be able to teleport on the macroscopic scale. The negative energy of the Casimir effect is another quantum peculiarity that has become fact. However, this force only operates over microscopic distances, which seems to rule out using it, or something like it, to drive spaceships carrying humans, not to mention the negative implications that such travel has for those on board.

Whatever the scientific potential of his topic, Kaku invariably turns to the human dimension, which injects pace into the book. Science has taught us, however, that the human scale is arbitrary and fragile. Just because we can think about these things does not mean that we have a privileged place in the universe. If Elvis lives, for example, any topological conduit to his parallel universe would be microscopically small or fiendishly uncomfortable, so Kaku’s dogged insistence on human involvement pushes many of his prospects out of reach.

A slight deficiency with the book is that Kaku includes little on the promising technology of smart materials, although they could be useful for accomplishing his goal of invisibility. Meanwhile, his treatment of custom-built biomolecules and nanobots — which could fight disease or tune physiology — is confined to psychokinesis: the ability to influence material objects using the mind. Looking beyond current technology, the author points out that hitherto reliable extrapolations like Moore’s law cannot continue forever, and replacements for silicon could open up scary new possibilities where computers could take over from people.

Although Kaku mainly sidesteps the grim implications of contemporary arms and weapons, his chapter on ray guns points out that a weapon taming the mechanism of astronomical gamma-ray bursters would make a terrestrial thermonuclear bomb look like a fizzle. An alien civilization could use one to completely wipe us out. But if he is concerned about the future of human civilization, Kaku should perhaps have looked at the possibilities of future science for the environment and climate change. What of the possibility of using physics to control the weather? That is surely more important than alien civilizations with cyborgs or phaser guns.

He also highlights tachyons — unorthodox particles that travel faster than light and that could have been responsible for fundamental mechanisms such as the primordial inflation of the Big Bang, or the Higgs effect, which endows particles with mass. While we are about it, why not also have creationist tachyons capable of producing a universe of astronomical dimensions within a biblical timescale?

On the whole, Kaku’s future physics is mainly a reductionist “bottom-up” synthesis, which ignores collective behaviour, such as superfluidity, that also surely has a message for the future. Each chapter predictably conforms to a template — an introduction based on Star Trek or some other convenient science fiction, the conjecture, and a final hook to the next chapter. Despite these shortcomings, however, Physics of the Impossible is a stimulating and entertaining read, underlining the need for know-all scientists to avoid smug complacency. It should really have been called “Science of the Impossible”, but that title has been pounced on elsewhere.

Once a physicist: Ian Leigh

Why did you originally choose to study physics?

I always found the sciences much more satisfying than the arts — it seemed far tidier and more rewarding to be able to reach a solution to a problem rather than to interpret situations where there was no right answer. I studied maths, physics and computer science at the University of Edinburgh but chose to specialize in physics because it seemed to be so fundamental and to have so much relevance to everyday life.

How much did you enjoy the subject?

I enjoyed the practical side of physics far more than the theoretical side, but even found the theory fascinating when it could be related to real-life situations. That said, I developed a deep admiration for those of my tutors — such as Peter Higgs — who appeared to be so comfortable with theoretical concepts I could never grasp.

What did you do next?

After I graduated in 1979 I took a job at the National Physical Laboratory (NPL), where I researched the development of standards for micro-indentation hardness. The problems encountered in getting accurate and repeatable results on a microscopic scale were enormous because there were so many sources of error in the equipment, the measuring system and the materials themselves. I submitted this research for my PhD thesis, which was examined by the late David Tabor, the undisputed master of this subject.

Why did you move away from physics?

From very early in my career I was given opportunities to develop administrative and policy skills as well as practical scientific expertise. I rapidly became a “jack of all trades” — more politely known as a “technological generalist” — and before long I found myself doing scientific administration and programme formulation rather than working at the lab bench. For example, in the late 1980s I established the LINK programme in nanotechnology, which provided UK government support for projects undertaken jointly by industrial and academic partners. This laid the foundations for some of the work that is taking place in this field today.

How did you end up working for Postwatch?

After many years of technological generalism, I eventually transferred to a purely administrative job in the Department of Trade and Industry (DTI), dealing with their regulatory programme and its impact on business. From there it was a short step to Postwatch, which was sponsored by the DTI.

What does your role there involve?

My job is to try and ensure that consumer needs are at the heart of the debate over the transformation of postal services in the UK and beyond. The postal industry worldwide is evolving very rapidly in response to changing methods of communication and customer behaviour, and traditional monopolists such as Royal Mail are having to adapt. They need not only to improve efficiency by cutting labour costs and introducing new technological solutions, but also to understand the requirements of consumers and provide products that will encourage these customers to continue to value postal services. I also manage the organization’s programme of research on consumer needs. As the only country with a statutory body dedicated to understanding and articulating the needs of postal consumers, the UK is well placed to contribute to current international debates — such as the full liberalization of the European postal market.

How does your physics training help the way that you work?

I’m really good at fixing the photocopier when it jams or needs the toner replacing! More seriously, the disciplined way of thinking and analysis that a physicist develops, and the metrologist’s attention to detail, are useful in any field of work, as is the experience of presenting complex concepts and ideas to a sceptical audience. And although it’s not the result of cutting-edge scientific research, people are often surprised to learn how much technology is actually involved in modern postal operations. That said, the accuracy with which postal operators measure size and weight is not quite up to the standards of NPL!

Do you still keep up to date with any physics?

Apart from trying (and often failing!) to help my son with his sixth-form physics homework, I still take a keen interest in developments at NPL. I also read the science pages in my daily newspaper and look forward to receiving Physics World every month, which is now my most regular contact with the world of physics.

Passing of a legend

John Wheeler, who died last month at the age of 96, was one of the few true giants in physics (p7 print version only). Best known for his work on nuclear fission and general relativity — and for introducing the terms black hole, wormhole and quantum foam — Wheeler was one of the last surviving links with Niels Bohr and Albert Einstein, with whom he famously debated the meaning of quantum mechanics. Wheeler counted Richard Feynman among his PhD students and, like all the best physicists, never turned his back on the enthusiasm of youth. As he wrote in his 1998 autobiography Geons, Black Holes, and Quantum Foam, “Throughout my long career…it has been interaction with young minds that has been my greatest stimulus and my greatest reward.” Helping those minds to achieve great things is sure to be one of Wheeler’s lasting legacies.

The Bohr paradox

In his book Niels Bohr’s Times, the physicist Abraham Pais captures a paradox in his subject’s legacy by quoting three conflicting assessments. Pais cites Max Born, of the first generation of quantum physics, and Werner Heisenberg, of the second, as saying that Bohr had a greater influence on physics and physicists than any other scientist. Yet Pais also reports a distinguished younger colleague asking with puzzlement and scepticism “What did Bohr really do?”.

We can sympathize with that puzzlement. In history books, Bohr’s chief contribution to physics is usually said to be “the Bohr atom” — his application in 1912–13 of the still-recent quantum hypothesis to overcome instabilities in Rutherford’s “solar-system” model of the atom, in which electrons travelled in fixed orbits around a positively charged nucleus. But this brilliant intuitive leap, in which Bohr assembled several puzzling features from insufficient data, was soon superseded by more sophisticated models.

Bohr is also remembered for his intense conversations with some of the founders of quantum mechanics. These include Erwin Schrödinger, whom Bohr browbeat into a (temporary) retraction of his ideas; Heisenberg, who broke down in tears under Bohr’s relentless questioning; and Einstein, with whom Bohr debated for years. Bohr is remembered, too, for “complementarity” — an ordinary-language way of saying that quantum phenomena behave, apparently inconsistently, as waves or particles depending on how the instruments that measure them are set up, and that we need both concepts to fully capture the phenomenon.

He has, though, been mocked for the supposed obscurity of his remarks on this subject and for extending the idea to psychology, anthropology, free will, love and justice. Bohr has also been wrongly blamed for mystical ideas incorrectly ascribed to the “Copenhagen interpretation” of quantum mechanics (a term Heisenberg coined), notably the role of the subjectivity of the observer and the collapse of the wave packet.

Now Bohr is back in focus. Publishing giant Elsevier is this month putting the massive 12-volume Niels Bohr Collected Works online for the first time and has also created a print index for the entire set, which contains Bohr’s extensive correspondence and writings about various aspects of physics and society. Most of volume 10 and much of volumes 6 and 7, for instance, are about complementarity. The time is therefore ripe for re-evaluating Bohr and clarifying the “Bohr paradox”: why is he both revered and underappreciated?

What Bohr did

Bohr practised physics as if he were on a quest. The grail was to fully express the quantum world in a framework of ordinary language and classical concepts. “[I]n the end,” as Michael Frayn has Bohr’s character say in the play Copenhagen, “we have to be able to explain it all to Margrethe” — his wife and amanuensis who serves as the onstage stand-in for the ordinary (i.e. classically thinking) person.

Many physicists, finding the quest irrelevant or impossible, were satisfied with partial explanations — and Heisenberg argued that the mathematics works: that’s enough! Bohr rejected such dodges, and rubbed physicists’ noses in what they did not understand or tried to hide. However, he did not have an answer himself — and he knew it — but had no reason to think one could not be found. His closest approximation was the doctrine of complementarity. While this provoked debate among physicists on the “meaning” of quantum mechanics, the doctrine — and discussion — soon all but vanished.

Why? The best explanation I have heard is advanced by the physicist John H Marburger, who is currently science advisor to US President George Bush. By 1930, Marburger points out, physicists had found a perfectly adequate way of representing classical concepts within the quantum framework using Hilbert (infinite-dimensional) space. Quantum systems, he says, “live” in Hilbert space, and the concepts of position and momentum, for instance, are associated with different sets of coordinate axes that do not line up with each other, thereby resulting in the situation captured in ordinary-language terms by complementarity.

“It’s a clear, logical and consistent way of framing the complementarity issue,” Marburger explained to me. “It clarifies how quantum phenomena are represented in alternative classical ‘pictures’, and it fits in beautifully with the rest of physics. The clarity of this scheme removes much of the mysticism surrounding complementarity. What happened was like a gestalt-switch, from a struggle to view microscopic nature from a classical point of view to an acceptance of the Hilbert-space picture, from which classical concepts emerged naturally. Bohr brokered that transition.”

Thus while Bohr used the notion of complementarity to say that quantum phenomena are both particles and waves — somewhat confusingly, and in ordinary-language terms — the notion of Hilbert space provided an alternate and much more precise framework in which to say that they are neither. Yet the language is abstract, and the closest outsiders can come to grasping it is Bohr’s awkward and imperfect notion.
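
A small numerical toy makes Marburger’s point concrete. In a finite-dimensional stand-in for Hilbert space, “position” and “momentum” amount to two bases related by a Fourier transform, so a state that is perfectly definite in one basis is maximally spread in the other. The sketch below is only a cartoon of that geometry, not anything drawn from Bohr’s own writings.

```python
# Finite-dimensional cartoon of complementarity: two bases related by a
# (unitary) Fourier transform never line up, so sharpness in one means
# complete spread in the other.

import numpy as np

N = 16
state = np.zeros(N, dtype=complex)
state[3] = 1.0                                        # perfectly definite "position"

momentum_amplitudes = np.fft.fft(state) / np.sqrt(N)  # unitary discrete Fourier transform
probabilities = np.abs(momentum_amplitudes) ** 2

print("spread over the momentum basis:", np.round(probabilities, 3))
# Every momentum outcome is equally likely (1/N each): complete definiteness
# in one "picture" means complete indefiniteness in the other.
```

Complementarity, in this picture, is simply the statement that the two sets of coordinate axes can never be made to coincide.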

The critical point

In the first generations of quantum theory, Bohr was revered for leading the quest to keep the field together within a single framework expressible in ordinary language and classical concepts. The Bohr paradox arises because the results of the quest were manifested not in citation indices linked with Bohr’s name, but in an increased integrity of thought that pervaded the entire field, which has proved hard for subsequent generations of physicists — and even historians — to appreciate.

If Bohr’s quest had a specific result, it was the idea of complementarity. But physicists soon found a more effective and satisfying way to represent quantum phenomena in a technical language using coordinate systems in Hilbert space. Scientists need to pursue possible and important paths even if they do not pan out. Bohr’s greatness was to recognize the importance of this quest, and to relentlessly carry it out with insight and passion. If it did not succeed, and if in the end he would not be able to explain it all to Margrethe — for whom it would have to remain esoteric — that was nature’s doing and not Bohr’s failing. It should not diminish our appreciation of his achievement.

The heat is on

Planes travelling from Europe to the west coast of the US usually fly directly over Greenland. Most passengers miss it, but if you have a window seat and keep watch at about the time that the dinner trays are being cleared away, then you may be lucky enough to catch a glimpse of a truly majestic landscape in which massive glaciers, fed from a vast and featureless ice sheet, spill into iceberg-choked fjords. Although your plane will be thousands of metres overhead, these remote glaciers are nonetheless feeling the impact of human activities like air travel. As the temperatures over Greenland rise as a result of climate change, the speed at which many of these glaciers are moving is increasing so rapidly that more ice is being lost from the ice sheet than is being replaced by new snowfall. In other words, the ice sheet is giving up its mass to the oceans, and, as a result, sea levels are rising.

The rate of sea-level rise has startled both scientists and policy-makers enough to make headlines and become embedded in government and international reports. It is easy to see why they are concerned — even a half-metre rise would cause flooding that would affect hundreds of millions of people in low-lying areas. Suddenly, “glacier dynamics” — the physics that controls how fast glaciers flow — has become a subject of international importance.

The 2007 report from the Intergovernmental Panel on Climate Change (IPCC) cites retreating glaciers and rising sea levels as evidence that warming of the climate system is unequivocal. And with enough water stored in the Greenland and Antarctic ice sheets to raise global sea levels by approximately 7 m and 57 m, respectively, being able to predict how these large ice sheets will behave in a warming climate is critical if we are fully to understand the consequences of climate change.
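
The Greenland figure can be checked with back-of-the-envelope arithmetic. The ice volume, ocean area and densities used below are rough textbook-style values assumed for the exercise, not numbers taken from the article; Antarctica is harder to treat this naively because some of its ice already sits below sea level and displaces ocean water.

```python
# Rough sea-level-equivalent check for Greenland (assumed round numbers).

GREENLAND_ICE_KM3 = 2.9e6        # assumed ice volume, cubic kilometres
OCEAN_AREA_M2 = 3.6e14           # assumed global ocean area, square metres
ICE_TO_WATER = 917.0 / 1000.0    # ice density relative to meltwater

water_volume_m3 = GREENLAND_ICE_KM3 * 1e9 * ICE_TO_WATER
print(f"Greenland sea-level equivalent: ~{water_volume_m3 / OCEAN_AREA_M2:.0f} m")
```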

In the May issue of Physics World, Tavi Murray, Ian Rutt and David Vaughan describe how physicists can predict the movement and melting of glaciers.

Auroral light is polarized after all

Fifty years ago an Australian physicist called Bob Duncan reported that light from the Aurora Australis (or Southern Lights) was polarized. Although his discovery could have provided a new way of studying the atmosphere of Earth, other scientists at the time were unconvinced. Duncan’s finding was quickly cast aside and the prevailing wisdom for the last half century has been that such light is not polarized. Now, however, an international team of physicists has made a similar measurement from the Arctic island of Svalbard, which suggests that Duncan was right all along.

The Aurora Borealis (Northern Lights) and Aurora Australis are spectacular displays of light that can be seen at high and middle latitudes. Aurorae occur when charged particles (mostly electrons and protons) ejected by the Sun are focussed and accelerated by the Earth’s magnetic field. These particles are believed to follow helical paths along Earth’s magnetic field lines.

Duncan considered what would happen when such twirling electrons collide with gas atoms about 200 km up in the atmosphere. He proposed that the electrons transfer energy to the atoms, leaving them in a specific excited quantum state. When they return to the ground state, the atoms emit light with a specific polarization through the process of fluorescence. And on one night in 1958, he measured a degree of polarization of 30% in light coming from the southern sky over Tasmania.

Jostled atoms

However, leading physicists of the day challenged Duncan’s finding because, in the process of emitting the light, the atom is expected to remain in the excited state for some time. During this time, it would be jostled about by other gas atoms, which — or so fellow researchers of the time argued — should destroy the polarization. As a result, Duncan’s observation was quickly discredited and forgotten.

Now, Roland Thissen and colleagues at the CNRS Planetary Science Laboratory in Grenoble, France, and collaborators in the Netherlands and Norway have confirmed that auroral light is indeed polarized (Geophys Res Lett 35 L08804). The team focussed on red light emitted from fluorescing oxygen atoms and found that it has a maximum polarization of about 6% when it reached their instrument.

This figure was seen during the “quiet” times between visible aurorae, when light is still being created even though it cannot be seen by the unaided eye. The polarization dropped to about 2% when an aurora was visible.

This drop, according to Thissen, means that Duncan’s critics were at least partially right. During visible aurorae the colliding electrons are thought to have higher energies than at quiet times. This means the electrons transfer more of their energy to the oxygen than at quiet times — and the process of giving up this extra energy before emitting red light tends to “smear” the polarization, said Thissen.

The observation was made using a standard optical technique involving splitting the light from the region of the aurorae into two channels, and passing one channel through a polarization filter. The polarization of the light can be determined by comparing the intensity of light in the filtered and unfiltered channels.
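
A minimal sketch of that comparison, with invented numbers, is given below. It assumes an ideal filter aligned with the polarization axis of the incoming light, in which case an unpolarized beam would put exactly half its intensity into the filtered channel and any excess over that measures the polarized fraction.

```python
# Degree of linear polarization from a filtered and an unfiltered channel,
# assuming the polarizer is aligned with the polarization axis (illustrative).

def degree_of_polarization(i_unfiltered, i_filtered):
    """Fraction of the light that is linearly polarized (0 to 1)."""
    # An unpolarized beam gives i_filtered = 0.5 * i_unfiltered;
    # a fully polarized, aligned beam gives i_filtered = i_unfiltered.
    return 2.0 * i_filtered / i_unfiltered - 1.0

# Invented intensities chosen to reproduce the ~6% figure quoted in the text.
print(round(degree_of_polarization(1000.0, 530.0), 2))   # -> 0.06
```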

This is a standard technique that could be adapted to study the polarization of light from aurorae on planets such as Saturn and Jupiter. It could even be used to examine light coming from “exoplanets” orbiting other stars, says Thissen. This could help astronomers gain a better understanding of both the magnetic fields and atmospheres surrounding these distant worlds.

Here on Earth, polarization studies could lead to a better understanding of quiet times in the aurorae and why the lights can suddenly flare up — often disrupting radio communications and other technologies.

Enigmatic measurement

Sadly, Duncan did not live to see his ideas revived — he died in 2004 — and Thissen said that it was unfortunate that they could not invite him to participate in their research. He said that Duncan’s measurement of 30% polarization remains an “enigma” because it represents the maximum polarization of light emitted from oxygen — something that can be seen in the lab, but should not be seen in light that is created in the atmosphere and then travels over 200 km before being detected.

Dawn of the memristor

As all popular electronics textbooks will verify, any passive circuit can be created with a combination of just three standard components: a resistor, which opposes charge flow; an inductor, which opposes any change in the flow of charge; and a capacitor, which stores charge. But, if research by physicists in the US is anything to go by, the textbooks may have to be amended to include a fourth standard component: a “memristor”.

In simple terms, a memristor “remembers” the amount of charge that has flowed through it and as a result changes its resistance. The effect was predicted in 1971 by electronics engineer Leon Chua, but the only clues that it actually exists have been in the reports of strange “hysteresis” loops in the current–voltage relationships of thin-film devices. This means that when the voltage increases the current follows a different relationship to when the voltage decreases.

“[Scientists] essentially reported this behaviour as a curiosity or a mystery with some attempt at a physical explanation for the observation,” says Stanley Williams of Hewlett Packard Labs in Palo Alto, California. “No-one connected what they were seeing to memristance.” Now, Williams and colleagues from Hewlett Packard have made the first memristors.

Model behaviour

To make their memristors, Williams’s team first considered how memristance might originate at the atomic level. They came up with an analytical model of a memristor that consists of a thin piece of semiconductor containing two different regions: a highly doped region, which has a low resistance, and a zero-doped region, which has a high resistance. When a voltage is applied across the semiconductor, it causes some of the dopants to drift so that the combined resistance changes, thereby producing the characteristic hysteresis effect of memristance.
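
That drifting-boundary picture can be turned into a few lines of simulation. The sketch below loosely follows the published model (the doped fraction of the film grows or shrinks in proportion to the current, and the two regions act as resistors in series), but the parameter values are illustrative rather than measured device values.

```python
# Toy simulation of a two-region memristor driven by a sine wave
# (illustrative parameters, not measured device values).

import math

R_ON, R_OFF = 100.0, 16_000.0   # resistance of fully doped / fully undoped film (ohms)
D = 10e-9                       # film thickness (m)
MU = 1e-14                      # assumed dopant mobility (m^2 V^-1 s^-1)

w = 0.5 * D                     # initial width of the doped region
dt, freq, v0 = 1e-5, 1.0, 1.0   # time step (s), drive frequency (Hz), amplitude (V)

for step in range(200001):
    t = step * dt
    v = v0 * math.sin(2 * math.pi * freq * t)
    resistance = R_ON * (w / D) + R_OFF * (1 - w / D)   # two regions in series
    i = v / resistance
    # The dopant boundary drifts in proportion to the current (clamped to the film).
    w = min(D, max(0.0, w + MU * R_ON / D * i * dt))
    if step % 20000 == 0:
        print(f"t={t:5.2f} s  v={v:+.3f} V  i={i * 1e3:+.3f} mA  R={resistance:8.1f} ohm")
```

Because the resistance at a given voltage depends on the history of the current, sweeping the drive up and down traces out the pinched hysteresis loop that defines a memristor.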

To put this model into practice, Williams’s team attached a layer of doped titanium dioxide to a layer of undoped titanium dioxide. Through current–voltage measurements, they found that it did indeed exhibit the hysteresis effect of memristance (Nature 453 80).

Chua told physicsworld.com that he is “truly impressed” that the Californian team has proved his theory. “Most of the anomalous behaviours that have been widely reported in the nano-electronics literature over the last decade can now be understood as simply the manifestation of memristive dynamics,” he says. “It will herald a paradigm shift not only in engineering, but also in the sciences.”

Williams says his team has already made and tested thousands of memristors, and has even used them in circuits alongside conventional integrated circuits. Because the hysteresis of memristors allows them to operate like a switch, the team are now looking at how they can exploit memristance for tasks normally reserved for digital-logic electronics. These include a new form of non-volatile random access memory, or even a device that can simulate synapses — that is, junctions between neurons — in the brain.

LHC magnets pass test

On April 3 last year, the left-hand side of the Fermilab Today website had a graphical weather forecast depicting storm clouds. It was a fitting metaphor for the mood of the US lab, which had recently discovered that one of the “quadrupole” magnets it supplied the European lab CERN for the Large Hadron Collider (LHC) had failed a preliminary test. On the right-hand side of the website, Pier Oddone, the director of Fermilab, admitted they had taken “a pratfall on the world stage”.

Indeed they had. The failure meant they had to replace all similar magnets with redesigned models and skip the low-energy test runs that were due to take place before winter. It also added to the problems that forced CERN to delay the LHC’s (already repeatedly delayed) start up from May to July this year.

Now, though, everything looks to be well again. On the right-hand side of Fermilab Today, Oddone writes that the first of the replaced magnets has passed the test it failed last year. He writes that the 50 or so scientists, engineers and technicians at CERN who made the repairs deserve “a crown”. And the left-hand side of the website is forecasting sunshine.

The original problem was that the magnets had inadequate support to withstand the forces produced during “quenching”. This is when a magnet warms up above its 1.9 K operating temperature, which could happen, for example, if one of the LHC’s proton beams veers off course. Last Friday the replaced magnet passed the one-hour test designed to simulate quenching.

“Everyone commissioning the LHC,” writes Oddone, “both accelerator and detectors, is racing excitedly towards colliding-beam operation and the great physics results that we can almost taste.”
