
Phoenix reveals Martian permafrost

Polygons similar in appearance to surface patterns in Earth’s arctic regions are among the first features identified by NASA’s Phoenix mission, which touched down on Mars early yesterday morning at 0053 GMT.

The features imply that the landing area around Phoenix has permafrost, which on Earth is known to generate polygonal patterns through continual expansion and contraction. Although polygons had been spotted from space before, those seen at close range appear to be somewhat smaller — 1.5 to 2.5 m across — leading some NASA scientists to suggest that there is a hierarchy of “polygons within polygons”. As yet, there have been no glimpses of surface ice.

Other images taken by Phoenix’s onboard cameras confirm that the spacecraft is in “good health”, having endured a nine-month, 679 million km journey and a tricky landing involving descent engines — the first time this type of landing has been performed successfully since 1976. NASA scientists are relieved that Phoenix did not suffer the fate of its two predecessors — the Mars Climate Orbiter and the Mars Polar Lander — both of which failed in 1999.

Phoenix is now preparing to begin its three-month mission on Mars to investigate the origin of the ground ice, the operation of climate cycles and the possibility of microbial life.

UPDATE 29/05/08: You can listen to a sound recording of Phoenix’s landing here.

New tests of the Copernican Principle proposed

Revolutions in science don’t come along that often, but the book De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Spheres), published in 1543, certainly caused one. The work by Nicolaus Copernicus overthrew the ‘geocentric’ model of the solar system, in which the Earth sits at the centre, and suggested an alternative view in which the Earth revolves around the Sun.

No-one disputes that the Sun is at the centre of our solar system, and few would dispute the idea that we are not at the centre of the universe either. Indeed, this is encapsulated in the ‘Copernican Principle’, which states that the Earth is not in any specially favoured position and which is taken as a fait accompli among researchers. But how can we test it? Two independent teams of physicists think they know how, and argue their cases in back-to-back papers in the journal Physical Review Letters.

In the first paper, Robert Caldwell from Dartmouth College and Albert Stebbins from the Fermi National Accelerator Laboratory in the US explain how the Cosmic Microwave Background (CMB) radiation spectrum — an all-pervasive sea of microwave radiation originating just 380,000 years after the Big Bang — could be used to test whether the Copernican Principle stands (Phys. Rev. Lett. 100 191302).

Cosmic acceleration and dark energy

Cosmologists like Caldwell and Stebbins are interested in the Copernican Principle because it plays an important role in interpreting the observational evidence for cosmic acceleration and dark energy. If the principle is invalid, there may be no need for exotic dark energy at all: to explain the observed acceleration of the universe we would instead have to be living at the centre of a vast ‘void’. Such a void would leave its mark as a distortion of the CMB away from a perfect black body. “This is so fundamental that we need to test it,” explained Caldwell.


To carry out such a test, the pair propose measuring the black-body nature of the CMB more precisely than has been done before. A void would scatter CMB light towards us anisotropically, producing a slight deviation of the CMB spectrum from that of a black body. Further evidence that the CMB is a perfect black body would therefore be evidence that the Copernican Principle holds.
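
To make the “black body” benchmark concrete, the short sketch below (an illustration only, not part of Caldwell and Stebbins’s analysis) evaluates the Planck spectrum at the measured CMB temperature of about 2.725 K; it is tiny deviations from this curve that such measurements would hunt for.

    import numpy as np

    H = 6.626e-34   # Planck constant (J s)
    KB = 1.381e-23  # Boltzmann constant (J/K)
    C = 2.998e8     # speed of light (m/s)

    def planck_brightness(freq_hz, temp_k=2.725):
        """Black-body specific intensity B_nu(T) in W m^-2 Hz^-1 sr^-1."""
        x = H * freq_hz / (KB * temp_k)
        return (2 * H * freq_hz**3 / C**2) / np.expm1(x)

    # For T = 2.725 K the spectrum peaks near 160 GHz
    freqs = np.linspace(30e9, 600e9, 1000)
    spectrum = planck_brightness(freqs)
    print("peak frequency ~ %.0f GHz" % (freqs[spectrum.argmax()] / 1e9))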

However, it is already well established that the CMB is a black body. Indeed, the 2006 Nobel Prize in Physics was awarded to John Mather and George Smoot for showing that the CMB has a black-body spectrum and that it also carries tiny anisotropies. The prize-winning work came from data collected by NASA’s Cosmic Background Explorer (COBE), whose Far Infrared Absolute Spectrophotometer (FIRAS) recorded an essentially perfect black-body spectrum.

Caldwell and Stebbins think that upcoming NASA missions, such as the Absolute Spectrum Polarimeter, will be able to detect possible deviations from black-body behaviour in the CMB that COBE’s FIRAS instrument was not sensitive to. “We need to measure the CMB at different frequencies, which previous missions were not able to do,” said Caldwell.

Another test

In a separate paper, Jean-Philippe Uzan of the Pierre and Marie Curie University in France, along with Chris Clarkson and George Ellis of the University of Cape Town in South Africa, suggests another way to test the Copernican Principle (Phys. Rev. Lett. 100 191303). Their scheme involves measuring the red-shift of galaxies — the shift of their light to longer wavelengths as they recede from us — very precisely over time to see if it changes. The team argues that this red-shift data can be combined with measurements of the galaxies’ distances to infer whether the universe is spatially homogeneous — a tenet of the Copernican Principle.

However, it seems one of the cornerstones of cosmology is not about to be quickly overturned. “I would bet my house now that the results will come out null so the Copernican Principle is valid on the scales we observe,” says Paul Steinhardt, a cosmologist at Princeton University. “But I think the experiments should be done.”

Spin states endure in quantum dot

Some physicists believe that quantum computers of the future will be built from large numbers of quantum dots — tiny pieces of semiconductor, each containing an electron (or hole) in a certain quantum spin state. However, such quantum states are easily destroyed by interference from external noise, and physicists have yet to create quantum dots — or any other system — that are robust enough to be used in a practical quantum computer.

Now, physicists in the UK and Brazil have taken an important step towards the creation of quantum dots with sufficiently robust spin states (Phys. Rev. Lett. 100 197401). Andrew Ramsay of the University of Sheffield and colleagues have shown that they can control the “trion” state (two holes plus one electron) of a single quantum dot using ultrashort laser pulses. The technique allows a large number (up to 10⁵) of logic operations to be performed before the quantum state is destroyed.

Quantum computers will work on the principle that a quantum particle can be in two states at the same time — “spin up” or “spin down” in the case of an electron or hole (a hole is left behind when an electron is excited to higher energy levels within a material). The two spin states represent a logical “1” or a “0”, so N such particles — or quantum bits (qubits) — could be combined or “entangled” to represent 2ᴺ values simultaneously. This would lead to the parallel processing of information on a massive scale not possible with conventional computers.
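
As a rough illustration of that 2ᴺ scaling (a generic Python sketch, not the Sheffield group’s scheme), the snippet below builds the state vector for N qubits, each prepared in a superposition of its two spin states, and prints how many complex amplitudes are needed to describe them.

    import numpy as np

    def n_qubit_register(n):
        """Return the state vector of n qubits, each in an equal
        superposition of 'spin up' (|0>) and 'spin down' (|1>)."""
        plus = np.array([1, 1]) / np.sqrt(2)   # single-qubit superposition
        state = plus
        for _ in range(n - 1):
            state = np.kron(state, plus)       # tensor product adds one qubit
        return state

    for n in (1, 2, 10, 20):
        print(n, "qubits ->", n_qubit_register(n).size, "amplitudes")  # 2**n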

Extremely long coherence times

Semiconductor quantum dots are nanoscale structures in which electrons or holes are confined in all three directions. The dots can contain just one electron or hole each and are promising for use as qubits since information can be stored in the spin state of a single electron or hole. The quantum dot spins also have the potential to be very stable — they could have extremely long “coherence” times of micro- to milliseconds. The challenge, however, is how to control and connect the quantum dots without reducing their stability.

The team’s quantum dot is a disk of the semiconductor indium gallium arsenide, 20 nm in diameter and 3 nm thick, embedded in a photodiode structure. Shining a laser on the device creates an electron-hole pair in the dot. An electric field applied across the structure then causes the electron to tunnel out of the dot, leaving the hole behind in a well-defined spin state. Later, the hole also tunnels out of the dot and is detected as a photocurrent.

A circularly polarized laser pulse then measures the spin state of the hole by trying to create a trion (by generating an additional electron-hole pair). If the laser is right-circularly polarized, for example, a trion forms only if the initial hole is “spin-down”, because the Pauli exclusion principle prevents the two holes from occupying the same quantum state. The creation of a trion can be detected as a change in the photocurrent that is proportional to the occupation of a particular spin state of the hole, explained Ramsay.

Picosecond single qubit manipulations

“This is a new tool for studying the dynamics of a single spin on sub-nanosecond timescales,” he told physicsworld.com. “It will be essential for evaluating the performance of picosecond single qubit manipulations.”

In the short term, Ramsay believes that the technique will provide a “tool-box” for studying the optical control of a single spin, and to explore schemes for creating high-fidelity quantum logic gates for quantum computing. “Our next goal is to observe coherent spin precession of the hole spin in a magnetic field,” he revealed. “We then intend to pursue full coherent optical control of a single spin.”

Cold-fusion demonstration “a success”


On 23 March 1989 Martin Fleischmann of the University of Southampton, UK, and Stanley Pons of the University of Utah, US, announced that they had observed controlled nuclear fusion in a glass jar at room temperature, and — for around a month — the world was under the impression that its energy woes had been remedied. But even as other groups claimed to repeat the pair’s results, sceptical reports began to trickle in. An editorial in Nature predicted that cold fusion would prove unfounded, and a US Department of Energy (DOE) report judged that the experiments did “not provide convincing evidence that useful sources of energy will result from cold fusion.”

This hasn’t prevented a handful of scientists persevering with cold-fusion research. They stand on the sidelines, diligently getting on with their experiments and, every so often, they wave their arms frantically when they think they have made some progress.

Nobody notices, though. Why? These days the mainstream science media wouldn’t touch cold-fusion experiments with a barge pole. They have learnt their lesson from 1989, and now treat “cold fusion” as a byword for bad science. Most scientists* agree, and some even go so far as to brand cold fusion a “pathological science” — science that is plagued by falsehood but practiced nonetheless.

[*CORRECTION 29/05/08: It has been brought to my attention that part of this last sentence appears to be unsubstantiated. After searching through past articles I have to admit that, despite it being written frequently, I can find no factual basis for the claim that “most scientists” think cold fusion is bad science (although public scepticism is evidently rife). However, there have been surveys to suggest that scientific opinion is in fact more divided. According to a 2004 report by the DOE, which you can read here, ten out of 18 scientists thought that the results of cold-fusion experiments to date warranted further investigation.]

There is a reasonable chance that the naysayers are (to some extent) right and that cold-fusion experiments in their current form will not amount to anything. But it’s too easy to be drawn in by the crowd and overlook a genuine breakthrough, which is why I’d like to let you know that one of the handful of diligent cold-fusion practitioners has started waving his arms again. His name is Yoshiaki Arata, a retired (now emeritus) physics professor at Osaka University, Japan. Yesterday, Arata performed a demonstration at Osaka of one of his cold-fusion experiments.

Although I couldn’t attend the demonstration (it was in Japanese, anyway), I know that it was based on reports published here and here. Essentially Arata, together with his co-researcher Yue-Chang Zhang, uses pressure to force deuterium (D) gas into an evacuated cell containing a sample of palladium dispersed in zirconium oxide (ZrO2–Pd). He claims the deuterium is absorbed by the sample in large amounts — producing what he calls dense or “pycno” deuterium — so that the deuterium nuclei become close enough together to fuse.

So, did this method work yesterday? Here’s an email I received from Akito Takahashi, a colleague of Arata’s, this morning:

“Arata’s demonstration…was successfully done. There came about 60 people from universities and companies in Japan and few foreign people. Six major newspapers and two TV [stations] (Asahi, Nikkei, Mainichi, NHK, et al.) were there…Demonstrated live data looked just similar to the data they reported in [the] papers…This showed the method highly reproducible. Arata’s lecture and Q&A were also attractive and active.”

I also received a detailed account from Jed Rothwell, who is editor of the US site LENR (Low Energy Nuclear Reactions) and who has long thought that cold-fusion research shows promise. He said that, after Arata had started the injection of gas, the temperature rose to about 70 °C, which according to Arata was due to both chemical and nuclear reactions. When the gas was shut off, the temperature in the centre of the cell remained significantly warmer than the cell wall for 50 hours. This, according to Arata, was due solely to nuclear fusion.

Rothwell also pointed out that Arata performed three other control experiments: hydrogen with the ZrO2–Pd sample (no lasting heat); deuterium with no ZrO2–Pd sample (no heating at all); and hydrogen with no ZrO2–Pd sample (again, no heating). Nevertheless, Rothwell added that Arata neglected to mention certain details, such as the method of calibration. “His lecture was very difficult to follow, even for native speakers, so I may have overlooked something,” he wrote.

It will be interesting to see what other scientists think of Arata’s demonstration. Last week I got in touch with Augustin McEvoy, a retired condensed-matter physicist who has studied Arata’s previous cold-fusion experiments in detail. He said that he has found “no conclusive evidence of excess heat” before, though he would like to know how this demonstration turned out.

I will update you if and when I get any more information about the demonstration (apparently there might be some videos circulating soon). For now, though, you can form your own opinions about the reliability of cold fusion.

Astronomers watch as star dies

A chance observation using NASA’s Swift satellite has provided the most detailed account yet of a star exploding into a supernova.

Alicia Soderberg of Princeton University, US, happened to be monitoring the aftermath of a month-old supernova with Swift’s X-ray telescope on 9 January this year when she spotted a burst of radiation in the same galaxy. “When I saw this exciting new source, I originally considered that it may be some other flavour of energetic cosmic explosion, unrelated to massive star death,” she told physicsworld.com.

The source, now identified as SN 2008D, marks the first time a star has been caught turning into a “type-Ib” supernova (Nature 453 469).

Several hundred supernovae are recorded every year in the nearer regions of the universe, although the light that typically signals the event to us is generated several days after the initial explosion. The likelihood of two supernovae occurring in the same galaxy in a single month is one in 10,000, and the chance of someone watching during the initial five-minute X-ray burst is smaller still. “I certainly got lucky, but they say luck favours the prepared,” Soderberg says.


Shock wave

Only stars significantly more massive than our Sun turn into supernovae, and of these there are several classes. Type-Ib supernovae are thought to occur when the core of a “Wolf-Rayet” star, which is some 20 times the mass of the Sun, runs out of helium to fuel nuclear fusion, collapses and generates an intense shock wave.

As this shock wave expands outwards, it breaks out through the surface layers into the star’s “wind” of charged particles, whereupon it produces a burst of X-rays. When Soderberg glimpsed this event for SN 2008D on her computer screen, she and her colleague Edo Berger alerted eight other ground- and space-based telescopes to study it. “The trick is knowing where to look and when,” she says. “The X-rays that accompany the explosion are extremely bright but very short lived.”

Although the observations taken of the early shockwave-breakout of SN 2008D reveal little about the explosion mechanism of type-Ib supernovae, they do provide a better understanding of the star’s outer layers, its mass-loss rate and — roughly — its explosion energy, which was about 10⁵¹ erg (equivalent to 10²⁷ one-megaton hydrogen bombs).

“We never observe the stars themselves at such a late point [before the explosion]; the time is too brief,” says Stan Woosley of the University of California at Santa Cruz. “Yet the star is doing very different things in its last few years — oxygen burning the last few months, silicon burning the last few days, and its mass structure could change a lot. Now we know it doesn’t.”


Other events

Perhaps the most fruitful consequence of Soderberg and Berger’s observation is that it gives a precise time when the X-ray emission occurs. This will help physicists look for events related to astronomical explosions, such as gravitational waves and neutrino bursts. It will also allow other core-collapse supernovae to be spotted by looking for the particular X-ray signature.

“An all-sky X-ray telescope could pinpoint hundreds of supernovae as they explode,” says Jens Hjorth of the University of Copenhagen. “No doubt, such a wealth of information would be invaluable in elucidating the nature of supernovae, their progenitors and the detailed physics involved.”

The publication of the supernova X-ray observation by Soderberg and Berger comes just days after Swift — which is perhaps better known for its gamma-ray data — was ranked as the highest-priority orbiting mission in a review by NASA.

Gravity Probe B comes last in NASA review

A ‘senior review’ of NASA’s astrophysics missions has concluded that a satellite that is trying to measure gravitational effects predicted by Einstein’s theory of general relativity should receive no additional funding after this September.


The 15-member panel’s report — obtained by physicsworld.com — was commissioned by NASA to analyse 10 of its astrophysics missions that are currently in orbit around Earth. The review concludes that nine of the 10 missions should be extended as long as “sufficient funding were available”, but notes that Gravity Probe B (GP-B), which was ranked bottom, “failed to reach its goals” and should therefore not receive any more money.

Geodetic and frame-dragging effects

Initially conceived in the 1960s, the mission was launched in 2004 and has cost around $750m. The probe is a collaboration between NASA and Stanford University that aims to measure — using four spherical quartz gyroscopes — two effects predicted by Einstein’s theory of general relativity: the ‘geodetic’ effect, the amount by which the Earth warps the local space-time in which it resides; and the more subtle ‘frame-dragging’ effect, the amount by which the rotating Earth drags its local space-time around with it.
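
For a sense of scale (a back-of-the-envelope estimate of my own, not a figure from the report), the geodetic precession of a gyroscope in GP-B’s roughly 640 km polar orbit can be estimated from the standard de Sitter result of about 3πGM/(c²r) radians per circular orbit.

    import math

    GM_EARTH = 3.986e14        # Earth's gravitational parameter (m^3/s^2)
    C = 2.998e8                # speed of light (m/s)
    R_ORBIT = 6.371e6 + 6.4e5  # orbit radius: Earth radius + ~640 km altitude (m)

    # de Sitter (geodetic) precession per circular orbit, in radians
    per_orbit = 3 * math.pi * GM_EARTH / (C**2 * R_ORBIT)

    orbit_period = 2 * math.pi * math.sqrt(R_ORBIT**3 / GM_EARTH)  # seconds
    orbits_per_year = 3.156e7 / orbit_period

    milliarcsec = per_orbit * orbits_per_year * math.degrees(1) * 3600e3
    print("geodetic precession ~ %.0f milliarcseconds per year" % milliarcsec)
    # roughly 6600 mas/yr, close to the ~6606 mas/yr drift GP-B set out to measure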

The GP-B team, led by Francis Everitt of Stanford University, last year reported successful measurements of the geodetic effect only, with no evidence for the frame-dragging effect. However, not only had NASA’s Cassini mission also measured the geodetic effect, but the report concluded that “the GP-B experiment has been overtaken by events and now only occupies a diminished niche in the field.”

The report says that future missions such as LISA — which will search for gravitational waves predicted by Einstein’s theory of general relativity — will be more powerful, and that it is therefore “difficult to determine whether GP-B can improve our understanding of gravity”.

Unexpected torques

However, Bill Bencze, programme manager for GP-B, who is also based at Stanford University, says “the decision is quite surprising, as we believe we were making good progress in the data analysis.” Although noise from solar flares interrupted the satellite’s observations in 2005, and unexpected torques on the gyroscopes changed their orientation, Bencze is hopeful that the team can still obtain good results and find “firm evidence” of the frame-dragging effect. This would be the first direct evidence of the effect, rather than the indirect evidence provided by NASA’s LAGEOS satellite. The report, however, disagrees that further data analysis will yield results: “it will be difficult if not impossible to rule out overlooked systematics at the level they are trying to reach,” it states.

Indeed, according to Bencze, the team of 10 or so people who are looking through the data will need funds of around $3m to complete the project. “This is trivial compared to other missions on the list,” says Bencze. It is estimated that the Chandra X-ray observatory, placed second in the list (see below), will cost around $50m to keep it operational.

The final ranking of the missions is:

  1. SWIFT
  2. Chandra
  3. GALEX
  4. Suzaku
  5. Warm Spitzer
  6. WMAP
  7. XMM-Newton
  8. INTEGRAL
  9. RXTE
  10. Gravity Probe B

Looking for ET’s neutrino beam

For several decades scientists have been using telescopes to scan the heavens for unnatural-looking radio or optical transmissions coming from intelligent alien life. With this search for extraterrestrial intelligence (SETI) having so far failed to pick up a single signal, however, researchers in the US now believe it is worth extending the search beyond electromagnetic waves and starting to pay attention to neutrinos.

John Learned of the University of Hawaii and colleagues have worked out that advanced alien civilizations could send messages within the Milky Way using neutrinos, and that these messages could be picked up using neutrino detectors currently under construction here on Earth (arXiv:0805.2429).

This may seem like an odd proposal because neutrinos are extremely difficult to detect, since they interact very weakly with ordinary matter. This means that neutrino observatories are hard to build — they require vast amounts of detecting material and must be located deep underground or under sea or ice — and even the most sophisticated detect very few particles.

Low-noise communications

But Learned and colleagues Sandip Pakvasa of the University of Hawaii and Tony Zee at the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara believe that neutrino communications offer several advantages over electromagnetic waves. Radio or optical signals can be blocked by material within the galaxy, for example, and the radiation that does make it through is obscured by numerous sources of electromagnetic noise. Neutrinos, on the other hand, pass through the galaxy virtually unimpeded and, if highly energetic, are extremely rare and therefore do not suffer from background interference.

The US researchers assume that alien neutrino beams would be pulsed and directional, and that the messages would probably be sent in something akin to Morse code – with a varying time interval between pulses used to encode the information. They also believe that an advanced civilization would not use neutrinos with energies of less than about a million electron-volts, in order to avoid any interference from neutrinos produced by natural radioactive decay and solar processes. They suggest that SETI hunters should target a specific energy of 6.3 petaelectron-volts (PeV), which is 6.3 × 10¹⁵ eV. This is the energy at which the “Glashow resonance” takes place, whereby an electron antineutrino interacts with an electron to create a W particle.
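
The quoted resonance energy follows from simple kinematics: an electron antineutrino striking an electron at rest produces an on-shell W boson when the centre-of-mass energy equals the W mass, giving E ≈ m_W²c⁴/(2m_ec²). A quick check of the arithmetic (my own, not taken from the paper):

    # Glashow resonance: anti-nu_e + e- -> W-, resonant when s = (m_W c^2)^2.
    # With the target electron at rest, s ~ 2 * m_e c^2 * E_nu.
    M_W = 80.4e9    # W boson mass in eV
    M_E = 0.511e6   # electron mass in eV

    e_res = M_W**2 / (2 * M_E)
    print("resonance energy ~ %.1f PeV" % (e_res / 1e15))  # ~6.3 PeV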

Enormous amounts of energy

Learned and colleagues have put forward two ways of producing such neutrinos. The first of these involves colliding electrons and positrons at an energy equal to the mass of the Z⁰ particle, a relatively simple process in principle but one that would require enormous amounts of energy – about 3% of the Sun’s power output for neutrinos to be sent over a distance of 3000 light years.

The second approach instead involves firing protons at a target, accelerating the pions that emerge to around 30 PeV, and then separating out the pion decay products (muons and muon neutrinos). This process could in fact be carried out using the power output of proposed thermonuclear power plants, and would have the added advantage of being able to produce both neutrinos and antineutrinos (switching between the two would provide an additional way of encoding messages). Accelerating the pions to such high energies would be a huge challenge but “not wildly implausible for a future civilization”, according to Learned.
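
The figure of roughly 30 PeV makes sense because, in the decay of an ultra-relativistic charged pion, the outgoing neutrino carries on average only about a fifth of the pion’s energy. A rough kinematic estimate (my own, not a number from the paper):

    # pi+ -> mu+ + nu_mu: for an ultra-relativistic pion the lab-frame neutrino
    # energy is spread uniformly between ~0 and (1 - m_mu^2/m_pi^2) * E_pi,
    # so the mean neutrino energy is half of that upper limit.
    M_MU = 105.66   # muon mass, MeV
    M_PI = 139.57   # charged-pion mass, MeV

    fraction_max = 1 - (M_MU / M_PI)**2    # ~0.43
    fraction_mean = fraction_max / 2       # ~0.21

    e_pi = 30.0  # pion energy in PeV, as suggested by Learned and colleagues
    print("mean neutrino energy ~ %.1f PeV" % (fraction_mean * e_pi))  # ~6.4 PeV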

Next-generation neutrino telescopes

As to our ability to intercept such messages, the researchers believe that this will be possible soon using next-generation neutrino telescopes with a detector volume of around 1 km³. These include the IceCube telescope under construction at the South Pole and a possible successor to the ANTARES, NEMO and NESTOR observatories in the Mediterranean. This is a view shared by Francis Halzen, principal investigator of IceCube. Indeed, observations would be clear cut since there are no known natural mechanisms for making neutrinos at 6.3 PeV — detecting two or more of these particles would be a tell-tale sign that they had been artificially produced.

Learned and colleagues believe it is important to keep neutrino telescopes running for extended periods. They point out that extraterrestrial civilizations would have no way of knowing when to transmit, since their messages may take tens of thousands of years to reach their intended recipients (the Milky Way is thought to be some 100,000 light years across) and it would be impossible to predict exactly when a life-friendly planet would become industrialized. Any intelligent beings out there may therefore decide to send messages periodically, and we cannot predict what that period would be, Learned adds. “If there are signals there it will be obvious,” he says. “But we will have to keep looking.”

BEC bubble could measure tiny forces

Two physicists in the US have come up with a way to use a “bubble” of ultracold atoms to measure extremely small forces. The scheme, which has yet to be tested experimentally, involves monitoring the motion of a bubble of one type of atomic gas that is surrounded by another atomic gas. The physicists claim that accelerations as small as 10⁻¹⁰ m/s² could be detected — allowing the system to be used to perform new tests of the gravitational inverse-square law or to study the forces on individual atoms.

The measurements would take place in a Bose-Einstein condensate (BEC) — a gas of bosons (atoms with integer spin) cooled to such low temperatures that they all fall into the same quantum state. Over the past decade physicists have perfected the creation of BECs, using crisscrossing laser beams to trap the ultracold atoms and applied magnetic fields to finely tune the interactions between them.

More recently, physicists have worked out ways to make BECs that contain mixtures of two different types of atoms — say “A” and “B”. By adjusting the magnetic fields such that A and B tend to repel each other, the atoms can be separated into two different phases.

‘Buoyancy’ force

Now, Satyan Bhongale of Rice University and Eddy Timmermans at Los Alamos National Laboratory have proposed a system that involves a phase-separated mixture in which one component (say B) forms a bubble in the other (Phys. Rev. Lett. 100 185301).

Timmermans told physicsworld.com that the bubble would be subject to a force that tended to push the bubble from the middle of the trap to the edge. This “buoyancy” force, he explained, is similar to the force that causes an air bubble to rise in water. The team believes that by adjusting the laser beams that are trapping the BEC, this buoyancy force can be exactly cancelled, causing the bubble to float at the centre of the trap.

The position of the bubble could be monitored by shining two relatively weak laser beams through the bubble such that they intersect at its centre. The slightest movement of the bubble by an external force could be detected by carefully monitoring the laser beams. In some ways, the system is similar to a spirit level, which offsets the buoyancy of a bubble against the force of gravity, leading Bhongale and Timmermans to dub their system a “BEC level”.

Measuring gravity over micrometres

According to Timmermans, the level could measure gravitational acceleration to about one part in 10 billion. While this is on par with existing schemes for testing gravity, such as the torsion pendulum, the BEC level could reach this precision on length scales as small as a micrometre – much smaller than the millimetre distances probed by the best existing experiments. As a result, it could reveal deviations from the familiar inverse-square law of gravitation, which could ultimately help physicists overcome one of the outstanding challenges of physics — how to unify gravity with the three other fundamental forces of nature.
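
Short-range tests of gravity are commonly framed in terms of a Yukawa-type correction to the Newtonian potential, V(r) = −(Gm₁m₂/r)(1 + α exp(−r/λ)); that parameterization is standard in the field, though it is not spelled out in the article. The toy sketch below shows why a micrometre-scale probe matters: for a hypothetical correction with a range λ of one micrometre, the deviation is sizeable at micrometre separations but vanishingly small at the millimetre scales probed so far.

    import math

    G = 6.674e-11  # Newton's gravitational constant (m^3 kg^-1 s^-2)

    def yukawa_potential(r, m1, m2, alpha, lam):
        """Newtonian potential plus a Yukawa-type correction of (hypothetical)
        strength alpha and range lam."""
        return -G * m1 * m2 / r * (1.0 + alpha * math.exp(-r / lam))

    LAM = 1e-6  # toy correction range: one micrometre
    for r in (1e-6, 1e-3):  # micrometre vs millimetre separations
        v_newton = yukawa_potential(r, 1.0, 1.0, 0.0, LAM)  # pure inverse-square
        v_full = yukawa_potential(r, 1.0, 1.0, 1.0, LAM)    # with alpha = 1
        print("r = %.0e m: fractional deviation = %.2e" % (r, v_full / v_newton - 1))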

In addition, Timmermans believes that the BEC level could be used to study the Casimir-Polder forces that are experienced by atoms that are near to a surface. Such forces have proved very difficult to measure but are of interest to nanotechnologists because they appear to play an important role in how atoms are organized on a surface.

While Timmermans is not aware of any experimental groups that are currently trying to build a BEC level, he believes that several groups around the world have the required experimental expertise.

One such group is led by Nobel laureate Carl Wieman at the University of Colorado and recently demonstrated that a BEC of rubidium-85 and rubidium-87 atoms could be separated into two different species (arXiv:0802.2591). Team member Scott Papp (now at Caltech) told physicsworld.com that it is “plausible that phase-separated BECs could be used for force detection”. He added that the Colorado team has already shown that such BECs are sensitive to external forces such as gravity.

However, he also observed that realizing Bhongale and Timmermans’s design in the laboratory would be “difficult, but not impossible”.

Willis Lamb: 1913-2008

Willis Lamb, who won the 1955 Nobel Prize in Physics “for his discoveries concerning the fine structure of the hydrogen spectrum”, died last week at the age of 94.

In 1947, Lamb discovered the famous “shift” in the hydrogen spectrum that bears his name. The Lamb shift provided important experimental evidence for the then emerging theory of quantum electrodynamics (QED).

Lamb was born on 12 July 1913 in Los Angeles, California, and, like many physicists of his generation, he worked on radar technology during the Second World War. After the war, he turned his microwave expertise to the study of the hydrogen atom.

While working at Columbia University in New York, Lamb found that the 2S₁/₂ electron energy level in hydrogen was slightly higher than the 2P₁/₂ energy level. This shift was not predicted by relativistic quantum mechanics, which had been used two decades earlier by Paul Dirac to explain the fine structure of the hydrogen atom.

Instead, the Lamb shift provided crucial evidence for the new theory of QED, which describes the interactions between charged particles in terms of the exchange of photons. Ten years after Lamb’s own Nobel prize, Julian Schwinger and Richard Feynman of the US and Sin-Itiro Tomonaga of Japan shared the 1965 Nobel Prize in Physics for their work on QED — and, in particular, its use in explaining the Lamb shift.

Lamb spent his formative years in California and in 1938 he gained a PhD in nuclear physics from the University of California at Berkeley under the supervision of Robert Oppenheimer. Lamb then joined the physics department at Columbia University, where he did his Nobel-prize work at the Columbia Radiation Laboratory.

Lamb shared the 1955 Nobel Prize with his Columbia colleague Polykarp Kusch, who won for his independent work on using microwave techniques to determine the magnetic moment of the electron.

Lamb left Columbia in 1951 for Stanford University in California and over the next 22 years he held positions at Harvard, Yale and Oxford. In 1974, Lamb joined the School of Optical Sciences at the University of Arizona, where he remained until his retirement in 2002.

Lamb died on 15 May 2008 in Tucson, Arizona, and is survived by his wife, Elsie, and his brother, Perry.

Information ‘not lost’ in black holes

The “information paradox” surrounding black holes has sucked in many noteworthy physicists over the years. For more than three decades Stephen Hawking of Cambridge University in the UK insisted that any information associated with particles swallowed by black holes is forever lost, despite this going against the rule of quantum mechanics that information cannot be destroyed.

When, four years ago, Hawking famously made a volte-face — claiming that information can be recovered after all — not everyone was convinced. “The general view is that [Hawking’s] argument is not sufficiently detailed,” says Abhay Ashtekar at Penn State University in the US.

Now, Ashtekar and colleagues at Penn State claim to have found a more reliable mechanism by which information dragged into the shadows of black holes can be preserved.

Not so black after all

The information paradox first surfaced in the early 1970s when Hawking, building on earlier work by Jacob Bekenstein at the Hebrew University of Jerusalem, suggested that black holes are not totally black. He showed that particle–antiparticle pairs generated at a black hole’s periphery, known as its event horizon, would be separated. One would fall into the black hole while the other would escape, making the black hole appear as a radiating body.


Quantum entanglement demands that the trapped particle would have negative energy and, because of Einstein’s mass-energy equivalence E = mc², negative mass. With each successive negative-energy particle the black hole would therefore steadily lose mass or “evaporate”. Hawking argued that even after a black hole has totally evaporated it would leave behind its central, infinitely dense point known as the singularity, in which information would be lost forever.

The significance of the information paradox came to a head in 1997 when Hawking, together with colleague Kip Thorne at Caltech, US, put this argument forward as a bet with John Preskill, also at Caltech. Preskill believed that, in accordance with quantum mechanics, information loss is impossible because it would prevent the equations governing the process from being reversible. But in 2004 Hawking conceded the bet, saying he now believed that information is returned, although in a disguised state.

Sticking points

Hawking’s revised stance failed to sway other theorists. Aside from the fact that his new theory was based on mathematics that is not obviously relevant to physical space–time, it did not directly address his original argument about the singularity.

The Penn State group, which includes Ashtekar as well as Victor Taveras and Madhavan Varadarajan, claims to have overturned this argument by performing calculations of a black hole model in two dimensions: one space and one time. “In my opinion this remains a very important question to settle,” says Steven Giddings at the University of California in Santa Barbara. “Although Hawking famously conceded his bet, at the time he left his original argument for information loss orphaned but alive. This new work appears to have found improved control over the calculations.”

The advantage of working in two dimensions is that it has allowed Ashtekar’s group to write down exact quantum equations governing the gravity at a black hole, which they can evaluate using two approximations. The first is a “bootstrapping” process, essentially reaching a solution for the equations using a series of better-informed guesses. “Bootstrapping serves to demonstrate that quantum geometry can be perfectly regular even when the classical geometry acquires singularities,” explains Ashtekar.

Second is a “mean field” approximation that finds a solution for the region away from the centre of the black hole. It was using this approximation that Ashtekar’s group discovered the inner region approaching infinite density is much larger than previously thought using classical arguments — large enough to allow the recovery of information (Phys. Rev. Lett. in publication; preprint at arXiv:0801.1811).


‘Not convinced’

Preskill, who accepted Hawking’s 2004 concession even though he was doubtful of Hawking’s theory, is also “not convinced” by the Penn State research — though he notes that he has not yet studied it carefully. “I thought we made a pretty strong case back in 1994 that models of this type exhibit information loss…I don’t see how the observations by Ashtekar et al. change that conclusion, but I may be missing something.” Thorne, who was also dubious of Hawking’s concession at the time, did not want to comment because he is not familiar with this particular field of research.

Other theorists think Ashtekar’s group have made an important development, though they add that the debate is still not over. “After some extended discussions with Abhay, I am not yet convinced that they have shown the information comes out,” says Giddings.

“It is indeed very interesting,” says Seth Lloyd of the Massachusetts Institute of Technology. “It strongly suggests, although it does not prove, that black hole evaporation in one-plus-one dimensions does not destroy information: all information escapes as the black hole evaporates…[but] it is not clear that the derivation would work in three-plus-one dimensions.”
