
Single atoms go transparent

Making an opaque material transparent might seem like magic. But for well over a decade, physicists have been able to do just that in atomic gases using the phenomenon of electromagnetically induced transparency (EIT). Now, however, this seemingly magical effect has been observed for the first time in single atoms – and in an “artificial” atom consisting of a superconducting loop.

EIT occurs in media that do not normally transmit light at a particular wavelength, but which can be made transparent by applying a second beam of light at a slightly different wavelength. EIT has famously been used to slow down pulses of light so that they are effectively “stored” in a medium – the current record being a pulse stored in an ultracold cloud of atoms for over one second. The ability to store light in this way could find application in optical communication systems or even light-based quantum computers.

EIT requires the atoms to have a specific configuration of three energy levels in which transitions between one specific pair of levels are forbidden. Now Abdufarrukh Abdumalikov and colleagues at the RIKEN Advanced Science Institute near Tokyo and Loughborough University in the UK have created an artificial atom with the necessary energy levels using a superconducting loop about 1 μm in diameter.

Easy as 1, 2, 3

The loop is punctuated by four Josephson junctions – thin insulating layers across which the superconducting electrons must tunnel. A magnetic field applied to the loop causes a persistent current to flow, and this current is quantized into discrete values with different energies. Transitions between the energy levels are made via the absorption or emission of microwaves, which are guided to and from the artificial atom using a tiny waveguide.

The team focused on the three lowest energy levels (1, 2 and 3 in ascending order), which are arranged such that transitions between levels 1 and 3 are forbidden – but 1–2 and 2–3 are allowed. When “probe” microwaves with energy equal to the 1–2 transition are fired at the artificial atom, they cause the system to oscillate rapidly between those two levels. This “Rabi oscillation” results in most of the microwaves being reflected from the atom.

Towards switchable mirrors

EIT is achieved by firing a second beam of “control” microwaves – with energy equal to the 2–3 transition – at the atom. This causes a second Rabi oscillation. The two oscillations interfere destructively, allowing the probe light to be transmitted.
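This destructive interference can be captured with the textbook weak-probe formula for a three-level system. The sketch below uses illustrative parameter values (not measured values from the RIKEN device) to show how the absorption on the 1–2 transition collapses when the control field is switched on:

```python
# Weak-probe absorption of a three-level system (standard EIT result;
# the decay rates and Rabi frequencies below are illustrative, not
# those of the RIKEN experiment).
def probe_absorption(delta, omega_c, gamma_12=1.0, gamma_13=0.01):
    """Im(chi), in arbitrary units, versus probe detuning delta.

    omega_c  -- Rabi frequency of the control field (2-3 transition)
    gamma_12 -- decay rate of the allowed 1-2 coherence
    gamma_13 -- residual decay of the nominally forbidden 1-3 coherence
    """
    chi = 1j * (gamma_13 - 1j * delta) / (
        (gamma_12 - 1j * delta) * (gamma_13 - 1j * delta) + (omega_c / 2) ** 2
    )
    return chi.imag

# Control off: strong absorption on resonance.
# Control on: the two excitation pathways interfere destructively.
print(probe_absorption(0.0, 0.0))  # ~1.0 (opaque)
print(probe_absorption(0.0, 2.0))  # ~0.01 (transparent)
```

With the control off, absorption peaks at the 1–2 resonance; with it on, a transparency window opens at zero detuning, flanked by two Autler–Townes absorption peaks at detunings of ±Ωc/2.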

Abdumalikov and colleagues put their device to the test by measuring the transmission of probe microwaves through the artificial atom while decreasing the intensity of the control beam. They found that the probe transmission dropped by 96% when the control beam was reduced to zero.

The team believes that the device could find use as a switchable mirror for microwaves – and if extended to operate optical wavelengths, it could find use in photonic quantum information processing systems.

‘A great step forward’

Suzanne Gildert of Birmingham University described the work as “a great step forward” in the development of quantum information technology. “I am of the opinion that superconducting devices are one of the most promising (if not the only) path to scalable quantum information processors,” she told physicsworld.com. “This development demonstrates a potentially new mechanism of control in quantum optics circuitry which is compatible with some existing superconducting device designs (Josephson-junction based qubits).”

Hans Mooij at Delft University of Technology added, “This is a beautiful experiment, the results are elegant and clear. I think it can be a very important development as it allows the fast control of microwave signals on the chip.”

The research is reported in the preprint arXiv:1004.2306 and will be published in Physical Review Letters.

Single-atom EIT

Meanwhile, Martin Mücke and colleagues at the Max Planck Institute for Quantum Optics in Garching, Germany, have observed EIT in just one atom of rubidium. The atom was isolated in a magneto-optical trap using a combination of laser light and magnetic fields. The team focused on transitions between three hyperfine atomic states, which involve the emission or absorption of light and were chosen because one transition is forbidden.

When both the probe and control light were shone on the atom, the probe light was transmitted through the trap. However, when the control beam was switched off, Mücke and colleagues saw a 20% drop in transmission. The team then investigated the effect of adding more atoms to the cavity, eventually finding a huge 60% fall in transmission when seven atoms were used.

This work is described in the preprint arXiv:1004.2442.

Flight of the fruit fly

Aerial trickery revealed in the shadow play

By James Dacey

The fruit fly, like many winged insects, has to work very hard to stay in the air given the tiny size of its wings. To generate the vertical force necessary to maintain flight, it must beat its wings hundreds of times every second. But at these speeds and torques, how on earth does the fragile fruit fly manage to control its flight to make those sharp turns and pull off those difficult aerial manoeuvres?

Well, the answer, according to a group of researchers at Cornell University in the US, lies in a gentle, passive movement of the fly’s wings.

The Cornell research team has homed in on the turning kinematics of fruit flies by filming the insects as they fly around within a confined space. The motion was captured using three synchronized cameras focused along orthogonal axes – x, y and z – recording 8000 frames per second, or about 35 frames for each wing beat.

Then comes the clever bit – converting these 2D snapshots into a full 3D reconstruction of the insect’s flight motion. This was achieved using a technique known as Hull Reconstruction Motion Tracking (HRMT), which merged three separate images, one from each camera, at distinct time steps. Attila Bergou, a member of the team, says that this was like painting three silhouettes from the three cameras onto the faces of a box. “The volume you get by using a cookie cutter to cut out the shadows and peeling all but the centre away is the visual hull,” he explains.
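Bergou’s “cookie cutter” picture can be reproduced in a few lines: back-project each binary silhouette through a voxel grid and keep only the voxels that fall inside all three shadows. This toy sketch – with a sphere standing in for the insect and idealized orthographic views, not the team’s actual HRMT code – shows the idea:

```python
import numpy as np

# Toy visual-hull carving from three orthogonal silhouettes.
n = 64
ax = np.linspace(-1, 1, n)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
obj = X**2 + Y**2 + Z**2 <= 0.5**2   # the "insect": a sphere

# Orthographic silhouettes (shadows) along the x, y and z axes
sil_x = obj.any(axis=0)   # shadow cast on the y-z plane
sil_y = obj.any(axis=1)   # shadow cast on the x-z plane
sil_z = obj.any(axis=2)   # shadow cast on the x-y plane

# Carve: keep only voxels whose three projections all fall in shadow
hull = sil_x[None, :, :] & sil_y[:, None, :] & sil_z[:, :, None]

# Carving only removes voxels that some view proves empty, so the
# hull always contains the true object
assert np.all(hull[obj])
```

Because each view can only prove voxels empty, the visual hull always contains the true object – here it is the intersection of three cylinders, slightly fatter than the sphere it encloses.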

What they found is that the fly does something very smart. By allowing aspects of its wing motion to be passively dictated by aerodynamic and inertial forces, the fly is able to control its flight through the air in a very simple and elegant manner. Bergou says that the fly does this without “thinking”, comparing the motion to the wiggle of a boat’s oar as it cuts through the water.

Bergou believes that the manoeuvrability and efficiency of flapping flight at small length scales will be of interest to aircraft engineers. “There is a large amount of interest in the development of micro-air vehicles that use such flapping strokes to fly,” he says.

This research is documented in a new paper in Physical Review Letters.

The day Einstein died

By Michael Banks

It is amazing how far a bottle of scotch can take you.

On 18 April 1955 photographer Ralph Morse got a call early in the morning from an editor at LIFE magazine to go to Princeton.

Morse, who worked for the magazine for decades, was told to cover the news that Albert Einstein had died of heart failure at Princeton Hospital.

Armed with his camera and a case of scotch, Morse travelled 90 miles from his home to Princeton, New Jersey. But instead of going straight to the hospital, which was flooded with reporters, Morse drove to Einstein’s office at the Institute for Advanced Study in Princeton.

After offering a superintendent some scotch, Morse gained access to Einstein’s office just as the physicist had left it, where he took a now-iconic image of his desk.

Last week, LIFE magazine published 10 other previously unseen photographs taken on the day. Most of the pictures were taken at the service, which was held on the afternoon of 18 April at Ewing Crematorium in Trenton, 20 km south of Princeton.

Morse located the service after workers at a cemetery in Princeton told him where it was being held – all with the help of a little scotch.

Dwarf planets are not space potatoes

In 2006 there was an outcry from many astronomers when Pluto was stripped of its planetary status and renamed a dwarf planet. The aggrieved feel that the distinction is rather arbitrary, especially as it is difficult to distinguish dwarf planets from other bodies in the solar system. Now, however, a pair of researchers is offering a more rigorous definition by calculating the lower limit on the size of dwarf planets for the first time.

The IAU’s definition of a planet is a celestial body that meets three strict criteria. First, it must be in orbit around the Sun. Second, it must have sufficient mass that its self-gravity overcomes rigid-body forces, so that it assumes a nearly round shape. Finally, it must also have cleared the neighbourhood around its orbit by drawing in other space material with its gravitational field.

A dwarf planet meets all of these criteria except the last. Indeed, this was the downfall of Pluto, whose orbital path overlaps with other objects such as asteroids and the planet Neptune.

This categorization, however, does not sit happily with many astronomers who point out that Neptune also fails the test because of its overlap with Pluto. Furthermore, there is no agreement on how small dwarf planets can be, making it difficult to estimate the number of dwarf planets in the solar system.

Potato radius

In this latest research, Charles Lineweaver and Marc Norman at the Australian National University address this issue by deriving from first principles a lower limit on the radius of protoplanets. They calculate, using their new equation, that asteroids must have a radius of at least 300 km and icy moons must have a radius above 200 km for self-gravity to dominate and create spherical bodies. Below this radius, a balance between gravitational and electronic forces can create all sorts of shapes referred to as rounded potatoes.
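The flavour of such a calculation can be seen in the classic order-of-magnitude balance between central pressure and material strength. Note that this is a generic textbook estimate with assumed strength values, not Lineweaver and Norman’s actual equation:

```python
import math

# A body becomes round roughly when the central pressure from
# self-gravity, P_c ~ (2*pi/3) * G * rho^2 * R^2, exceeds the
# material's yield strength S.  The strengths below are assumed,
# representative values, not figures from the paper.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def potato_radius(rho, strength):
    """Radius (m) at which central pressure matches yield strength."""
    return math.sqrt(3 * strength / (2 * math.pi * G * rho**2))

r_rock = potato_radius(rho=3000.0, strength=1e8)  # rock, S ~ 100 MPa
r_ice = potato_radius(rho=1000.0, strength=5e6)   # ice,  S ~ 5 MPa
print(f"rock: {r_rock/1e3:.0f} km, ice: {r_ice/1e3:.0f} km")
```

With these assumed strengths the balance lands near the quoted thresholds – of order 300 km for rocky bodies and 200 km for icy ones.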

The new categorization increases the number of bodies orbiting beyond Neptune that should now be classified as “dwarf planets”. Previously, astronomers had known the size of many of these bodies, but not whether they were spherical. “Measuring the shape of objects as a function of size can help us determine how hot these objects were when their shapes were set early in their formation,” says Lineweaver.

This research is published on the arXiv preprint server.

Physicists find a particle accelerator in the sky

The first evidence that thunderstorms can function as huge natural particle accelerators has been collected by an international team of researchers.

In a presentation at a meeting of the Royal Astronomical Society in Glasgow last week, Martin Füllekrug of Bath University described how the team detected radio waves coinciding with the appearance of “sprites” – glowing orbs that occasionally flicker into existence above thunderstorms. The radio waves suggest the sprites can accelerate nearby electrons, creating a beam with the same power as a small nuclear power plant.

“The discovery of the particle accelerator allows [one] to apply the knowledge gained in particle physics to the real world, and put the expected consequences to experimental testing,” Füllekrug told physicsworld.com.

An old idea

The idea that natural particle accelerators exist just kilometres above our heads dates back to 1925, when the UK physicist and Nobel laureate Charles Wilson investigated the effects of a thundercloud’s electric field. Wilson claimed that the electric field would cause an electrical breakdown of the Earth’s atmosphere above the cloud, leading to transient phenomena such as sprites.

These sprites, physicists suggested, would do more than just light up the sky. As highly energetic particles or “cosmic rays” from space bombard our atmosphere, they strip air molecules of their outer electrons. In the presence of a sprite’s electric field, these electrons could be forced upward in a narrow beam from the troposphere to near-Earth space. Moreover, the changing electron current would, via Maxwell’s equations, produce electromagnetic waves in the radio-frequency range.

In 1998 Füllekrug’s colleague Robert Roussel-Dupré of Los Alamos National Laboratory in New Mexico, US, used a supercomputer to simulate these radio waves. The simulations predicted they would come in pulses with a fairly flat spectrum – contrary to the electromagnetic spectrum of the lightning itself, which increases at lower frequencies.

Predictions confirmed

In 2008, while a group of European scientists timed the arrival of sprites from a mountain top in the French Pyrenees, Füllekrug was on the ground with a purpose-built radio-wave detector. The signals he detected coincided with the sprites and matched the characteristics of Roussel-Dupré’s predictions.

“It’s intriguing to see that nature creates particle accelerators just a few miles above our heads,” says Füllekrug, adding: “They provide a fascinating example of the interaction between the Earth and the wider universe.”

Füllekrug notes that he has no particular applications in mind for a sky-based particle accelerator, although he believes there may be wider implications for science. Researchers have many questions about the middle atmosphere because it is so difficult to set up observational platforms there. But by employing what physicists have learned about how such electron beams interact with matter, researchers could use this phenomenon to study this part of the atmosphere.

Indeed, we might be hearing a lot more about natural particle accelerators in the near future. The IBUKI satellite from Japan could soon be looking at the movement of charged particles in the atmosphere. In the next few years several missions – including CHIBIS from Russia and TARANIS from France – should provide more data about these accelerators.

The research is published in the Journal of Geophysical Research.

Where has all the heat gone?

Two leading climate scientists have urged their colleagues to find the growing amount of “missing energy” that seems to be eluding climate sensors.

In a commentary in today’s issue of Science, Kevin Trenberth and John Fasullo of the National Center for Atmospheric Research in Colorado, US, identify a large and growing amount of solar energy that appears to have been absorbed by the Earth – but has yet to turn up in terrestrial measurements.

Since 2001 scientists have used satellites to compare the amount of solar energy being absorbed by the Earth to the amount of infrared energy escaping from our planet. And just as predicted by the theory of manmade climate change, the amount of energy retained by the Earth has increased along with greenhouse-gas concentrations.

Losing energy since 2004

At first this extra energy seems to have boosted temperatures down here on Earth. Then something unexplained happened in about 2004 – and since then terrestrial measurements suggest that the planet is losing energy.

So are the satellites wrong? While Trenberth and Fasullo say that the satellites are not good enough to give accurate measurements of the net energy itself, they claim that the instruments are “sufficiently stable” to track changes in net energy, which are the important quantity.

Trenberth told physicsworld.com that the discrepancy probably lies in the environment’s largest heat reservoir. “I would say that the missing heat is mainly in the ocean,” he argues.

Much of our understanding of how the oceans absorb energy comes from over 3000 “Argo floats” that gather temperature data at depths of up to 2000 m. However, Trenberth says he thinks that “oceanographers are fairly new at processing this kind of data and are still learning how to do it right”. He also points out that some of the floats deployed in the Atlantic have been problematic.

Lurking deep in the ocean

Despite the challenges involved in analysing the Argo data, Trenberth points to a recent study by Karina von Schuckmann and colleagues at CNRS in Plouzané, France, that looks at Argo data gathered in 2003–2008. Unlike most calculations of energy changes in the oceans – which consider temperatures down to 700 m – the CNRS team looked all the way down to 2000 m. At these greater depths they appear to have found a significant chunk of the missing heat.

Trenberth believes that it is crucial to understand when this energy will return to the upper ocean, where it would have a significant effect on climate. Scientists already know that the El Niño–Southern Oscillation involves the absorption of solar energy by the Pacific Ocean during “La Niña” years and its release into the atmosphere during “El Niño” years – leading to significant changes in weather patterns in the Americas.

An El Niño began in 2009 and looks set to continue in 2010. Trenberth believes that it might result in much of the missing energy resurfacing – but adds that current data gathering and analysis techniques mean that it could be a year or two before we know. “One can argue that we should develop a system to do this in closer to real time as part of the new climate services,” he said.

Satellite image of ash cloud from Icelandic volcano

Update: Here is the latest image from the European Space Agency’s Envisat satellite, taken yesterday afternoon. A plume of brownish-grey ash from the Eyjafjallajökull volcano can be seen leaving Iceland in a roughly south-easterly direction.

(Edited 20 April 2010 – the original story follows below.)

New satellite image of ash spewing from Iceland’s volcano.
(Image acquired 19 April 2010, 1:45 PM) Credit: ESA

Volcanic ash cloud as viewed from space.
(Image acquired 15 April 2010, 12:25 PM) Credit: ESA

By Louise Mayor

This image, taken yesterday, shows the extent of the volcanic ash that has been spewing out of Iceland’s Eyjafjallajökull volcano since Wednesday. The ash, which contains tiny particles of rock and glass, can be seen as a grey streak in the upper half of the image, being swept by winds high in the atmosphere towards the rest of western Europe.

In the bottom-right of the image you can just about make out the Republic of Ireland and the UK, where all but a few individually permitted flights have been grounded for a second day running as a result of the ash cloud.

The picture was acquired by the Medium Resolution Imaging Spectrometer (MERIS) on board the European Space Agency’s Envisat satellite. Launched in 2002, the satellite’s primary purpose is to image the colour of the Earth’s oceans using 15 spectral bands over the 390–1040 nm range. These images reveal characteristics of water such as its concentration of chlorophyll, which can then be used to understand the role of our oceans in the carbon cycle.

Life in the new energy regime

Physicists chatter excitedly at CERN (Drawing: Georges Boixader)

By James Dacey

A couple of weeks ago, I went to CERN to witness the big moment when the first collisions at 7 TeV occurred within the Large Hadron Collider. The event smashed yet another world record and marked the beginning of the LHC’s physics programme, 18 months after the colossal machine was initially switched on.

Amidst the excitement, I carried out a series of quick-fire interviews with scientists at the four experiments to find out their hopes and aspirations for the coming months.

Not surprisingly, there was a lot of speculation at ATLAS and CMS, the two largest detector experiments, about the hunt for the Higgs boson – the particle predicted by the Standard Model to give mass to matter in the universe.

Reactions were equally excited at the LHCb and ALICE experiments, where physicists were talking freely about some quite frankly mind-boggling questions that they are about to tackle. These include how the universe evolved in its first ten-millionths of a second, and why we are not surrounded by antimatter suns and antimatter galaxies.

In addition to the science, I also found out what it’s like to work at CERN: the conditions; how scientists deal with the competition between collaborations; and the benefits of working in a truly international project.

You can read highlights from these interviews in this newly published feature article.

Obama sets out Mars mission

US President Barack Obama has announced a new direction for NASA that will involve plans to send astronauts to an asteroid by 2025. Speaking yesterday at Florida’s Kennedy Space Center, the launching spot for US manned spaceflights, Obama also called for a new “heavy-lift” rocket design to take astronauts on a mission to orbit Mars by the mid-2030s that will “eventually” be used to take humans to the Martian surface.

In February the Obama administration announced that it was cancelling the Constellation programme – first proposed by George W Bush in 2004 – to develop new rockets that would allow astronauts to return to the Moon by 2020. Critics argued that the decision would surrender US leadership in space and extinguish the country’s vision of exploration. Neil Armstrong, the first man to walk on the Moon, called the decision “devastating” and a waste of the $10bn investment in Constellation and the years of research and development put into the project.

The new plan involves retaining some of Constellation’s technology: NASA will now start to adapt its Orion crew capsule as a kind of “space lifeboat” that will reduce reliance on foreign vehicles to rescue astronauts from the International Space Station (ISS).

Obama also announced that NASA will now invest more than $3bn in research on an advanced heavy-lift rocket for missions to a near-Earth asteroid and Mars, with a design expected to be finalized “no later” than 2015. The rocket, which should be built a few years after that, could be used for a trip to a near-Earth asteroid and then in a separate mission to Mars.

Obama noted that he expects to still “be around” by the time US astronauts land on the red planet. “We will actually reach space faster and more often under this new plan, in ways that will help us improve our technological capacity and lower our costs,” Obama said yesterday. “Nobody is more committed to human exploration of space than I am. But we’ve got to do it in a smart way.”

The new plans also include modernizing Kennedy Space Center as well as upgrading its launch capabilities. That process should create more than 2500 extra jobs in the region, compensating in part for job losses that will occur due to the planned end of the space shuttle programme this year. Obama also called for NASA and other government agencies to develop a plan for economic growth and job creation in the region by 15 August.

In his speech, Obama also laid out where an additional $6bn over the next five years for NASA will be spent. First announced in Obama’s 2011 budget request to Congress, this new money will go on increasing Earth-based observations, extending the life of the ISS by more than five years as well as working with private companies to make getting to space easier and more affordable.

Astronomers develop new planet-hunting tool

Astronomers in the US have invented a new technique to take direct images of planets orbiting distant stars. The breakthrough means that it should now be feasible to see such “exoplanets” with much smaller telescopes than is currently possible. Although the technique has not yet been used to find any new exoplanets, the researchers have confirmed the existence of three known planets orbiting a distant star.

The first planet orbiting another star was found in 1995 and since then astronomers have gone on to discover more than 450 such bodies. Most exoplanets have been detected indirectly by observing their effect on the brightness or motion of their parent stars.

However, the best way of determining the chemical composition of an exoplanet, which could tell us whether it harbours life, is to analyse the spectrum of light travelling directly from the exoplanet to Earth. The problem is that direct detection is very difficult using smaller ground-based telescopes. So far direct images have only been possible with the Hubble Space Telescope and several very large ground-based telescopes.

Size matters

Now, however, Gene Serabyn and colleagues at the Jet Propulsion Laboratory near Los Angeles have come up with a way of imaging exoplanets using much smaller telescopes. Indeed, their measurement with a 1.5 m diameter instrument is as good as one from a much larger, 10 m telescope.

Serabyn’s team began by sharpening an image of a star using adaptive optics to remove most of the distortion that occurs when starlight passes through the Earth’s atmosphere. The resulting image consists of a diffraction pattern comprising a bright central disc surrounded by concentric dark and bright circles – an unavoidable consequence of light passing through the aperture of the telescope.

The problem is that if the star has an exoplanet, its image will be much fainter and can be obscured by this diffraction pattern. Indeed, astronomers are only able to resolve exoplanets orbiting beyond a certain distance from the star because the brightness of the pattern decreases rapidly from the central maximum. This distance is inversely proportional to the size of the telescope’s aperture, which is why larger instruments have to be used.
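To put rough numbers on this scaling: the characteristic angular scale of the diffraction pattern is λ/D. Assuming a near-infrared observing wavelength of 2.2 μm (the article does not state the band), a quick calculation shows why aperture matters:

```python
import math

# Angular scale of the diffraction pattern, lambda/D, converted to
# milliarcseconds.  The 2.2 um wavelength is an assumed K-band value.
ARCSEC_PER_RAD = 180 / math.pi * 3600

def lambda_over_d_mas(wavelength_m, aperture_m):
    """One lambda/D in milliarcseconds for a given aperture."""
    return wavelength_m / aperture_m * ARCSEC_PER_RAD * 1e3

lam = 2.2e-6  # assumed observing wavelength in metres
print(f"10 m:  {lambda_over_d_mas(lam, 10.0):.0f} mas per lambda/D")
print(f"1.5 m: {lambda_over_d_mas(lam, 1.5):.0f} mas per lambda/D")
```

Under this assumption one λ/D is about 45 milliarcseconds for a 10 m telescope but roughly 300 milliarcseconds for a 1.5 m aperture – which, if the band assumption holds, makes the 300 milliarcsecond inner working angle reported below all the more striking.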

Diffraction free

Serabyn and colleagues got around this problem by using a “vortex coronagraph”, which blocks out the light from a star and removes much of the diffraction pattern from the image. Pioneered by team-member Dimitri Mawet and others, the vortex coronagraph is a small glass “phase plate” that applies a spiral phase-shift to light passing through it. The starlight is focused at the very centre of the plate, meaning that it emerges from the other side at a relatively large angle with respect to the axis of the telescope (see diagram).

But because the exoplanet is at a different position than the star, its light is not focused at the very centre of the plate and so emerges at a much smaller angle. The starlight is then removed using a blocker with a central hole, through which the light from the exoplanet can pass.

The team tested their scheme using the 5.1 m diameter Hale telescope on Palomar Mountain in California. Instead of using all the light gathered by the telescope, the team used a reduced aperture of 1.5 m because this allowed the adaptive-optics system to deliver the best possible image.

The team aimed the telescope towards the star HR 8799, which is known to have three exoplanets that had been imaged directly in 2008 by Christian Marois and colleagues at the Herzberg Institute of Astrophysics in Canada. Marois used a 10 m telescope at the Keck observatory in Hawaii and was able to see to within 440 milliarcseconds of the star.

Using their 1.5 m setup, Serabyn and colleagues could also see all three exoplanets – and had a clear view to within 300 milliarcseconds of the star. Marois, who was not involved in the Palomar observation, described the result as “remarkable”, adding that “we can expect great discoveries to be made when we’ll have a similar optimal setup working on the full 8–10 m aperture in the next few years”.

More telescopes

One advantage of the technique is that it could allow many more telescopes to obtain direct images of exoplanets. Indeed, Serabyn believes that it could be used to enhance 50–100 existing instruments. In addition, space telescopes designed to find exoplanets could be made smaller and hence cheaper and easier to deploy.

“Our goal is to someday take snapshots of solar systems, showing all of the planets in their orbits around the star, and to do spectroscopy on all of them,” Serabyn told physicsworld.com. He adds that the team is also talking to several groups developing next-generation adaptive optics about integrating vortex coronagraphs.

The team will return to Palomar this summer, where it will carry out a survey of nearby stars in search of exoplanets.

The work is described in Nature 464 1018.

Copyright © 2026 by IOP Publishing Ltd and individual contributors