
From ray-gun to Blu-ray


There is one particular scene in H G Wells’ 1898 tale The War of the Worlds that, if only I had remembered it, could have helped me to avoid a bad moment in my laser lab in 1980. In the story – published long before lasers came along in 1960 – the Martians wreak destruction on earthlings with a ray that the protagonist calls an “invisible, inevitable sword of heat”, projected as if an “intensely heated finger were drawn…between me and the Martians”. In all but name, Wells was describing an infrared laser emitting an invisible straight-line beam – the same type of laser that, decades later in my lab, burned through a favourite shirt and started on my arm.

Wells’ bold prediction of a destructive beam weapon preceded many others in science fiction. From the 1920s and 1930s, Buck Rogers and Flash Gordon wielded eye-catching art-deco ray-guns in their space adventures as shown in comics and in films. In 1951 the powerful robot Gort projected a ray that neatly disposed of threatening weapons in the film The Day the Earth Stood Still. Such appearances established laser-like devices in the popular mind even before they were invented. But by the time the evil Empire in Star Wars Episode IV: A New Hope (1977) used its Death Star laser to destroy an entire planet, lasers were a thing of fact, not just fiction. Lasers were changing how we live, sometimes in ways so dramatic that one might ask, which is the truth and which the fiction?

Like the fictional science, the real physics behind lasers has its own long history. One essential starting point is 1917, when Einstein, following his brilliant successes with relativity and the theory of the photon, established the idea of stimulated emission, in which a photon induces an excited atom to emit an identical photon. Almost four decades later, in the 1950s, the US physicist Charles Townes used this phenomenon to produce powerful microwaves from a molecular medium held in a cavity. He summarized the basic process – microwave amplification by stimulated emission of radiation – in the acronym “maser”.

After Townes and his colleague Arthur Schawlow proposed a similar scheme for visible light, Theodore Maiman, of the Hughes Research Laboratories in California, made it work. In 1960 he amplified red light within a solid ruby rod to make the first laser. Its name was coined by Gordon Gould, a graduate student working at Columbia University, who took the word “maser” and replaced “microwave” with “light”, and later received patent rights for his own contributions to laser science.

Following Maiman’s demonstration of the first laser there was much excitement and enthusiasm in the field, and the ruby laser was soon followed by the helium–neon (HeNe) laser, invented at Bell Laboratories in 1960. Capable of operating as a small, low-power unit, it produced a steady, bright-red emission at 633 nm. However, an even handier type was discovered two years later when a research group at General Electric saw laser action from an electrical diode made of the semiconductor gallium arsenide. That first laser diode has since grown into a versatile family of small devices that covers a wide range of wavelengths and powers. The diode laser quickly became the most prevalent type of laser, and still is to this day – according to a recent market survey, 733 million of them were sold in 2004.

Better living through lasers

As various types of laser became available, and different uses for them were developed, these devices entered our lives to an extraordinary extent. While Maiman was dismayed that his invention was immediately called a “death ray” in a sensationalist newspaper headline, lasers powerful enough to be used as weapons would not be seen for another 20 years. Indeed, the most widespread versions are compact units typically producing mere milliwatts.

A decade and a half after their invention, HeNe lasers, and then diode lasers, would become the basis of bar-code scanning – the computerized registration of the black and white pattern that identifies a product according to its universal product code (UPC). The idea of automating such data for use in sales and inventory originated in the 1930s, but it was not until 1974 that the first in-service laser scanning of an item with a UPC symbol – a pack of Wrigley’s chewing gum – occurred at a supermarket checkout counter in Ohio. Now used globally in dozens of industries, bar codes are scanned billions of times daily and are claimed to save billions of dollars a year for consumers, retailers and manufacturers alike.

Lasers would also come to dominate the way in which we communicate. They now connect many millions of computers around the world by flashing binary bits into networks of pure-glass optical fibre at rates of terabits per second. Telephone companies began installing optical-fibre infrastructure in the late 1970s and the first transatlantic fibre-optic cable began operating between the US and Europe in 1988, with tens of thousands of kilometres of undersea fibre-optic cabling now in existence worldwide. This global web is driven by laser diodes, which deliver light into fibres with core diameters of a few micrometres at wavelengths that are barely attenuated over long distances. In this role, lasers have become integral to our interconnected world.

As lasers grew in importance, their fictional versions kept pace with – and even enhanced – the reality. Only four years after the laser was invented, the film Goldfinger (1964) featured a memorable scene that had every man in the audience squirming: Sean Connery as James Bond is tied to a solid gold table along which a laser beam moves, vaporizing the gold in its path and heading inexorably toward Bond’s crotch – though as usual, Bond emerges unscathed.

That laser projected red light to add visual drama, but its ability to cut metal foretold the invisible infrared beam of the powerful carbon-dioxide (CO2) laser – the type that once ruined my shirt. Invented in 1964, CO2 lasers emitting hundreds of watts in continuous operation were introduced as industrial cutting tools in the 1970s. Now, kilowatt versions are available for uses such as “remote welding” in the automobile industry, where a laser beam directed by steerable optics can rapidly complete multiple metal spot welds. High-power lasers are suitable for other varied industrial tasks, and even for shelling nuts.

Digital media

Aside from the helpful and practical uses of lasers, what have they done to entertain us? For one thing, lasers can precisely control light waves, allowing sound waves to be recorded as tiny markings in digital format and the sound to be played back with great fidelity. In the late 1970s, Sony and Philips began developing music digitally encoded on shiny plastic “compact discs” (CDs) 12 cm in diameter. The digital bits were represented by micrometre-sized pits etched into the plastic and scanned for playback by a laser diode in a CD player. In retrospect, this new technology deserved to be launched with its own musical fanfare, but the first commercially released CD, in 1982, was 52nd Street by the US rock artist Billy Joel.

In the mid-1990s the CD’s capacity of 74 minutes of music was greatly extended via digital versatile discs or digital video discs (DVDs) that can hold an entire feature-length film. In 2006 Blu-ray discs (BDs) appeared as a new standard that can hold up to 50 gigabytes, which is sufficient to store a film at exceptionally high resolution. The difference between these formats is the laser wavelengths used to write and read them – 780 nm for CDs, 650 nm for DVDs and 405 nm for BDs. The shorter wavelengths give smaller diffraction-limited laser spots, which allow more data to be fitted into a given space.
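
As a rough illustration of that scaling, here is the standard diffraction-limit estimate with each format’s usual pick-up numerical aperture (NA) filled in – the NA values (0.45, 0.60 and 0.85) are standard figures for the three formats, not numbers quoted in this article:

\[
d \approx \frac{\lambda}{2\,\mathrm{NA}}:\qquad
d_{\mathrm{CD}} \approx \frac{780\ \mathrm{nm}}{2\times0.45} \approx 870\ \mathrm{nm},\quad
d_{\mathrm{DVD}} \approx \frac{650\ \mathrm{nm}}{2\times0.60} \approx 540\ \mathrm{nm},\quad
d_{\mathrm{BD}} \approx \frac{405\ \mathrm{nm}}{2\times0.85} \approx 240\ \mathrm{nm}
\]

The factor of roughly 13 in spot area between CD and BD – together with tighter track pitches and better coding – is what turns 74 minutes of music into 50 gigabytes of video.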

Although the download revolution has led to a decline in CD sales – 27% of music revenue last year was from digital downloads – lasers remain essential to our entertainment. They carry music, films and everything that streams over or can be downloaded via the Internet and telecoms channels, depositing them into our computers, smart phones and other digital devices.

Death rays…

Among the films that you might choose to download over the Internet are some in which lasers are portrayed as destructive devices, encouraging negative connotations. In the film Real Genius (1985), a scientist co-opts two brilliant young students to develop an airborne laser assassination weapon for the military and the CIA. The students avenge themselves by sabotaging the laser to heat a huge vat of popcorn, producing a tsunami of popped kernels that bursts open the scientist’s house. The film RoboCop (1987) shows a news report that a malfunctioning US laser in orbit around the Earth has wiped out part of southern California. This was a satirical response to the idea of laser weapons in space, a hotly pursued dream for then US President Ronald Reagan.

The US military was thinking about laser weapons well before high-power industrial CO2 lasers were melting metal. As the Cold War raised fears of all-out conflict with the Soviet Union, the potential for a new hi-tech weapon stimulated the Pentagon to fund laser research even before Maiman’s result. But it was difficult to generate enough beam power within a reasonably sized device – early CO2 lasers with kilowatt outputs were too unwieldy for the battlefield. Eventually, in 1980, the Mid-Infrared Advanced Chemical Laser reached megawatt powers, but was still a massive device. Even worse, absorption and other atmospheric effects made its beam ineffective by the time it reached its target.

That would not be a concern, however, for lasers fired in space to destroy nuclear-tipped intercontinental ballistic missiles (ICBMs) before they re-entered the atmosphere. Development of suitably powerful lasers such as those emitting X-rays became part of the multibillion-dollar anti-ICBM Strategic Defense Initiative (SDI) proposed by Reagan in 1983. Known to the general public and even to scientists and the government as “Star Wars” after the film, the scheme had an undeniably science-fiction flavour. But the US weaponization of space was never realized – by the 1990s technical difficulties and the fall of the Soviet Union had turned laser-weapons development elsewhere. Now it is mostly directed towards smaller weapons such as airborne lasers that have a range of hundreds of kilometres.

…and life rays

While the morality associated with weapons may be debatable, lasers are used in many other areas that are undeniably good, such as medicine. The first medical use of a laser was in 1961, when doctors at Columbia University Medical Center in New York destroyed a tumour on a patient’s retina with a ruby laser. Because a laser beam can enter the eye without injury, ophthalmology has benefited in particular from laser methods, but their versatility has also led to laser diagnosis and treatment in other medical areas.

Using CO2 and other types of lasers with varied wavelengths, power levels and pulse rates, doctors can precisely vaporize tissue, and can also cut tissue while simultaneously cauterizing it to reduce surgical trauma. One example of medical use is LASIK (laser-assisted in situ keratomileusis) surgery in which a laser beam reshapes the cornea to correct faulty vision. By 2007, some 17 million people worldwide had undergone the procedure.

In dermatology, lasers are routinely used to treat benign and malignant skin tumours, and also to provide cosmetic improvements such as removing birthmarks or unwanted tattoos. Other medical uses are as diverse as treating inaccessible brain tumours with laser light guided by a fibre-optic cable, reconstructing damaged or obstructed fallopian tubes and treating herniated discs to relieve lower-back pain, a procedure carried out on 500,000 patients per year in the US.

Yet another noble use of lasers is in basic and applied research. One notable example is the National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory in California. NIF’s 192 ultraviolet laser beams, housed in a stadium-sized, 10-storey building, are designed to deliver a brief laser pulse measured in hundreds of terawatts into a millimetre-scale, deuterium-filled pellet. This is expected to create conditions like those inside a star or a nuclear explosion, allowing the study of both astrophysical processes and nuclear weapons.

A more widely publicized goal is to induce the hydrogen nuclei to fuse into helium, as happens inside the Sun, to produce an enormous energy output. After some 60 years of effort using varied approaches, scientists have yet to achieve fusion power that produces more energy than a power plant would need to operate. If laser fusion were to successfully provide this limitless, non-polluting energy source, that would more than justify the overruns that have brought the cost of NIF to $3.5bn. Although some critics consider laser fusion a long shot, recent work at NIF has realized some of its initial steps, increasing the odds for successful fusion.

Popular culture is also hopeful about the role of lasers in “green” power. Although the film Chain Reaction (1996) badly scrambles the science, it does show a laser releasing vast amounts of clean energy from the hydrogen in water. In Spider-Man 2 (2004), physicist Dr Octavius uses lasers to initiate hydrogen fusion that will supposedly help humanity; unfortunately, this is no advertisement for the benefits of fusion power, for the reaction runs wild and destroys his lab.

Lasers in high and not-so-high culture

Situated between the ultra-powerful lasers meant to excite fusion and the low-power units at checkout counters are lasers with mid-range powers that provide highly visible applications in art and entertainment, as artists quickly realized. A major exhibition of laser art was held at the Cincinnati Art Museum as early as 1969, and in 1971 a sculpture made of laser beams was part of the noted “Art and Technology” show at the Los Angeles County Museum of Art. In 1970 the well-known US artist Bruce Nauman presented “Making Faces”, a series of laser hologram self-portraits, at New York City’s Finch College Museum of Art.

Other artists followed suit in galleries and museums, but lasers have been most evident in larger venues. Beginning in the late 1960s, beam-scanning systems were invented that allowed laser beams to dynamically follow music and trace intricate patterns in space. This led to spectacular shows such as that at the Expo ’70 World’s Fair in Osaka, Japan, and those in planetariums. A favourite type featured “space” music, like that from Star Wars, accompanied by laser effects.

Rock concerts by Pink Floyd and other groups were also known for their laser shows, though these are now tightly regulated because of safety issues. But spectacular works of laser art continue to be mounted, for example the outdoor installations “Photon 999” (2001) and “Quantum Field X3” (2004) created at the Guggenheim Museum in Bilbao, Spain, by Japanese-born artist Hiro Yamagata, and the collaborative Hope Street Project, installed in 2008. This linked together two major cathedrals in Liverpool, UK, by intense laser beams – one highly visible green beam and also several invisible ones – that carried voices and generated ambient music to be heard at both sites.

After 50 years, striking laser displays can still evoke awe, and lasers still carry a science-fiction-ish aura, as demonstrated by hobbyists who fashion mock ray-guns from blue laser diodes. Unfortunately, the mystique also attaches itself to products such as the so-called quantum healing cold laser, whose grandiose title uses scientific jargon to impress would-be customers. Its maker, Scalar Wave Lasers, asserts that its 16 red and infrared laser diodes provide substantial health and rejuvenation benefits. Even the word “laser” has been appropriated to suggest speed or power, such as for the popular Laser class of small sailboats and the Chrysler and Plymouth Laser sports cars sold from the mid-1980s to the early 1990s.

The laser’s distinctive properties have also become enshrined in language. A search of the massive LexisNexis Academic research database (which encompasses thousands of newspapers, wire services, broadcast transcripts and other sources) covering the last two years yields nearly 400 references to phrases such as “laser-like focus” (appearing often enough to be a cliché), “laser-like precision”, “laser-like clarity” and, in a description of Russian Prime Minister Vladimir Putin expressing his displeasure with a particular businessman, “laser-like stare”.

Lasers have significantly influenced both daily life and science. Together with masers, they have been part of research – including work outside laser science itself – that has contributed to more than 10 Nobel prizes, beginning with the 1964 physics prize awarded to Charles Townes, Aleksandr Prokhorov and Nikolay Basov for their fundamental work on masers and lasers. Other related Nobel-prize research includes the invention of holography and the creation of the first Bose–Einstein condensate, which was made by laser cooling a cloud of atoms to ultra-low temperatures. Also, in dozens of applications from Raman spectroscopy to adaptive optics for astronomical telescopes, lasers continually contribute to how science is done. They are also essential for research in such emerging fields as quantum entanglement and slow light.

It is a tribute to the scientific imagination of the laser pioneers, as well as to the literary imagination of writers such as H G Wells, that an old science-fiction idea has come so fully to life. But not even imaginative writers foresaw that Maiman’s invention would change the music business, create glowing art and operate in supermarkets across the globe. In the cultural impact of the laser, at least, truth really does outdo fiction.

Where next for the laser?


Astronomy

Claire Max is an astronomer and director of the Center for Adaptive Optics at the University of California, Santa Cruz, US

We all know that turbulence in the atmosphere makes stars twinkle, but it also severely blurs telescope images. Newton realized this back in 1730, when he wrote in Opticks that “the Air through which we look upon the Stars, is in perpetual Tremor…The only Remedy is a most serene and quiet Air, such as may perhaps be found on the tops of the highest Mountains above the Grosser Clouds”.

Theoretically, telescopes of ever larger diameter should be able to resolve ever smaller features within astronomical images. But the blurring due to atmospheric turbulence is so severe that even today’s largest ground-based telescopes (8–10 m in diameter) cannot see any more clearly than the small 20 cm backyard telescopes used by amateur astronomers on weekend evenings.
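
A back-of-envelope comparison shows why. The atmosphere’s coherence length – the Fried parameter r0, typically 10–20 cm at visible wavelengths for a good site, a standard figure rather than one quoted here – caps the resolution of any uncorrected telescope larger than r0:

\[
\theta_{\mathrm{diffraction}} \approx 1.22\,\frac{\lambda}{D} \approx 0.016''
\quad (\lambda = 500\ \mathrm{nm},\ D = 8\ \mathrm{m}),
\qquad
\theta_{\mathrm{seeing}} \approx \frac{\lambda}{r_0} \approx 0.5''
\quad (r_0 = 20\ \mathrm{cm})
\]

So an 8 m mirror delivers only the half-arcsecond resolution of a 20 cm one unless the turbulence is corrected.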

To remedy this situation, astronomers have turned to adaptive optics, a technology that measures snapshots of the atmospheric turbulence and then corrects for the resulting optical distortion using a special deformable mirror (usually a small mirror placed behind the main mirror of the telescope). Since the turbulence is changing all the time, these measurements and corrections must be done hundreds of times a second.
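
In software terms the correction is a simple closed loop. The Python sketch below shows the core measure–reconstruct–apply cycle as a leaky integrator; the sensor and mirror callables and the gain values are illustrative assumptions for this sketch, not any observatory’s real interface:

```python
import numpy as np

def run_ao_loop(read_wavefront_slopes, apply_dm_commands, recon_matrix,
                gain=0.4, leak=0.99, n_iters=1000):
    """Minimal leaky-integrator adaptive-optics loop (illustrative sketch)."""
    commands = np.zeros(recon_matrix.shape[0])     # one command per mirror actuator
    for _ in range(n_iters):                       # repeated hundreds of times a second
        slopes = read_wavefront_slopes()           # measure the residual distortion
        error = recon_matrix @ slopes              # map sensor slopes to actuator space
        commands = leak * commands - gain * error  # integrate, with a small leak for stability
        apply_dm_commands(commands)                # reshape the deformable mirror
    return commands
```

Real systems add latency compensation and modal filtering, but the structure – measure, reconstruct, integrate, apply – is exactly this.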

Early adaptive-optics systems used light from a bright star to measure the turbulence. However, most objects of astronomical interest do not have bright stars sufficiently close by, and hence the sky coverage of adaptive optics was quite limited. Then, in the early 1980s, astronomers realized that they could use a laser to make an artificial “star” as a substitute. This insight greatly extended the reach of adaptive-optics systems, since lasers could be pointed in the direction of any observing target in the sky. In the past five years these laser “guide star” adaptive-optics systems have really come to fruition, to the point where today every major 8–10 m telescope sports its own laser beacon.

The lasers used in these beacons have respectable average powers of about 5–15 W (a typical laser pointer, in contrast, has a power of less than 1 mW). Indeed, federal regulations require US observatories to turn them off whenever aircraft approach; observatories also file their observing plans in advance with Space Command to avoid hitting sensitive space assets.

Two types of laser predominate. The first is a custom-built system that is tuned to the yellow 589 nm resonance line of neutral sodium, creating a guide star at an altitude of about 95 km by exciting naturally occurring sodium atoms in the Earth’s upper atmosphere. The second type is tuned to green or even ultraviolet wavelengths and uses Rayleigh scattering from atmospheric molecules and particulates to create a guide star at an altitude of 15–20 km. The advantage of green and ultraviolet lasers is that they are commercially available, making them much cheaper to use than adaptive-optics systems that exploit yellow light.

Thanks to laser guide-star adaptive optics, today’s 8–10 m telescopes have better spatial resolution at infrared observing wavelengths than the Hubble Space Telescope, simply because of the larger size of their mirrors. Proposed giant telescopes, such as the Thirty Meter Telescope, the Giant Magellan Telescope and the European Extremely Large Telescope, all plan to use multiple laser guide stars at the same time. This will allow astronomers to measure, and correct for, the turbulence in the entire 3D column of air above the telescope. These multiple-laser systems will use the techniques of tomography – similar to those used in medical imaging’s computerized axial tomography (CAT) scans – to reconstruct the turbulence profile, enabling adaptive-optics correction over much wider fields of view than are available today.

Atomic physics

William D Phillips is a physicist at the National Institute of Standards and Technology (NIST) in Gaithersburg, Maryland, US. He shared the 1997 Nobel Prize for Physics with Claude Cohen-Tannoudji and Steven Chu for cooling and trapping atoms with laser light

In the early 1970s I was a young graduate student in Dan Kleppner’s research group at the Massachusetts Institute of Technology, working on a thesis that involved making precision measurements with a high-magnetic-field hydrogen maser (masers were the microwave precursors to the laser, which was originally called an “optical maser”). Kleppner and Norman Ramsey had invented a low-field version of the hydrogen maser more than a decade earlier, and the high-field version was producing unprecedentedly accurate measurements of magnetic moments in atoms – a sort of zenith for this kind of atomic physics.

But then came a new development that would change the direction of work in Dan’s lab, in my career and in atomic physics as a whole: the first continuous-wave, commercial, tunable dye lasers. The lasing medium in these devices was an organic dye that lased over a far wider range of wavelengths than, for example, a helium–neon laser, where the gain medium is an atomic gas. The introduction of these devices meant that even those who were not experts in laser design and construction could, by tuning a laser to an atomic-resonance transition, explore a new domain of atomic manipulation where coherent light was the key tool.

Eager to play with these new toys, I asked Dan to suggest an additional thesis experiment using lasers. He agreed, and suggested that I study collisions of optically excited sodium atoms. I began to build the apparatus. Other students and postdocs in the group started new experiments as well. Each issue of the research journals brought an increasing number of laser-related papers, and each conference saw reports of new laser experiments.

The excitement of that time was palpable. New ideas and new experiments popped up everywhere. In 1978 I was inspired by Dave Wineland’s demonstration of laser cooling of ions at the National Bureau of Standards (now NIST) in Boulder, Colorado, and by an idea from Art Ashkin at Bell Laboratories to slow and trap a beam of sodium atoms. Later that year, when I went to the bureau’s labs in Gaithersburg, Maryland, I took my thesis apparatus with me and began to work on laser cooling and trapping of sodium.

For me, the excitement I felt in the 1970s in Dan’s lab has never waned. New kinds of lasers with different wavelengths, ever shorter pulse lengths, ever higher powers, ever narrower spectral widths and ever better stability have all made possible new kinds of experiments. Laser cooling of many more types of atoms and ions, plus giant cold molecules, atomic clocks ticking at optical frequencies, and non-classical states of light are just some of the paths into which lasers have led atomic, molecular and optical (AMO) physics.

Moreover, lasers have allowed AMO physicists to realize Bose–Einstein condensation, to create optical lattices and to study ultracold Fermi gases. Each of these has deepened the connections between AMO and condensed-matter physics. It may be that lasers and cold atoms will help to elucidate some of the outstanding problems in condensed matter, such as the origins of high-temperature superconductivity and the nature of fractional-quantum-Hall states that are useful for quantum computing.

Ever since they first became available, lasers have invigorated and reinvigorated atomic physics, and the adventure shows no signs of stopping.

Biophysics

Steven Block is a biophysicist at Stanford University, California, US

Over the past 10 years, it has become possible to do experiments in biophysics that were previously just pipe dreams. For example, I work in a field known as single molecule biophysics. In this area, the challenge is to study the molecules of life – the proteins, nucleic acids, carbohydrates and other chemicals that make us up – literally one molecule at a time. This is not easy to do, because all biomolecules are much too small to be seen using, say, a conventional microscope. Nonetheless, we are finding that they can be manipulated and measured, and the techniques that are involved in doing this often require lasers.

One technique that my lab has helped to pioneer is known as “optical tweezers”. The idea behind optical tweezers is that you can use the radiation pressure supplied by an infrared laser beam to capture and manipulate small objects – including individual proteins and nucleic acids – and move them around under a microscope. To do this, we hook tiny microscopic beads up to molecules such as DNA. Then we can use optical tweezers and optical traps to “hold onto” these beads and exert very tiny, controlled forces on the DNA molecules.

The lasers that we use for this have some pretty extraordinary properties – they are not like the laser in a laser pointer or in your CD player. We need to be able to hold a laser beam stably in space to within the diameter of a hydrogen atom, or about 1 Å, for several seconds at a time. This is because the base pairs in a DNA molecule are only separated by about 3.5 Å, and one of the things we are interested in studying is how the enzyme RNA-polymerase, which “reads” the genetic code, moves as it climbs the DNA ladder one base pair at a time.
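
For small displacements the trap behaves like a Hookean spring, which makes the forces easy to estimate. Taking a trap stiffness of about 0.3 pN/nm – a typical order of magnitude for such experiments, not a figure from this piece – resolving a single 3.5 Å step means controlling forces at the level of

\[
F = \kappa x \approx \left(0.3\ \mathrm{pN\,nm^{-1}}\right)\times\left(0.35\ \mathrm{nm}\right) \approx 0.1\ \mathrm{pN},
\]

far smaller than the piconewton-scale forces these enzymes can themselves generate.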

It is amazing that we can literally watch this happen, and it all depends on being able to shine laser light on the enzyme, scatter that light and measure displacements that are accurate down to an angstrom. We are constantly looking for lasers with higher power in single modes and better stability properties. Some of the new generation of diode lasers are now reaching the point where they can be used for these experiments, but for the most part they have not made it out of the lab yet. It will be very interesting once they do.

Defence

Jeff Hecht is a freelance science and technology writer based in Auburndale, Massachusetts, US, who has covered laser weapons since 1980

High-energy laser weapons – long the stuff of science fiction – have recently reached a turning point. But it is not the one you would expect if you saw news clips of a laser-armed Boeing 747 shooting down a target missile in February this year. Instead, the US military is planning to concentrate on thwarting attacks from short-range targets such as rockets, mortars and artillery shells.

Modern laser weaponry dates from about 1980, when the main goal was to develop high-energy lasers capable of destroying missiles launched hundreds or thousands of kilometres away. Indeed, US President Ronald Reagan’s “Star Wars” programme spent billions on plans for orbiting laser battle stations. But tough technology problems and the end of the Cold War changed the requirements. The result was the Airborne Laser (ABL): a Boeing 747 equipped with a megawatt chemical oxygen–iodine laser and designed to shoot down missiles launched by a “rogue state”.

But in May 2009, US defence secretary Robert Gates reported that the ABL (long plagued by budget and deadline overruns) had a lethal range of less than 140 km – far short of the planned minimum of 200 km. So, after the current round of tests (conducted at an undisclosed shorter range), efforts to reach megawatt powers will start again with lasers that use diode-pumped alkali-metal vapours. Lasers of this type currently emit just tens of watts but may eventually offer a better power-to-size ratio than the ABL.

Until those plans get off the ground – if they ever do – the bold new future of laser weapons will be solid-state lasers that emit 100 kW or more in a steady or repetitively pulsed beam. It has already been demonstrated that kilowatt-class lasers can detonate unexploded ordnance left on the battlefield by illuminating it from a safe distance. The hope is that lasers in the 100–400 kW range could also destroy rockets, mortars and shells at distances of up to a few kilometres. The close range of these targets would greatly ease beam-propagation problems that hamper laser-based missile-defence systems. Moreover, by detonating explosives in the air with laser heating, rather than firing projectiles at them, laser-based weapons could reduce “collateral damage” to friendly soldiers and non-combatants.

In March 2009 US defence giant Northrop Grumman reported continuous emission of more than 100 kW for five minutes from a laboratory diode-pumped laser. This February, Textron Systems reached the same goal with its own design. These are by far the highest continuous powers achieved in a solid-state laser. The next step will be to engineer a 100 kW laser that works on ships, trucks and planes. The US Army is moving Northrop Grumman’s device to the High Energy Laser System Test Facility at the White Sands Missile Range, New Mexico, where it is planning to try out a mobile version installed in a heavy battlefield truck. Another defence agency, DARPA, is building a lightweight 150 kW solid-state laser for use in fighter planes, while the US Navy is planning tests of similar lasers at sea.

The lasers used in all these projects mark a radical departure in laser-weapon design. Earlier weapon-class lasers were chemically fuelled, but commanders did not want them on the battlefield because handling chemical fuels posed major logistical issues. They also wanted lasers that could be powered by diesel generators. But other formidable challenges remain, including damage to the laser itself, the need to operate in a dirty battlefield environment and the expected high cost of the devices.

Bagging a few test rockets should be easy. Engineering mobile lasers that work reliably in messy places where people are shooting at them is a much tougher problem. We will probably see prototypes blasting targets out of the sky within a few years, but do not expect battlefield deployment until the 2020s at the earliest.

Free-electron lasers

John Madey is director of the FEL Laboratory at the University of Hawaii, US, and contributed to the development of free-electron lasers

Like all lasers, free-electron lasers (FELs) rely on the principle of stimulated emission to amplify a beam of light as it passes through a region of space. In other words, as electrons move from a high- to a low-energy state, they emit photons of light all at the same wavelength and all moving in the same direction. But unlike the transitions between bound electronic states in other lasers, FELs exploit another of Einstein’s key discoveries – special relativity – to provide tunable electromagnetic radiation from a beam of relativistic free electrons as they move through a spatially periodic transverse magnetic field.

According to special relativity, the electrons perceive such a field as an intense travelling wave in their rest frame, with a wavelength reduced in proportion to their kinetic energy. Photons scattered by the electrons from this wave in the direction of their motion are reduced in wavelength once again when viewed from the laboratory frame. As a result, electrons with a kinetic energy of 50 MeV emit near-infrared radiation when moving through a field with a period of 2 cm. Light of longer and shorter wavelengths can be created by simply varying the energy of the electrons. FELs can readily provide laser light with about 1% of the instantaneous power of the electron beam – megawatts or more – and their pulse lengths can vary from less than a picosecond to full continuous-wave operation. Exceptional phase coherence is also attainable through the use of suitable interferometric resonator systems.
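
Those numbers can be checked against the standard undulator relation – a textbook formula, written here with the field-strength correction K shown but neglected in the arithmetic:

\[
\gamma = \frac{E}{m_e c^2} \approx \frac{50\ \mathrm{MeV}}{0.511\ \mathrm{MeV}} \approx 98,
\qquad
\lambda \approx \frac{\lambda_u}{2\gamma^2}\left(1 + \frac{K^2}{2}\right)
\approx \frac{2\ \mathrm{cm}}{2\times 98^2} \approx 1\ \mu\mathrm{m}
\]

– squarely in the near-infrared, as stated.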

Serious efforts to explore the possible applications of FELs began shortly after colleagues and I at Stanford University successfully demonstrated the first optical-wavelength FEL amplifiers and oscillators in 1974 and 1976, respectively. The focus since then has been on using FELs to do things that are tricky to pull off by other means. Perhaps the best known application is to generate tunable, high peak power, coherent, femtosecond X-ray pulses at energies above 1 keV to carry out time-resolved structural and functional studies of complex individual and interacting molecules. The first such X-ray FEL is now in operation at the SLAC National Accelerator Laboratory in the US, with the European X-ray Free-Electron Laser due to come online at the DESY lab in Germany in 2014.

Even for applications where other types of lasers might be adequate, the big advantage of FELs is that they are so flexible. FELs have therefore proved invaluable in carrying out exploratory research when the requirements of a particular application have not yet been determined or when a research team does not have the time or money to develop a new specialized laser system needed to support the application. The short-pulse, high-peak-power, third-generation FELs, which were pioneered in the 1980s, have been particularly useful for developing new surgical techniques and for exploring the energy levels, band structure and mobility of electrons and holes in new electronic and optical materials, without having to worry about longer probing laser pulses damaging the material.

The more recently developed high-average-power FEL systems have extended these capabilities to include research on possible laser applications for industrial-scale materials processing. Of at least equal significance are the improvements in remote sensing for climate-change research made possible by the broad tunability, high peak power, and exceptional spatial and temporal coherence offered by FELs at visible and infrared wavelengths.

There are, however, a few clouds on the horizon for FEL research. Historically, such research has mainly taken place at a handful of small and mid-sized university and government labs in the US, Europe and Asia. A recent transition to larger national labs has brought many scientific advances, but also runs the risk of making both the science and the technology less accessible to university scientists, who may be based far away from large central facilities. It will, therefore, be critical to ensure that the customer and support base for the technology remains aware of the FEL’s smaller-scale applications, not just the signature high-power and short-wavelength ones.

Finally, there are concerns that the suppliers of FEL-supporting technologies – including high-power microwave, ultrahigh vacuum and special optical materials – may not be able to continue these product lines, given the decreasing industrial markets for them. Wise governments should take the steps needed to ensure that the know-how on which these critical national capabilities rely is not lost.

Gravitational waves

Eric Gustafson is at the California Institute of Technology, US, and leads the instrument-science group of the LIGO gravitational-wave observatory

Often referred to as “ripples in space–time”, gravitational waves are generated during extremely violent astrophysical events in which the velocities of objects such as neutron stars or black holes change by substantial fractions of the speed of light over a very brief period of time. Detecting such waves is a challenging task because, for ground-based detectors, these changes in velocity occur on timescales between a fraction of a millisecond and a few tens of milliseconds. Measuring these tiny fluctuations in the curvature of space–time requires the use of very sensitive laser interferometers, in which beams of light travel down the perpendicular arms of the device, bounce off mirrors at the far end of each arm and then return to interfere with one another. The idea is that a passing gravitational wave should change the interference pattern in a characteristic way.

As laser technology has evolved, the lasers used in gravitational-wave experiments have changed with it. The first interferometric experiment designed to detect these waves, built by Robert Forward at California’s Hughes Research Laboratory in the early 1970s, used a 75 mW helium–neon laser and was about the size of a chessboard. Forward reached an impressive sensitivity with this device, measuring the smallest vibrational displacement that had been detected with a laser to date: 1.3 × 10⁻¹⁴ m/√Hz – equivalent to measuring changes of less than 2 mm in the distance from the Earth to the Sun. However, the poor power-scaling properties of the helium–neon laser meant it did not have a future in gravitational-wave interferometry beyond table-top experiments.

During the 1980s, several groups around the world built interferometers in ultra-high-vacuum systems, with their optics suspended to isolate them from ground noise. These experiments were between one and a few tens of metres in size and used argon-ion lasers, which operate at a wavelength of 514 nm and output several watts of power. Such interferometers were usually designed to study specific problems in gravitational-wave interferometry, such as comparing different optical configurations, finding ways to control the suspended optics, characterizing noise in subsystems such as mirrors, and developing length and alignment control signals for the suspended optics.

Unfortunately, the plasma tubes used in argon-ion lasers, along with the cooling water they require, produce high levels of laser-frequency noise. What is more, the relatively short lifetimes of these tubes made them impractical for use in an observatory. Finally, the power output of the lasers – while higher than a helium–neon laser – was short of the hundreds of watts that more advanced detectors were understood to require, thanks to the fact that at high frequencies the detector sensitivity is limited by shot noise.
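
The power requirement follows from a textbook scaling not spelled out above: for a simple Michelson of arm length L operating at wavelength λ, the shot-noise-limited strain sensitivity improves only as the square root of the light power P,

\[
h_{\mathrm{shot}} \sim \frac{1}{L}\sqrt{\frac{\hbar c \lambda}{2\pi P}} \;\propto\; \frac{1}{\sqrt{P}},
\]

so going from a few watts to a few hundred watts buys roughly a factor of ten in the high-frequency noise floor.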

In the 1990s, as the current group of kilometre-scale observatories (LIGO in the US, VIRGO in Italy and GEO in Germany) were being planned and built, diode-pumped solid-state lasers became available. These lasers not only had much lower levels of frequency noise than argon-ion lasers, but also the potential to produce much higher power. Initially, their maximum power output was about 10 W, but improved diode-pumped lasers and the use of injection-locked power oscillators or master-oscillator power-amplifier configurations made 100 W-class lasers possible for a new generation of interferometers. These new interferometers will be deployed over the next few years at LIGO and VIRGO, and will use 200 W lasers. Meanwhile, GEO will use a squeezed-light technique to produce better shot-noise performance at lower laser power. For space-based instruments such as the Laser Interferometer Space Antenna (LISA), diode-pumped solid-state lasers were selected not for their high power potential but for their very high efficiency and reliability, characteristics that are especially important for a space-based mission.

It is not clear exactly what lasers or wavelengths will be required for future ground-based detectors. We may see slightly longer wavelengths selected that can be used with new mirror-substrate materials that are opaque at 1064 nm; equally well, we might see shorter wavelengths that allow us to use thinner mirror coatings, thus reducing the thermal noise produced. It is possible that as researchers begin to look for the “right” wavelength to optimize sensitivity, we will find that we need wavelengths that can only be produced via the nonlinear frequency conversion of solid-state lasers – and so our choice of lasers may continue to evolve.

• See more interviews with laser experts on our multimedia pages.

Fusion’s bright new dawn


Three days after Theodore Maiman demonstrated the first ruby laser at his laboratory in Malibu, California, in May 1960, a scientist a few hundred miles away at the Lawrence Livermore National Laboratory came up with an idea for using lasers to harness the power source of the stars. Although details of Maiman’s device would not emerge for several weeks, scientists already knew that a laser’s ability to concentrate energy in time and space would be unprecedented. Might it be possible, the Livermore scientist wondered, to use lasers to fuse small atoms together to create a heavier, more stable atom – releasing huge amounts of energy in the process?

Thanks to the levels of secrecy prevalent at the time concerning atomic matters, it would be another 12 years before the scientist in question, John Nuckolls, articulated his ideas about laser fusion for the broader scientific community. Writing in Nature, Nuckolls and his colleagues explained that in order for their scheme to work, a large-scale laser had to be built – one that could compress and heat the fusion fuel to a temperature of 10⁸ K and densities 1000 times that of liquids, conditions that surpass even those found at the centre of the Sun.

Nuckolls’ team predicted that a laser with an energy of 1 kJ and a pulse length of a few nanoseconds would be sufficient to initiate the process, although a much larger laser (a few megajoules, it was estimated) would be required to produce a substantial energy output. Scientific excitement over this idea – coupled with a succession of energy crises in the 1970s and 1980s – led to the construction of a series of increasingly large lasers to test the concept. Unfortunately, these experiments proved that the journey would be much harder than predicted: the threshold itself was likely at the megajoule level, thanks to the need to overcome a range of instabilities that hampered efforts to couple laser energy to the fuel and then compress it to the required densities.

Yet after years of intermittent successes and setbacks, we are finally entering a truly exciting period in the world of laser fusion. The past decade has seen unprecedented sums of money invested in the field, with the principal aim of demonstrating, once and for all, that the science of laser fusion really works. The recently completed US National Ignition Facility (NIF), located at the same lab where Nuckolls had his big idea 50 years ago, is among the most tangible results of this effort (see “The National Ignition Facility”). And a little over a year after NIF officially opened, scientists there are now on the brink of a breakthrough: crossing the required threshold for the instigation of a self-sustaining fusion reaction, leading to a net release of energy for the first time.

The achievement of this 50-year-old goal – known technically as “ignition” – will be a game-changing event that will propel laser fusion from an elusive phenomenon of physics to a predictable, controllable, technological process ready to address one of society’s most profound challenges: finding an enduring, safe and environmentally sustainable source of energy. The NIF plan is to ensure that this milestone is reached within the next two years.

Making a star in the lab

The history of fusion can be traced back to 1920, when Francis William Aston discovered that four separate hydrogen nuclei are heavier than a single helium nucleus. This occurs because helium’s greater nuclear binding energy gives it a lower overall rest mass. On the basis of this work, another British scientist, Arthur Eddington, proposed that the Sun could get its energy from converting hydrogen nuclei into helium nuclei, releasing just less than 1% of the mass as energy, according to Einstein’s famous equation E = mc². Then, in 1939, Hans Bethe distilled these facts into a quantitative theory of energy production in stars, which eventually won him the 1967 Nobel Prize for Physics.
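
Aston’s result is easy to verify with modern mass values (standard data, not from the article):

\[
4\,m(^1\mathrm{H}) \approx 4.0313\ \mathrm{u},\qquad
m(^4\mathrm{He}) \approx 4.0026\ \mathrm{u},
\]
\[
\Delta m \approx 0.0287\ \mathrm{u} \approx 0.7\%\ \text{of the initial mass},\qquad
E = \Delta m\,c^2 \approx 0.0287 \times 931.5\ \mathrm{MeV} \approx 27\ \mathrm{MeV}
\]

– Eddington’s “just less than 1%”.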

Although the Sun and other stars generate fusion by using their gravitational energy to compress hydrogen (and subsequently heavier elements), for any terrestrial effort it makes more sense to use a fuel source composed of deuterium and tritium. These isotopes of hydrogen contain one and two neutrons, respectively (see “Getting it together”). They have the highest cross-section for fusion since they have low charge (just a single proton each) and the proton and neutron(s) are not very tightly bound. In the basic fusion reaction, deuterium (D) and tritium (T) combine to form helium and a very energetic neutron:

²D + ³T → ⁴He (3.5 MeV) + n (14.1 MeV)
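
The uneven split of the 17.6 MeV released is itself a one-line consequence of momentum conservation: the two products fly apart back to back, so the lighter neutron carries most of the energy,

\[
m_{\mathrm{He}}v_{\mathrm{He}} = m_n v_n
\;\Rightarrow\;
\frac{E_{\mathrm{He}}}{E_n} = \frac{m_n}{m_{\mathrm{He}}} \approx \frac{1}{4}
\;\Rightarrow\;
E_{\mathrm{He}} \approx \tfrac{1}{5}\times 17.6 \approx 3.5\ \mathrm{MeV},\qquad
E_n \approx \tfrac{4}{5}\times 17.6 \approx 14.1\ \mathrm{MeV}
\]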

In order for this reaction to take place, the particles need to be moving at very high velocities to overcome the Coulomb barrier, since the positive ions experience an increasingly strong repulsive force as they get closer and closer together. This means that the fuel needs to be heated to an incredible 10⁸ K. Under these conditions, electrons are stripped from their parent nuclei, turning the fuel into a plasma.

The need to create high-temperature plasmas for fusion to occur explains why fusion is not a process we encounter in everyday life on Earth, and why it is so incredibly difficult to harness as a net source of power. On a positive note, this does introduce one major benefit: unlike nuclear fission, which can lead to an uncontrolled “chain reaction”, the fusion process is inherently safe since the fuel “wants” to be inert, and thus loses energy at any opportunity. And thanks to the stars, we know categorically that fusion works – we just need to find an alternative to the Sun’s use of gravity to provide the heating and confinement of our fuel.

There are two principal routes to achieving confinement: we can either hold the plasma in a magnetic field while heating it using radio waves or particle beams; or we can compress it to unprecedented densities using lasers. The first approach is being pursued through the ITER magnetic-confinement fusion experiment currently being built in Cadarache, France, while the latter is being studied at a handful of labs – including NIF – using some of the world’s largest lasers.

How laser fusion works

The laser route to fusion neatly combines two of Einstein’s most famous contributions to science: his explanation of stimulated emission; and his quantification of the equivalence of mass and energy. The basic approach is a repetitively cycled system in which ball-bearing-sized pellets of deuterium–tritium fuel (see “On target”) are injected into the centre of a large, empty chamber. A number of powerful laser beams are used to compress the fuel to densities of 1000 g cm⁻³, or about 100 times the density of lead, for a few millionths of a millionth of a second (10⁻¹² s). Of course, this high-density fuel will subsequently blow apart – but not instantaneously. It will persist at high densities on a timescale determined by its inertia and characterized by the time taken for a sound wave to propagate across the imploded assembly. This “self-confinement” phenomenon has led to the process being called “inertial-confinement fusion”, and it gives the system sufficient time to allow a substantial fraction of the fuel (typically 30%) to be converted to helium and a neutron.
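
The quoted 30% burn-up is consistent with the usual rule of thumb for inertial confinement – a standard estimate, not given in the text – in which the burn fraction Φ depends on the fuel’s areal density ρR:

\[
\Phi \approx \frac{\rho R}{\rho R + 6\ \mathrm{g\,cm^{-2}}}
\;\approx\; \frac{3}{3+6} \approx 30\%
\quad \text{for } \rho R \approx 3\ \mathrm{g\,cm^{-2}}
\]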

The first fusion reaction produces a helium ion that deposits its energy in the neighbouring fuel, thus allowing the high temperatures to be maintained and the fusion reaction to propagate through the fuel. The high-energy neutron, however, escapes, since it interacts only weakly with the charged plasma. The neutron’s energy is therefore carried into a thick “blanket” of material surrounding the interaction chamber, heating the blanket to about 1000 K. In a fusion power plant, the process would be repeated about 10 times per second, and the heat would be used to drive an advanced gas-turbine cycle, thereby generating electricity.

The physics underpinning laser fusion is actually quite well understood. Moreover, thanks to a series of experiments performed by UK and then US scientists in the 1980s (see Physics World March p23, print edition only), we know that ignition and energy production can be attained here on Earth if we have a sufficiently powerful driver. These experiments, which used the X-ray output of an exploding thermonuclear bomb to implode the pellets, can be viewed as the ultimate “swords into ploughshares” demonstration. What remains is to prove that a laser can be used as the driving source, and to demonstrate that the emitted fusion energy can be harnessed at a level compatible with a full-scale power plant.

The deuterium in the fuel pellet is sourced from water, which naturally contains about one molecule of D2O for every 6000 molecules of H2O. The tritium, in contrast, must be manufactured in situ by bombarding lithium-6 atoms with neutrons, thereby transmuting the lithium into tritium and helium. Here, we can use a neat trick: if we construct the blanket surrounding the fuel pellet with lithium-6, we can use the neutrons produced in the fusion reaction to generate more tritium (as well as producing the heat for the electricity turbine). In practice, it is a little more complicated than this, because we have to ensure that there are enough excess neutrons to create a closed fuel cycle; however, this can be achieved by adding other materials (principally lithium-7, beryllium or lead) to the blanket.
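
The underlying neutron reactions (standard nuclear data, not spelled out above) show why both isotopes matter: lithium-6 breeds tritium exothermically, while lithium-7 regenerates a neutron and so helps close the fuel cycle:

\[
{}^{6}\mathrm{Li} + n \;\rightarrow\; {}^{4}\mathrm{He} + \mathrm{T} + 4.8\ \mathrm{MeV},
\qquad
{}^{7}\mathrm{Li} + n \;\rightarrow\; {}^{4}\mathrm{He} + \mathrm{T} + n - 2.5\ \mathrm{MeV}
\]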

On the laser side, Nuckolls’ original predictions that a relatively small-scale laser would be sufficient to create the required conditions turned out to be correct only if there is freedom to drive the implosion at an arbitrarily high velocity. This is not possible due to various unstable, nonlinear processes in which the laser can set off electron or ion “waves” in the plasma, or cause the imploding fuel to break up prior to reaching high compression. For example, when high-intensity lasers heat matter, they can resonantly drive an oscillation in the plasma, thus causing the light to scatter off the plasma wave and preventing the fuel from absorbing it efficiently. If the laser intensity is too low, however, then the pellet implosion is driven at such a low velocity that any imperfections arising from surface roughness or laser non-uniformities seed the growth of hydrodynamic instabilities, leading to total break-up of the imploding shell prior to full compression.

It has taken many decades to adequately understand these processes, and their existence has meant that a laser roughly 1000 times the scale originally envisaged by Nuckolls has to be used. The lasers at NIF – which have been performing remarkably well in their initial phase of operation – are designed to mitigate the growth of these plasma and hydrodynamic instabilities. Much attention has been paid to ensuring a sufficiently “smooth” laser beam, with control over its temporal profile to allow quasi-isentropic compression of the fuel by launching a series of precisely tailored shocks.

From fusion to electricity

Fusion physicists are so confident that NIF will be able to “ignite” a self-sustaining fusion reaction that attention is now turning to the endgame. The next problem is how best to harness the emitted neutrons in a manner compatible with a robust, commercially viable power plant. Such a plant would operate conceptually like a car engine, with three key stages.

In the first step, fuel – in the form of a ball-bearing-sized pellet of frozen hydrogen isotopes, held at temperatures of about 18 K – is injected into a multi-metre-diameter vacuum chamber. Next, a laser “piston” compresses the fuel by heating the outer surface of the pellet to create a hot, spherically expanding gas. In order to conserve momentum, the rest of the pellet is forced to move rapidly inwards at velocities of more than 10⁵ m s⁻¹. The degree of compression achieved in this process is similar to squashing a basketball down to the size of a pea.

In advanced schemes – analogous to a petrol engine – a separate laser is then used as a “spark plug” to ignite the fuel at the instant of maximum compression. Adding in this extra laser could lead to a more efficient (higher gain) system, but it is not an essential requirement: if we compress the fuel enough, the compression alone will generate enough heat to create a hot “spark” at the centre of the imploding fuel. When the temperature is high enough, and enough mass has been imploded to an appropriately high density, fusion is initiated in a self-sustaining manner. The helium nucleus from one reaction heats the neighbouring fuel, while the neutron escapes to heat the external blanket to generate electricity.

The final step occurs when the spent fuel is exhausted out of the chamber. At this point the cycle repeats. In a car engine, the fuel cycle is repeated about 50–100 times per second. The repetition rate for laser fusion is lower: 10 times a second would be enough to produce electricity on the gigawatt scale, comparable to the largest coal, gas or fission power stations. However, that rate is simply not possible with NIF, which fires only once every few hours. New technology is needed to convert the scientific demonstration on NIF into a constantly cycling system that can generate electricity.
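
The gigawatt claim checks out with round numbers – the per-shot yield and conversion efficiency below are illustrative assumptions, not HiPER or NIF design figures:

\[
P_{\mathrm{elec}} \approx 300\ \mathrm{MJ\ per\ shot} \times 10\ \mathrm{Hz} \times 0.35 \approx 1\ \mathrm{GW}
\]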

One project that aims to bridge the gap between achieving ignition and building a practical fusion power plant is the High Power laser Energy Research facility, or HiPER. Led by the UK and involving a 10-nation consortium of researchers and funding bodies, HiPER’s goal is to demonstrate the 10 Hz level of performance of all the component technologies for power-plant-scale operation within the next 10 years. To do this, we hope to draw on innovations that are taking place elsewhere in laser science, including the high-repetition-rate technology used in the welding and machining industry, and several ongoing high-power-laser research projects. One example of the latter is the Extreme Light Infrastructure (ELI) project, a €750m effort led by the Czech Republic, Hungary and Romania (see pp12–13, print edition only) that seeks to create laser pulses with peak powers of up to a few hundred petawatts (about 10¹⁷ W) using the same type of diode-pumped laser technology that HiPER will require (see “Laser technology for fusion power”).

Over the past few decades, lasers have developed at an incredibly fast pace, allowing fusion researchers to take advantage of rapid increases in power and efficiency. Using lasers also allows us to adopt a modular, maintainable and easily upgraded approach to power-plant design during HiPER’s second phase, in which we plan to build a facility that combines the scientific demonstration of ignition at NIF with high-repetition-rate laser technology. This modular strategy should reduce the timescale for construction, increase power-plant availability throughout its life, and ensure that we find the most cost-efficient solution.

At the same time as Europe is devoting resources to HiPER, US scientists are planning a similar journey with the aptly named LIFE project (Laser Inertial Fusion Engine). Led by the scientists who worked on NIF, this project has the same goal as HiPER: to demonstrate the required high-repetition-rate technology, integrated into a power-plant-scale facility. Scientists in Japan, meanwhile, have well-defined plans for demonstrating the “petrol engine” approach to power generation described above. Thanks to these efforts, it is looking increasingly likely that reaching ignition at NIF will remove the question of whether laser-fusion power will be achieved, to replace it with the more political question of who is likely to deliver the first working power plant.

Towards a working power plant

The achievement of ignition at NIF will provide the ultimate verification of the scientific basis of laser-fusion energy, marking the culmination of 50 years’ effort. Yet the second milestone – a working fusion power plant – is the real goal, motivated by the demand for a sustainable, low-carbon economy. As we have already seen, the principal ingredients in fusion are deuterium, which is found in water, and lithium, which occurs naturally in igneous rocks and some types of clay, as well as in seawater. The Earth contains enough of both ingredients to last for millennia. In fact, based on current rates of electricity consumption in the UK, just one bathtub of water and the lithium from two laptop batteries would provide enough electricity for an individual’s entire lifetime.
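As a rough check of that claim, the Python sketch below counts the deuterium atoms in a bathtub of water and converts them into D–T fusion energy (the tritium partner being bred from the lithium). The bathtub volume, conversion efficiency and per-capita consumption are all assumptions made for illustration.

```python
# Rough check of the "bathtub of water" claim via D-T fusion energetics.
# Every input below is an assumed, illustrative figure.

AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13

bath_litres = 150.0              # assumed bathtub volume
d_fraction = 1.0 / 6400.0        # deuterium atoms per hydrogen atom
energy_per_reaction_mev = 17.6   # energy released per D-T reaction
thermal_to_electric = 0.4        # assumed conversion efficiency

moles_water = bath_litres * 1000.0 / 18.0   # ~8300 mol of H2O
deuterium_atoms = 2 * moles_water * AVOGADRO * d_fraction

electric_j = (deuterium_atoms * energy_per_reaction_mev * MEV_TO_J
              * thermal_to_electric)
electric_mwh = electric_j / 3.6e9

# Assuming roughly 5 MWh of electricity per person per year in the UK:
print(f"{electric_mwh:.0f} MWh, or about {electric_mwh / 5.0:.0f} years' supply")
```

With these inputs the bathtub alone yields around 500 MWh of electricity – comfortably a lifetime’s worth at UK consumption rates.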

Furthermore, fusion produces no greenhouse-gas emissions and has a low environmental impact over the life-cycle of a plant. The chief waste product is inert helium gas, and the residual radioactivity at the plant itself should be manageable using conventional decommissioning techniques over a period of 100 years. Fusion plants will have power outputs of as much as 1–2 GW, making them ideally suited as large, central facilities on the existing electricity-grid infrastructure. Other benefits include the high-temperature environment of the blanket, which could be used to generate hydrogen for fuel cells or even to desalinate water. These wider applications, as much as their electricity output, may be the crucial factor that will determine the commercial viability of early fusion power plants, and thus the timescale for delivery of the first generation of facilities.

In the meantime, laser facilities used in the pursuit of fusion can also be exploited for pure research. The topics range from studies of astrophysical processes such as nucleosynthesis, cosmic-ray generation, proto-stellar jets and planetary-nebulae formation, to research into the cores of gas-giant planets and the origins of the Earth’s magnetic field. The lasers could also underpin a host of fundamental studies in areas as diverse as atomic physics, nuclear science, turbulence and the creation of macroscopic quantities of relativistic matter.

Perhaps just as importantly, the component technologies used in fusion research – not least the highly efficient, high-power lasers themselves – open up a wide range of spin-off opportunities. These range from security screening for nuclear materials at ports and the production of medical radioisotopes to the treatment of deep-seated tumours via particle-beam therapy, the processing of materials for the aerospace industry and even the development of next-generation light sources.

Pursuing a future energy source based on lasers still faces huge technological challenges in advanced materials, micro-scale engineering, laser technology and integrated power-plant systems. But the wider market for high peak-power, high average-power laser systems allows the fusion field to build from a well-developed industrial base, and to borrow advances from other projects to accelerate the timescale to delivery. We have been waiting 50 years for the scientific proof that controlled fusion works. Now that this proof is almost upon us, we must capitalize on it so that we do not have to wait a further 50 years to see it used.

It's all relative

Einstein with his dog friend Hannah. Credit: Charlie Cantrell

By Louise Mayor

Meet Einstein, the world’s smallest horse. He is a pinto stallion, born last week weighing just 2.7 kg and standing only 35 cm tall.

I got in touch with Einstein’s owners, Charlie Cantrell and Dr Rachel Wagner from New Hampshire, US, to ask what inspired the naming of this cute little fella.

They replied to say that they named him Einstein because they felt it would be a reminder of two things. The first was “that one has to be intelligent when purchasing a miniature horse [as] they require a massive amount of specialized care.” The second was that “Einstein believed in compassion for all living creatures. He was an advocate of humane treatment of animals.”

“Oh yes, and we thought the name was befitting of him because he had such a huge head for such a little foal,” they added.

So it turns out that Einstein the foal is aptly named! Indeed, supporting the owners’ comments is this animal-friendly quote from Einstein: “Our task must be to free ourselves – by widening our circle of compassion to embrace all living creatures and the whole of nature and its beauty.”

Melting ice amplifies Arctic warming

A pair of researchers in Australia claim to have the best evidence yet that the pronounced warming in the Arctic is due to the melting of sea ice in that region. They warn that this ice–temperature feedback makes the polar region susceptible to further rapid warming.

The rise in Arctic temperatures near the land surface has been nearly twice the global average in recent decades – a phenomenon known as “Arctic amplification”. Climate scientists had expected this general trend, knowing that the removal of sea ice would lead to the Arctic absorbing more solar radiation. This is because sea water is less reflective than ice.
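The size of this effect can be illustrated with one line of arithmetic: the absorbed solar flux is the incoming flux times (1 − albedo). The albedo and insolation values in the Python sketch below are typical textbook figures, assumed purely for illustration.

```python
# Illustrative albedo contrast behind Arctic amplification. The flux and
# albedo values are assumed, typical-textbook figures.

insolation = 200.0      # assumed mean summer shortwave flux (W/m^2)
albedo_ice = 0.6        # sea ice reflects most incoming sunlight
albedo_ocean = 0.06     # open water reflects very little

absorbed_ice = insolation * (1 - albedo_ice)      # 80 W/m^2
absorbed_ocean = insolation * (1 - albedo_ocean)  # 188 W/m^2

print(f"Extra absorption over open water: {absorbed_ocean - absorbed_ice:.0f} W/m^2")
```

Replacing ice with open water more than doubles the absorbed flux in this toy example, which is why the feedback is so potent.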

In many climate models, however, these rapid temperature rises are also strongly influenced by changes higher up in the atmosphere – such as variation in cloud cover and water vapour. So it has been difficult to fully understand the contribution of melting sea ice. Much of the variation between models is because they use different data sets and they merge information in different ways.

Climate modellers gather climate measurements from a range of sources – weather stations, satellites, planes, etc – and feed these into forecasting programmes, which effectively “fill in the gaps”. The challenge is to find the most appropriate way to incorporate a mix of different observations, which combine atmospheric changes with conditions at the Earth’s surface. “Past efforts have encountered artificial trends that reflect changes in the number and type of observations and not in the real climate,” says James Screen, a climate scientist at the University of Melbourne.

‘Re-analysis’

Screen, working with his Melbourne colleague Ian Simmonds, claims to offer a more accurate picture of the changing Arctic climate. The researchers took data from an international “re-analysis project” by the European Centre for Medium-Range Weather Forecasts (ECMWF), which has been collating Arctic climate data for the past 15 years. The project’s latest data set, used in this research, is supposed to be the most consistent to date, incorporating archived data not available in previous analyses.

Screen and Simmonds discovered that warming has been most significant in the lower atmosphere close to the sea surface. The finding led them to link the temperature rises with melting sea ice, rather than processes higher up in the atmosphere, which would trigger warming over a wider vertical region. “It was previously thought that loss of sea ice could cause further warming. Now we have confirmation this is already happening,” Screen tells physicsworld.com.

If the Australian research team is correct then this could leave the Arctic facing further climate amplification, with the melting sea ice and the rising temperatures locked in a feedback system. “The most immediate impacts are likely to be local, for example on Arctic ecosystems and the indigenous communities. In the longer term, if the current trends continue, there is the threat of increased melting of the Greenland ice sheet,” says Screen.

Not all bad?

Jeff Ridley, a climate scientist at the UK’s Met Office, agrees that Arctic sea ice is highly sensitive to rising atmospheric temperatures. “There is absolutely no doubt that increasing global temperatures will result in more Arctic ice melt, eventually with September becoming ice-free,” he says.

Ridley is aware, however, of the difficulty of drawing firm conclusions about how the Arctic might respond in the future. “There is very considerable inter-decadal variability in the Arctic but the authors do a reasonable job with the length of re-analysis dataset available. The implications for Arctic ecosystems are not all negative, but there will be change,” he adds.

Screen and Simmonds intend to develop their research by looking more closely at the mechanisms by which diminishing sea ice might amplify Arctic temperature trends, and at the causes of the reduction in sea ice itself.

This research is published in this week’s Nature.

Antarctic conveyor belt revealed in detail

Researchers in Australia and Japan have offered our best view yet of a super-fast flow of water known to emerge from the depths below Antarctica. This flow, known as the Antarctic Bottom Water, feeds into the global ocean circulation, which is responsible for redistributing heat and salt throughout the world’s oceans.

Ocean scientists know that large quantities of near-freezing water, with temperatures of around –1.9 °C, sink at four main sites in the shelf seas around Antarctica. As these dense waters plummet to depth they gather momentum, entraining lighter waters, before veering north along the continental slope. The flow continues along the floor of the Southern Ocean, making its way along myriad channels until it finally reaches the adjoining ocean basins.

Very few specific details are known, however, about the flow of this Antarctic Bottom Water. Researchers know little, for instance, about the quantities of water involved in this process and the speeds at which they travel. These are the kinds of details that Yasushi Fukamachi at Hokkaido University, working with colleagues in Australia, has gleaned in this latest research. The team deployed a number of mooring stations, equipped with sensitive instruments, to track the flow of the Antarctic Bottom Water in the shelf seas around Antarctica.

Narrow and intense flow

The researchers discovered a narrow and intense northward flow extending through the water column east of the Kerguelen Plateau – an underwater volcanic structure 3000 km to the southwest of Australia. Current meters also reveal the waters to be flowing at speeds exceeding 20 cm/s at depths below 3000 m – the highest speeds yet recorded for water at such depths.

Fukamachi tells physicsworld.com that he was surprised by the “massive amount” of Antarctic Bottom Water carried by the deep currents. Indeed the researchers report a mean equator-ward flow of water colder than 0 °C exceeding 12 million cubic metres every second, which is compensated only partially by poleward flow. “Our measurement was not the first to measure flow and velocity of Antarctic Bottom Water. However, we believe that ours is the best focused, fine-scale measurement ever,” he says.
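A transport of 12 million cubic metres per second (12 Sv in oceanographers’ units) is consistent with the measured speeds once a plausible cross-section is assumed. In the Python sketch below, the channel width and layer thickness are assumptions for illustration, not values from the study.

```python
# Sanity check: volume transport = speed x cross-sectional area.
# The channel dimensions below are assumed for illustration only.

speed_m_per_s = 0.2     # from the current-meter readings in the text
width_m = 50e3          # assumed width of the deep current (~50 km)
thickness_m = 1000.0    # assumed thickness of the cold layer (~1 km)

transport = speed_m_per_s * width_m * thickness_m   # m^3/s
print(f"{transport / 1e6:.0f} Sv")  # 1 Sv = 1e6 m^3/s -> ~10 Sv
```

Even these conservative dimensions give a flow of order 10 Sv – the same order of magnitude as the transport the team reports.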

The findings shed light on the origins of the Antarctic Bottom Water, but more work needs to be done to track the full cycle of this flow. “We need to carry out the mooring measurement in the downstream regions of the Kerguelen plateau to evaluate the fate of the deep current to the east,” says Fukamachi.

Maintaining mild climates

Studying the Antarctic Bottom Water could give ocean scientists a clearer picture of deep water circulation throughout the southern hemisphere. This process, like North Atlantic circulation in the northern hemisphere, redistributes heat and salt to maintain the relatively mild global climate. It also supplies oxygen to the deep ocean.

Fukamachi warns that melting glaciers, caused by rising sea and air temperatures, could disrupt this circulation. “The resulting fresher surface water weakens the sinking of dense water around Antarctica and hence the outflow of the Antarctic Bottom Water,” he explains. “Although we do not have observational data to show the weakened outflow of the Antarctic Bottom Water, the ship-based observations carried out during our mooring recovery cruise show the rapid decline of salinity of the Antarctic Bottom Water throughout the Australian Antarctic Basin.”

James McWilliams, an atmospheric and oceanic scientist at the University of California, Los Angeles, agrees that the Antarctic Bottom Water is vulnerable to changes in the global climate system. “Anthropogenic global change is warming and freshening the surface waters in subpolar regions, thus weakening the forcing for sinking into the abyss,” he says. He adds, “How much the sinking patterns will actually change, and how global climate will change as a result, are subtle issues for modelling to assess.”

This research is published in Nature Geoscience.

Nanotube ‘fuzz’ boosts optical performance

A new device that controls light using an array of tiny gold structures coated with carbon nanotubes has been developed by physicists in the UK and Italy. Based on a “photonic metamaterial”, the device could find use in lasers and optical communications components.

Metamaterials are artificial materials containing arrays of tiny structures such as rods and rings that respond to light and other electromagnetic waves in unusual ways. For example, a metamaterial can be designed to have a refractive index that varies throughout the material, and which can even be negative in some cases. Such unique properties mean that metamaterials have already been used to make “super-lenses”, which beat the diffraction limit, and “invisibility cloaks” for microwaves.

The performance of such metamaterials has been limited by their relatively weak interaction with light. But now, Nikolay Zheludev and colleagues at the University of Southampton and researchers at the Italian Institute of Technology in Catanzaro have found a way around this problem by creating a new hybrid metamaterial that combines carbon nanotubes and a metamaterial made of gold. The resulting composite has an enhanced response to light – that is, its refractive index changes very strongly and quickly compared to other metamaterials when exposed to a light beam. Indeed, it reacts in less than a picosecond, as opposed to microseconds for liquid crystals, for example.

Groovy gold

The team fabricated the metamaterial in two stages. They began with a thin flat layer of gold that they bombarded with a beam of gallium ions to cut grooves about 25 nm wide. The grooves formed a regular array with each unit cell comprising a square with 500 nm sides, two of which are broken (see “Array of squares cut into gold”).

This structure is called a “plasmonic metamaterial” because its optical properties involve surface plasmons. These are collective oscillations of electrons on the surface of a metal that interact with light. Next, the team sprayed a solution of carbon nanotubes – tiny tubes with walls just one atom thick – onto the metamaterial and then dried it to create a fuzzy layer about 50 nm thick. They call this layer a nanoscale “feutre”, after the French word for felt.

Carbon nanotubes were used because they possess unique nonlinear optical properties and interact with both light and plasmons. They are also simple and cheap to make, robust, and easily integrated into waveguides and other photonic devices.

Q-switching components

The resulting structure could be used to make optical limiters – components that prevent a surge of power in optical networks. It could also find use in the “mode-locking” and “Q-switching” components of lasers that allow short pulses to be generated, as well as in nanoscale circuits that process optical data signals.

Akhlesh Lakhtakia of Penn State University told physicsworld.com that using carbon nanotubes in this way is an important first step towards creating commercial devices based on structured metallic inclusions, adding that the nanotubes could also effectively compensate for optical losses in finished devices.

Zheludev’s team now plans to develop other types of nonlinear, switchable and controllable metamaterials.

The work is described in Phys. Rev. Lett. 104 153902.

Evidence grows for tetraquarks

The existence of a new form of matter called a tetraquark has been given further support by the re-analysis of an experiment that has baffled particle physicists for the past two years.

In 2008 researchers on the BELLE experiment at the KEK Laboratory in Japan looked at how an excited state of the meson “bottomonium” decayed and were very surprised to find that one particular decay mode was much more common than expected.

Now, physicists in Germany and Pakistan have proposed an extraordinary explanation – instead of producing bottomonium, the experiment had created a new particle containing four quarks. If such tetraquarks do exist, it would lead to an extended quark model of exotic particles. It would also give physicists a deeper understanding of quantum chromodynamics (QCD) – the Standard Model’s theory of quarks and the strong force that binds them together.

In the 1960s physicists realized that hadrons – protons, neutrons, mesons and more – could be described in terms of their constituent quarks. Mesons are bound states of a quark–antiquark pair, whereas baryons (including protons and neutrons) are made up of three quarks or three antiquarks. The quark model won its pioneer Murray Gell-Mann the 1969 Nobel Prize in Physics and has gone on to predict the existence and properties of many different hadrons.
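One way to see how this bookkeeping works is to add up quark charges, which come in units of +2/3 (up-type) and −1/3 (down-type) of the elementary charge. The minimal Python sketch below is purely illustrative; the “~” prefix marking an antiquark is a convention invented here, not standard notation.

```python
from fractions import Fraction

# Toy quark-model bookkeeping: hadron charges from quark content.
# Quark charges are in units of the elementary charge e.

CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3), "b": Fraction(-1, 3)}

def charge(quarks):
    """Sum the charges; a leading '~' marks an antiquark (opposite charge)."""
    total = Fraction(0)
    for q in quarks:
        if q.startswith("~"):
            total -= CHARGE[q[1]]
        else:
            total += CHARGE[q]
    return total

print(charge(["u", "u", "d"]))          # proton      -> 1
print(charge(["u", "d", "d"]))          # neutron     -> 0
print(charge(["u", "~d"]))              # pi+ meson   -> 1
print(charge(["b", "~b"]))              # bottomonium -> 0
print(charge(["b", "u", "~b", "~u"]))   # a neutral tetraquark -> 0
```

The last line hints at why a four-quark state can mimic neutral bottomonium in this simple accounting – the charges balance in just the same way.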

Exotic bound states

However, QCD allows other, exotic bound states to exist. One of these is the tetraquark, which comprises two quarks and two antiquarks. For decades, particle physicists have been curious about the existence of tetraquarks, and in recent years experiments have become sensitive enough to see hints of them.

If tetraquarks exist, there’s a good chance that they will be seen by physicists working on electron–positron colliders at KEK in Japan and SLAC in California. Both facilities can be tuned to produce excited states of “heavy quarkonia” mesons such as bottomonium, which is made from bottom and anti-bottom quarks. Both BELLE and the BaBar experiment at SLAC are designed to measure the decays of these short-lived particles and search for small deviations from theoretical predictions. So far both experiments have spotted several unmistakable anomalies.

Baffling bottomonium results

In 2008 BELLE physicists were studying decays of the highly excited Y(5S) state of bottomonium. According to QCD, an excited state of Y will rarely decay into one of its less excited states and a pair of charged pi mesons (pions). However, when BELLE measured this decay channel for Y(5S), the observed rates were several orders of magnitude larger than expected.

One possible explanation is that the electron–positron collisions tuned to form Y(5S) may actually have been producing a different particle altogether – a tetraquark meson, Yb(10890). Ahmed Ali and Christian Hambrock of the DESY laboratory in Germany, together with Jamil Aslam of Quaid-i-Azam University in Pakistan, have been investigating this hypothesis.

Ali explains why it makes sense: “If we assume that tetraquarks exist, we can consider what masses are possible, and how they will decay. We find that the Yb(10890), a tetraquark meson formed of a “diquark” (a bound pair of a bottom quark and an up quark) and the corresponding antidiquark, is very close in mass to Y(5S). It can decay to Y(2S) and a pion pair in several ways, and if we calculate these we find that it can reproduce the data.”

Though not conclusive, this evidence further supports the possibility that tetraquarks exist. “If established to be true”, says Ali, “this is a brand new form of matter.”

Mystery solved?

So is the mystery solved? Not quite, according to Hambrock, who explains: “This is an indication, but not proof. From this result alone we can’t be certain that the two diquarks were truly in a bound state.” There are also other ideas for what could be causing the increased rates, as BELLE’s co-spokesperson, Tom Browder of the University of Hawaii, points out. “Perhaps some other mechanism [in the interaction of Y(5S)] could explain the results. We are experimentalists and have to keep an open mind.”

If Yb(10890) is the cause of the anomaly, there is another clue waiting to be uncovered, according to Ali. “If we are right, it should actually be a combination of two barely distinguishable tetraquarks, the other formed of a bottom–down diquark and its antidiquark, with an almost identical mass,” he explains. With a short data run scheduled in May this year, BELLE physicists will try to deduce whether Ali’s prediction is correct.

The research is described in Phys. Rev. Lett. 104 162001.

Dancing to the Fusionopolis beat

The roof garden at Fusionopolis

By Michael Banks in Singapore

It is not every day that I get to dance with the popstar Shakira, but that is what I did this morning. Well, in the world of virtual reality at least.

Today I visited what is known as the Fusionopolis complex here in Singapore. The centre, which consists of three separate towers, each 100 m tall, contains government-funded research institutes, international companies and a range of facilities such as a gym, a theatre and restaurants – all under one roof.

The idea of Fusionopolis is to attract multinational companies from all over the world as well as local firms to establish a research and development base in the country.

Six government-funded applied-research institutes will eventually be housed in Fusionopolis, each encouraged to collaborate and share its facilities with companies that set up a base there.

Venturing into Fusionopolis this morning did feel a little like taking a trip into the future. I first visited FusionWorld, which showcases some of the applications that have emerged from research carried out by the six institutes.

One example was the so-called “transparent self-cleaning windows” – essentially glass coated with a substance that reacts with dirt and grease, turning them into something that is easily washed away when it rains.

But then the tour turned to dreamier concepts. Indeed, one involved sleep itself – researchers have developed a sensor that can be put into a mattress and is sensitive enough to detect breathing and whether someone moves around in the bed. One application is in hospitals, where an alarm can be sent to a doctor or nurse if a patient stops breathing.

I will leave it to your imagination to picture the “demonstration home”, which included a kitchen that can tell you when you run out of eggs or if they are out of date, a toilet that can diagnose ailments, or a computer display that can be controlled with a laser pointer.

By the end, Fusionopolis had a slight whiff of Jurassic Park about it and I thought that at any moment they would pull away the curtains to reveal tiny dinosaurs walking around in a pen. But maybe that was my imagination running away with me.

One of my next stops in Fusionopolis was the Institute for Infocomm Research, where researchers are developing new wireless network protocols as well as new methods of data compression for audio files.

The next phase of Fusionopolis

In one room they have been developing a virtual reality system that can process a person’s movements and then display that action through a computer-generated figure on a 3D screen.

The perfect demonstration of such a technology is, of course, dancing. So after some persuasion I stepped (or was forced) onto the dance floor where I “threw some shapes”.

The system works by picking up movements with a camera, which then sends a signal to the computer that immediately renders the 3D figure to copy your dance moves (in my case the figure was Shakira, but you could select other characters).

After taking off my dancing shoes, we moved quickly to another virtual-reality room. Unfortunately, I did not get to test the “tennis simulation room” where players can run around (it even has artificial turf) wearing 3D glasses and pretending to hit a virtual tennis ball at the screen, which itself covers the whole wall.

Apparently the system was being upgraded so that it can handle two players instead of just one. I was told by Susanto Rahardja, director of research of the Infocomm institute, that the system was sensitive enough to detect whether your wrist movement would put top-spin on the ball.

Fusionopolis is certainly a vibrant place and you get the sense that there is a lot of excitement about the project, which is expected to be fully complete by the end of the decade once all six institutes are housed there.

Indeed, companies are flocking to join the complex, including computer giant HP, which announced in January that it would open a research centre in Fusionopolis. Two other buildings are currently being constructed for Fusionopolis in what is known as phase 2A and phase 2B, which will be able to house more companies and facilities.

Overall the technology and research on show was quite impressive. Indeed, when browsing the glossy brochures after my visit I was almost expecting to see videos and hear audio straight from the pages. I didn’t, of course, but possibly that is something the Fusionopolis researchers are working on right at this minute.

Site chosen for European superscope

A site at Cerro Armazones in Chile has been chosen for a new €1bn super telescope being planned by the European Southern Observatory (ESO). The site for the European Extremely Large Telescope (E-ELT) was selected yesterday by the ESO Council from a shortlist of five, beating three other sites in Chile and one at La Palma in the Canary Islands.

Cerro Armazones, which is about 20 km from ESO’s existing Paranal Observatory, was picked because it has the “best balance of sky quality across all aspects” and because the 42 m diameter E-ELT could then be operated in “an integrated fashion” with the Paranal Observatory, which is home to ESO’s existing Very Large Telescope (VLT).

“Adding the transformational scientific capabilities of the E-ELT to the already tremendously powerful integrated VLT observatory guarantees the long-term future of Paranal as the most advanced optical/infrared observatory in the world,” explained Tim de Zeeuw, ESO’s director general.

‘Ambitious project’

“This is an important milestone that allows us to finalize the baseline design of this very ambitious project, which will vastly advance astronomical knowledge,” de Zeeuw added.

E-ELT’s primary mirror will be 42 m in diameter, made from 984 smaller segments that are each 1.45 m wide. The secondary mirror will be up to 6 m in diameter. A tertiary mirror will pass the light on to a comprehensive adaptive-optics suite, consisting of another two mirrors, one of which will be continuously shape-controlled by more than 5000 actuators, thereby correcting for any blurring caused by the Earth’s atmosphere.
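As a quick consistency check, the combined area of the segments should roughly match that of a 42 m circle. In the Python sketch below, treating each segment as a regular hexagon with a corner-to-corner size of 1.45 m is an assumption, since the article does not specify which dimension is meant.

```python
import math

# Consistency check: do 984 hexagonal segments add up to a 42 m mirror?
# We assume 1.45 m is each hexagon's corner-to-corner size.

n_segments = 984
d_corner = 1.45                                   # m, assumed
seg_area = (3 * math.sqrt(3) / 8) * d_corner**2   # regular-hexagon area

total_area = n_segments * seg_area                # ~1340 m^2
full_circle = math.pi * (42.0 / 2) ** 2           # ~1385 m^2

print(f"Segments: {total_area:.0f} m^2; 42 m circle: {full_circle:.0f} m^2")
```

The small shortfall relative to the full circle is consistent with the central gap that such a mirror leaves around its optical axis.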

The telescope will be sensitive enough to detect reflected light from Jupiter-like and potentially Earth-like planets orbiting stars other than the Sun – and will try to probe their atmospheres using low-resolution spectroscopy. It will even be able to detect water and organic molecules in gas clouds around stars, thus providing clues as to which planets may become habitable in the future.

Construction of the facility is expected to begin at the end of 2010 and the telescope should be operational by 2018.

Two more big projects

E-ELT is not the only large telescope in the planning stages. The Thirty Meter Telescope (TMT) will be built by a Canada/US consortium at Mauna Kea in Hawaii and should also be operational by 2018. TMT’s 30 m diameter mirror will be made from 492 individual segments. The telescope will operate in wavelengths from ultraviolet to mid-infrared, enabling astronomers to study the origin and evolution of planets, stars and galaxies.

An Australia/US consortium plans to build the Giant Magellan Telescope (GMT) at Cerro Las Campanas in Chile. This instrument will have a primary mirror consisting of six 8.4 m diameter individual segments surrounding a seventh central mirror. The GMT could be working by 2018 and will seek to shed light on planets beyond our solar system, determine the nature of dark matter and dark energy, study the origin of chemical elements and investigate the growth of black holes. It will operate at visible, near- and mid-infrared wavelengths.
