
Invisibility cloak gives sound performance

By drilling holes in a piece of PVC and then filling them with soft plastic, scientists in Germany have built a device that can effectively make objects invisible to sound waves. The performance of the acoustic “invisibility cloak” exceeds that of existing electromagnetic devices and could open up new ways of manipulating waves, including the development of shields against seismic waves.

Invisibility cloaks are designed to hide objects from view by causing waves that would otherwise scatter off an object to instead pass around it as if it were not there. Much of the research in this area to date has concentrated on electromagnetic cloaking, and has included the construction of a device that reduces microwave scattering near a copper cylinder and the fabrication of “carpet cloaks” made from the mineral calcite that can hide objects lying on a surface.

The technique of transformation optics that is used to design and build the electromagnetic devices can also be used to develop cloaking against other kinds of waves. In 2009 Stefan Enoch of the Fresnel Institute in Marseille, France, and colleagues put forward a theoretical design for an acoustic cloak made up of concentric rings of materials with differing values of elasticity (Young’s modulus), and now a simplified version of this device has been made.

Concentric rings

The work was done by Martin Wegener, Nicolas Stenger and Manfred Wilhelm of the Karlsruhe Institute of Technology in Germany, who made the cloak from a 15 cm diameter, 1 mm thick disc of polyvinyl chloride (PVC), into which they etched holes arranged in 20 concentric rings. A circular region, just slightly larger than a two-euro coin, was left in the centre of the disc; this region constituted the item to be hidden. By filling the holes with polydimethylsiloxane (PDMS) and varying the size and spacing of the holes from ring to ring, the researchers were able to vary the Young’s modulus across the disc in such a way that elastic (sound) waves within a certain frequency range were bent round the central region and then reformed as if there had been no obstacle in the way.

Young’s modulus determines the speed of sound waves through a material, and by varying it across the disc, the team was able to manipulate incoming waves such that their velocity towards the centre of the cloak approaches zero while their velocity around the circumference of the cloak increases. This allowed the waves to travel around the central region and reach the far side of the cloak as quickly as they would have done had there been no cloak. “There is no difference in the time and direction of the emerging wave,” Wegener says. “So the situation is indistinguishable from there being nothing in the way.”
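As a rough illustration of how strongly the material contrast matters, the longitudinal sound speed in a solid scales as the square root of Young’s modulus over density. The sketch below uses generic handbook values for rigid PVC and soft PDMS; these numbers are my assumptions, not figures from the paper:

```python
import math

def sound_speed(youngs_modulus_pa, density_kg_m3):
    """Longitudinal sound speed in a solid: v = sqrt(E / rho)."""
    return math.sqrt(youngs_modulus_pa / density_kg_m3)

# Generic handbook values (illustrative assumptions, not from the paper):
v_pvc = sound_speed(3.0e9, 1400.0)    # rigid PVC: E ~ 3 GPa, rho ~ 1400 kg/m^3
v_pdms = sound_speed(1.0e6, 970.0)    # soft PDMS: E ~ 1 MPa, rho ~ 970 kg/m^3

print(f"PVC:  ~{v_pvc:.0f} m/s")
print(f"PDMS: ~{v_pdms:.0f} m/s")
```

A spread of three orders of magnitude in Young’s modulus thus gives a factor of roughly 50 in wave speed, which is what lets the cloak slow waves near its centre and speed them up around its rim. (The actual device guides flexural plate waves, whose dispersion is more complicated than this simple bulk formula.)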

To test their idea, the researchers placed a loudspeaker at one end of the PVC sheet containing the cloak. They then recorded the propagation of sound waves across the sheet using stroboscopic lighting and a camera positioned above. The researchers did the same with a sheet containing only the central obstacle and found that at frequencies between 200 and 400 Hz the presence of the cloak allowed the waves to propagate essentially as they would have done with no obstacle in the way, whereas without the cloak the waves were broken up by the obstacle.

Simple and impressive

John Pendry of Imperial College London, who put forward the first design for an electromagnetic invisibility cloak, describes the latest results as “an impressive achievement”. He says that “control of sound has so far lagged behind its optical counterpart” but points out that a number of research groups are now building acoustic cloaks. Ulf Leonhardt of St Andrews University in the UK, who has also designed invisibility cloaks, emphasizes the simplicity of the new device. “I am always amazed how much one can do just by drilling some holes in plastic,” he says.

In an article accompanying the paper describing the work, Ross McPhedran of the University of Sydney in Australia and Alexander Movchan of Liverpool University in the UK say that Wegener’s group has “presented the clearest demonstration of effective cloaking in the literature to this point”. They note that the device not only leaves the passing wave pattern more intact than any other cloak built to date, but also that it covers the greatest bandwidth – one octave, which in terms of electromagnetic waves is more than would be needed to span the whole visible spectrum. They also say that the range of Young’s moduli within the cloak – three orders of magnitude – is “highly advantageous in bending waves round obstacles and would be difficult, if not impossible, to achieve in the corresponding optical situation”. Another advantage they highlight is that the device can be made using precision workshop equipment rather than requiring lithographic printing, as is necessary for many electromagnetic cloaks.

As such, they say, acoustic cloaks “provide an exciting test ground for new techniques to control waves in unprecedented ways” and could be used in applications such as sensing and communications technology. “An obvious possibility much further down the track is in control of seismic waves”, the pair told physicsworld.com.

Steven Cummer of Duke University in the US takes a similar view. “I can imagine that using this one might be able to engineer structures that can reduce vibration in critical locations, and thus might be made lighter or stronger than otherwise,” he says. “And that’s the sort of thing that would be valuable in a lot of different applications.”

The great life of Joseph Rotblat

By Hamish Johnston


This week’s episode of the radio programme Great Lives focused on Joseph Rotblat, the Polish-British physicist and peace campaigner who died in 2005. The format of the programme involves a discussion of the person’s life with a celebrity admirer – in this case the UK’s Astronomer Royal Martin Rees – and an expert on the subject. The latter was Rotblat’s friend and colleague Kit Hill, who is also a physicist.

Rotblat (pictured right, courtesy Nobel Foundation) was born to Jewish parents in Warsaw in 1908. He narrowly escaped the Nazi occupation in 1939 when he travelled to Liverpool to work at the university. From there, he went to the US, where he worked on the Manhattan Project to develop the first nuclear weapons.

He was the only scientist to quit the project for conscientious reasons – after seeing first hand how difficult it was proving to make a bomb, he concluded that the Nazis had no chance of succeeding and therefore the Manhattan Project was no longer purely a defensive act. Upon returning to the UK, he devoted his scientific career to studying the effects of radiation on living organisms.

In 1955 he joined forces with Albert Einstein, Bertrand Russell and other leading intellectuals to issue the Russell–Einstein Manifesto that alerted world leaders to the dangers of nuclear weapons and warfare. This led to the founding of the Pugwash Conferences on Science and World Affairs, which shared the 1995 Nobel Peace Prize with Rotblat himself.

The most intriguing question that the programme’s host Matthew Parris put to Rees and Hill was why Rotblat appeared happy to work on nuclear weapons when he knew that they could be used to kill Germans, but recoiled from the idea when he realized that they looked destined to be used against other peoples.

You can listen to Great Lives on the BBC website.

Hooke to hang in London

By James Dacey


There’s no doubt that during his time the natural philosopher Robert Hooke was something of an outsider, depicted by his chroniclers as “jealous” and “mistrustful”. But over the past three centuries, historians have come to realize that Hooke may well have been a far more important Enlightenment figure than first assumed. Today, in a further attempt to reposition Hooke’s place in the history of science, a new portrait will be unveiled in London at the headquarters of the Institute of Physics – which publishes Physics World.

Hooke was part of the group of natural philosophers that formed the Royal Society, becoming the first curator of experiments in 1662. Records show that during his career Hooke’s research spanned a wide range of interests, including biological organisms, gas experiments and the nature of light. But despite the breadth of his work, Hooke is best known today for his eponymous law – which states that the extension of a spring is proportional to the force applied.

Hooke’s achievements began to be reassessed, however, around the tercentenary of his death in 2003, when several biographers re-explored his life and painted Hooke in a more favourable light. It was suggested that the credit for much of Hooke’s work ended up going to his contemporaries, including Robert Boyle and Isaac Newton.

These claims gathered further weight with the discovery in 2006 of Hooke’s folio in a Hampshire country house. In these notes, Hooke had detailed the minutes of meetings at the Royal Society during his tenure as curator of experiments. Among the revelations, the notes show that Hooke was the first to state that gravity causes the elliptical motion of the planets – an idea that Newton later developed into the famous inverse-square law.

It is hoped that this new portrait will further enhance Hooke’s standing. The picture will be unveiled today during a day of talks at the Institute’s headquarters organized to commemorate Hooke’s life. The work has been produced by Rita Greer, a history painter, who has depicted the natural philosopher with a notebook and quill under his right arm. In his left hand is a spring to represent Hooke’s law of elasticity.

It’s a slightly eerie image, where Hooke appears in the moonlight with bags under his eyes and his well-documented hunched back. But Greer is committed to helping Hooke get the credit he deserves for his work. “Robert Hooke, brilliant, ingenious 17th-century scientist was brushed under the carpet of history by Newton and his cronies,” she says. “When he had his tercentenary, there wasn’t a single memorial to him anywhere. I thought it disgraceful as Hooke did many wonderful things for science. I have been working on a project to put him back into history where he belongs – up with the greats.”

Ad astra! To the stars!

An alien spacecraft scouting out Earth’s scientific prowess last September may well have zeroed in on NASA’s Kennedy Space Center in Florida. But the aliens might have learned more if they had flown some miles west to the 100 Year Starship Study (100YSS) conference in Orlando. There they would have seen that human space technology is limited, but in observing the event’s hundreds of attendees – from ex-astronauts and engineers to artists, students and science-fiction writers – the aliens would also have encountered humanity’s adventurous, stubborn, mad and glorious aspiration to reach the stars.

Maybe this desire to literally travel to the stars by spaceship arises because these distant suns have always seemed to offer a high and remote plane of existence. Aristotle in fact placed the fixed stars furthest from Earth – the centre of his cosmology – and nearest the Prime Mover that causes cosmic motion. The phrase “sic itur ad astra”, or “thus one goes to the stars” – from the Roman poet Virgil – refers to reaching divinity or immortality. But another phrase – “per aspera ad astra”, or “through hardships to the stars” – reminds us that they are not easy to reach, except in science fiction that sidesteps the difficulties caused by the vast distances the journeys would entail.

Now, with the exploration of the solar system by the US space agency NASA and others well under way, and with the discovery of hundreds of exoplanets orbiting distant stars, it may be time to contemplate the next great jump outwards.

“To boldly go” – but not yet

100YSS was the first conference to enable experts, enthusiasts and the general public to gather and seriously consider interstellar travel. Surprisingly, it was not sponsored by NASA but by the Defense Advanced Research Projects Agency (DARPA) of the US Department of Defense, which is also putting money into the effort. DARPA supports novel military science, and its willingness to look at seemingly “fringe” ideas has paid off in the past, although building a starship might seem beyond even its wide embrace.

But as pointed out at the meeting by DARPA’s David Neyland, who started and organized 100YSS, military and civilian applications have come from advances in robotics, materials and other areas developed for use in space. Having sponsored other space-related work as well, DARPA has faith that unimaginable new ideas will emerge from a project to design and perhaps build a starship within a century. Although it does not necessarily envisage reaching a star in that time, the project would have to draw upon the very best in science and society.

It is a cosmic irony that although we thought sending people and machines through the solar system was the hard part, it was actually the easy part, compared with what it will take to cross the huge void between us and the stars. Current propulsion technology moves a spacecraft at only 0.005% of the speed of light, or 0.00005 c. That means a trip of some 80,000 years even to Alpha Centauri, the star system nearest the Sun but hardly a close neighbour at more than four light-years’ distance. For comparison, the furthest travelling human-made object ever, NASA’s Voyager 1 spacecraft, has in the 34 years since its launch in 1977 penetrated just 0.002 light-years into space.
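The quoted figures are easy to check with back-of-the-envelope arithmetic. In the Python sketch below, the Alpha Centauri distance of 4.37 light-years and Voyager 1’s roughly 17 km/s speed are standard values I am assuming, not numbers taken from the article:

```python
C_KM_S = 299_792.458   # speed of light, km/s

# Trip time to Alpha Centauri at 0.005% of c
v_frac = 0.00005                    # fraction of c, as quoted
d_alpha_cen_ly = 4.37               # light-years (assumed standard value)
trip_years = d_alpha_cen_ly / v_frac
print(f"Trip time: ~{trip_years:,.0f} years")

# Voyager 1: roughly 17 km/s sustained for 34 years
voyager_ly = (17.0 / C_KM_S) * 34
print(f"Voyager 1 distance: ~{voyager_ly:.4f} light-years")
```

The first figure comes out near 87,000 years, consistent with the article’s “some 80,000 years”; the second is about 0.0019 light-years, matching the quoted 0.002.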

That speed of 0.00005 c can probably be improved but only to values still well below c, so a spacecraft aimed at near or distant stars would have to maintain its inhabitants for decades or millennia. Launching or even seriously developing such a miniature world would require a massive investment in research, and in material and human resources. But before we get caught up in these details we must first figure out and overcome the problem of propulsion.

Getting up to speed

A starship needs a rocket engine that efficiently develops thrust, because the craft must accelerate for a long time to reach high velocity. This runs headlong into a catch-22: long-term acceleration means a craft crammed with fuel, the mass of which resists acceleration and allows only a small payload. Chemical rockets such as the Saturn V – which carried humanity to the Moon in three stages and burned a kerosene derivative or liquid hydrogen with liquid oxygen – just will not do. These produce high thrust but need lots of fuel to do so, giving a small push per kilogram of fuel.

This reasoning is quantified in the famous rocket equation, derived in 1903 by the Russian rocket pioneer Konstantin Tsiolkovsky from Newton’s laws of motion, which relates a rocket’s final speed to its exhaust velocity and fuel load. Using appropriate parameters, the rocket equation delivers the coup de grâce: like a camel that cannot carry enough feed to nourish itself as it plods into the desert, a chemical rocket cannot possibly carry enough fuel to keep going and reach a respectable fraction of c.
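A minimal sketch makes the camel analogy concrete: to reach even 1% of c on the best chemical exhaust velocity (about 4.4 km/s for hydrogen/oxygen, an assumed textbook figure), the required ratio of launch mass to final mass is absurdly large.

```python
import math

def mass_ratio(delta_v, exhaust_v):
    """Tsiolkovsky rocket equation: delta_v = v_e * ln(m0/m1), solved for m0/m1."""
    return math.exp(delta_v / exhaust_v)

c = 2.998e8          # speed of light, m/s
v_e_chem = 4.4e3     # ~best chemical exhaust velocity (H2/O2), m/s (assumed)

r = mass_ratio(0.01 * c, v_e_chem)
print(f"ln(m0/m1) = {0.01 * c / v_e_chem:.0f}")   # ~680
print(f"m0/m1 ~ 10^{math.log10(r):.0f}")          # ~10^296
```

A mass ratio of roughly 10^296 is far more fuel than there is matter in the observable universe, which is the coup de grâce the article describes.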

To reduce the fuel load, rocketeers are therefore exploring more efficient fuels as measured by the energy per kilogram they supply. The most effective source is matter–antimatter annihilation, which sounds like science fiction and in fact does power the spacecraft in Star Trek. The advantage of this process is that it yields the maximum possible energy-to-mass ratio of c² as it fully converts one into the other according to E = mc². But since we have to date made only fractions of nanograms of antimatter at CERN, antimatter propulsion does not seem to be a real possibility.

Nuclear fuels are less efficient than matter–antimatter annihilation but still yield millions of times the energy density of chemical fuels. In nuclear-pulse propulsion, proposed in the 1940s, a starship would drop fission or fusion bombs behind itself and detonate them against a pusher plate to thrust itself forward. Indeed, in 1958, under DARPA’s Project Orion, Freeman Dyson of the Institute for Advanced Study designed a massive craft driven to 0.033 c by hundreds of thousands of thermonuclear bombs. It could supposedly have been built with technology then current, but fortunately the 1963 Partial Nuclear Test Ban Treaty prevented further development of this frightful brute-force method.
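The energy-density hierarchy behind these choices can be put in numbers; the specific energies below are rough textbook figures of my own choosing, not values from the article:

```python
c = 2.998e8  # speed of light, m/s

# Approximate specific energies in J/kg (rough textbook figures):
e_chemical = 1.3e7          # H2/O2 combustion
e_fission = 8.2e13          # complete fission of U-235
e_fusion = 3.4e14           # D-T fusion
e_annihilation = c ** 2     # total matter-antimatter conversion, ~9e16 J/kg

print(f"fission / chemical:      ~{e_fission / e_chemical:.1e}")
print(f"fusion / chemical:       ~{e_fusion / e_chemical:.1e}")
print(f"annihilation / chemical: ~{e_annihilation / e_chemical:.1e}")
```

Nuclear fuels really do carry millions of times the chemical energy density, and annihilation a further factor of a few hundred beyond fusion.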

Fusion power and lasers

A more refined approach came in 1973 from the British Interplanetary Society (BIS), a private group of “spaceflight enthusiasts” founded in 1933. Its Project Daedalus explored the use of nuclear fusion in a reaction chamber to reach the stars within a human lifetime, though without any humans. Volunteer scientists and engineers designed an unmanned probe to examine Barnard’s Star, 5.9 light-years away, which supposedly had an orbiting planet (now known to be non-existent). The ship’s 53,000 tonnes were planned to consist mostly of deuterium and helium-3 fuel, with only 450 tonnes of payload; but at a speed of 0.12 c it would reach its target in 50 years.

Another approach, which amazingly needs no fuel at all, harks back to Johannes Kepler, who in 1619 correctly surmised that light deflects comets’ tails. Photons can push a sail to drive a spacecraft, as demonstrated in 2010 by the Japan Aerospace Exploration Agency (JAXA). After launch, its IKAROS spacecraft unfolded a 200 m² sail and was accelerated by sunlight. (IKAROS stands for Interplanetary Kite-craft Accelerated by Radiation of the Sun – a play on Greek mythology’s Icarus, son of Daedalus, who flew too near the Sun.) Though the power of the Sun drops off with distance squared, a tight laser or microwave beam could push a sail harder and longer. According to one estimate presented at 100YSS, a laser with terawatts of power could bring a craft to 0.13 c.
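A crude, non-relativistic estimate shows why terawatt beams come up: the photon-pressure force on a perfect reflector is 2P/c. The 1 TW beam and 1-tonne craft below are hypothetical round numbers of my own, not the parameters of the 100YSS estimate:

```python
c = 2.998e8  # speed of light, m/s

def sail_force(power_w, reflecting=True):
    """Radiation-pressure force: P/c for an absorber, 2P/c for a perfect reflector."""
    return (2.0 if reflecting else 1.0) * power_w / c

power = 1e12       # hypothetical 1 TW beam
mass = 1000.0      # hypothetical 1-tonne craft (sail plus payload)

a = sail_force(power) / mass           # acceleration, m/s^2
t_days = (0.13 * c / a) / 86400        # time to 0.13 c at constant thrust (non-relativistic)
print(f"acceleration ~{a:.1f} m/s^2, ~{t_days:.0f} days to 0.13 c")
```

Relativistic corrections, beam divergence and the Doppler shift of the reflected light would all stretch this out, but the order of magnitude is the point: with enough beam power, months rather than millennia.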

These methods are under further study. In 2009 members of BIS and the Tau Zero Foundation, another private group, initiated Project Icarus to update fusion propulsion as proposed in Project Daedalus. But decades of scientific effort have yet to yield fusion that actually produces a net amount of energy, though laser inertial confinement, now being tested at the National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory in California, looks promising. As it happens, NIF also shows that beamed propulsion could be feasible since its lasers are planned to deliver terawatts of power. However, NIF is stadium-sized and cost billions, so a purpose-built terawatt beam source would be a major undertaking.

Faster than light

Even if these methods can be developed fairly soon, enthusiasts with bigger dreams would like to go beyond speeds of around 0.1 c and even exceed c. But since faster-than-light (FTL) travel violates special relativity, this is where reasonably solid propulsion science becomes speculative or “exotic”, as it is tactfully called.

Science fiction has long used exotic methods such as “warp drive”, which enables FTL travel in Star Trek but in fact originated much earlier. In 1931 John W Campbell (later to exert major influence as editor of the magazine Astounding Science Fiction) used the concept of distorted space–time from general relativity to introduce FTL travel in his story “Islands of space”. Its heroes enclose their spaceship in a warped “hyperspace” that allows it to move astoundingly fast, reaching Alpha Centauri in a mere fifth of a second.

General relativity really does in principle offer ways to evade the speed limit. In 1994 it inspired an FTL approach by theoretical physicist Miguel Alcubierre at the University of Wales that resembles Campbell’s method. His idea was to contract space–time in front of a spaceship and expand space–time behind it, creating a bubble that propels the craft at any speed without violating special relativity. Although the mathematics is impeccable, this seductive idea requires negative mass, which does not exist as far as we know, let alone in the astronomical quantities needed for an actual drive.

This and other approaches were examined in NASA’s Breakthrough Propulsion Physics (BPP) programme, which ran from 1996 to 2002 and sought new ways to make interstellar travel feasible. In 2008 the BPP’s director Marc Millis concluded that “no breakthroughs appear imminent”. Three years later, the Alcubierre drive, cosmic wormholes, quantized inertia and other exotica received the same verdict at 100YSS: James Benford of Microwave Sciences, who chaired and summarized the propulsion sessions, characterized the speculative methods as currently being “a bridge too far”. (The same can be said of quantum entanglement, which was presented at 100YSS as a potential means of FTL communication – contrary to current scientific understanding.)

Healthy, happy humans

For the foreseeable future, it looks as though we will be stuck with speeds near 0.1 c at most, with protracted interstellar travel times. So to deal with the distinct possibility that starship crews would have to function onboard for decades or more, 100YSS included sessions about alternatives to cramming people into a steel box for long periods, and about building “generation” ships if that proves necessary. Alternatives include suspended animation, and unmanned craft that could report back or carry the DNA and other resources needed to recreate humans on arrival at an exoplanet.

But sending complete people, while keeping them healthy, sane and motivated in a closed and isolated world (radio traffic with Earth would be delayed by a year each way for every light-year the ship travels) raises lots of issues. Some of these have been foreseen in science fiction, as in Robert Heinlein’s cautionary tale Universe from 1941. As the book’s blurb puts it, “Their world was a giant spaceship, its purpose and destination lost in centuries of drifting among the stars.” To make conditions even more dire, the cover shows two male crew members apparently in good shape and with nicely combed hair – except that one of them sports two heads!

Exaggerated though this is, damaging radiation that could produce mutations is just one of the problems to be faced in a long-term artificial environment. Along with propulsion, these would make planning, building and crewing a long-haul starship the most complex scientific project ever. Sessions at 100YSS considered how to manage such an effort, dealing with questions including how to elicit the best technology and where to find funding. To kick off the project, DARPA favours the private route: it is offering $500,000 to develop a “non-governmental organization for persistent, long-term, private-sector investment into the myriad of disciplines needed to make long-distance space travel viable”.

Should we or shouldn’t we?

Despite all the science at 100YSS, building a starship was more than once compared with constructing a great medieval cathedral over many years. After all, there is a certain religious or spiritual dimension to the fundamental question: why seek the stars?

Indeed, some speakers at 100YSS saw great spiritual benefits to interstellar travel. They included Anousheh Ansari, a businesswoman and the first Iranian in space, who felt transformed after her experience in 2006 as a private space traveller, and Thomas Hoffmann, a protestant pastor from Tulsa, Oklahoma, who saw travel to the stars as carrying religious feeling into a new sacred space. Others spoke of a “moral imperative” to start anew by escaping an industrial civilization that has despoiled our planet, or of a back-up plan in case of global disaster. And always, there is the part of the human spirit that would speed off to explore the universe simply “because it is there”.

The romantic quest has a strong pull, but given the obstacles, it is fair to ask whether the interstellar dream is actually a bridge too far. Attendees at 100YSS were true believers, but does the rest of humanity share the dream? To put the project in context, what definite need or benefit would convince private investors or governments to provide vast sums to reach the stars, especially amid economic uncertainty?

Yet, like building a cathedral, building a starship could rally humanity to join together in a common cause. And like the late Steve Jobs, perhaps a true visionary could discern the yearnings of millions and give them what they want before they know they want it – a starship, instead of iPads. But the visionary would also need an accompanying effort that replaces “ad astra” with a motto from the US Navy Seabees, the construction battalions known for doing what needs to be done in record time: “The difficult we do at once; the impossible takes a bit longer.”

White dwarfs eaten in supernova flare-up

Two astronomers in the US have found a Type Ia supernova that did not leave behind a surviving companion star after it exploded. Instead, they conclude that the supernova was caused by two white-dwarf stars colliding and then both stars being consumed in the conflagration. Although the observation is of just a single supernova, the discovery adds weight to the theory that at least some Type Ia supernovae are the result of such collisions.

Type Ia supernovae are short-lived stellar explosions that are called “standard candles” because they all seem to give off the same amount of light. This means that astronomers can use these supernovae to measure distances in the universe – indeed, the accelerating expansion of the universe, which was the subject of last year’s Nobel Prize for Physics, was discovered using such supernovae.

Despite Type Ia supernovae being incredibly useful, astronomers are still in the dark about certain aspects of how these explosions occur. Most agree that the conflagration takes place when a white-dwarf star gains enough mass to exceed 1.4 times the mass of our Sun – the so-called Chandrasekhar limit. At this point, the inward force of gravity overcomes the outward pressure of the star, which collapses and then explodes.

What is not clear, however, is where that extra mass comes from. The question is important because, depending on its progenitor, a Type Ia supernova that happened billions of years ago might not necessarily have the same brightness as one occurring more recently. If standard candles are not really “standard”, it would cause a headache for astronomers.

Pair of white dwarfs?

For many years the favoured theory was that the white dwarf captures gas from a companion star until the explosion occurs. However, computer simulations revealed several problems with the physics of this scenario. This led to a rise in the popularity of another progenitor model – a pair of white dwarfs that orbit each other until they get close enough to collide and then explode.

An important difference between these two progenitors is that the two white dwarfs are expected to be consumed in the explosion, whereas a companion star should survive the supernova. As a result, astronomers have been studying the remnants of Type Ia supernovae to see if they can spot surviving companions. The problem, however, is that most remnants are either too far away or in crowded parts of the sky, making it very difficult to work out whether a companion exists or not.

Now, Bradley Schaefer and Ashley Pagnotta of Louisiana State University have found the perfect candidate for such a study. Using an image taken by the Hubble Space Telescope (HST), they analysed a remnant called SNR 0509-67.5 in the nearby Large Magellanic Cloud. Because some of the light from this supernova is still echoing off interstellar dust, astronomers know that it occurred about 400 years ago and that it was a Type Ia.

400 years of travel

The remnant itself is a bubble of gas expanding outwards from where the explosion took place. The bubble is not perfectly spherical, so Schaefer and Pagnotta combined the results of several different methods to determine the location of the supernova. Next, the pair worked out the maximum distance that a companion could have moved in the 400 years since the explosion took place. They did this by considering all the possible types of companion and then working out the maximum velocities that these stars would have after the explosion. Schaefer and Pagnotta also considered the effect that the explosion might have had on the brightness of the companion star – by blowing off some of its mass.
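The geometry of the search is simple arithmetic: a maximum runaway speed multiplied by 400 years gives a physical radius, which the distance to the Large Magellanic Cloud converts to an angle on the sky. The kick velocity below is a hypothetical placeholder, not the paper’s actual limit:

```python
PC_M = 3.086e16         # metres per parsec
YR_S = 3.156e7          # seconds per year

v_max = 1.0e6           # hypothetical max companion runaway speed, m/s (1000 km/s)
t_s = 400 * YR_S        # time since the explosion, in seconds
d_lmc_pc = 5.0e4        # distance to the LMC, ~50 kpc (assumed standard value)

r_pc = v_max * t_s / PC_M                   # maximum displacement, parsecs
theta_arcsec = (r_pc / d_lmc_pc) * 206_265  # small-angle conversion to arcseconds
print(f"search radius: ~{r_pc:.2f} pc, ~{theta_arcsec:.1f} arcsec")
```

Even a very fast runaway star would thus lie within a couple of arcseconds of the explosion centre, comfortably inside a single HST image.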

With these parameters in mind, they then looked very carefully at the HST image to see if there was any evidence of a companion star within the distance and brightness limits they calculated. None was found, not even at brightness levels several orders of magnitude below that expected for the dimmest of companions. This absence of a companion led Schaefer and Pagnotta to conclude that this particular Type Ia supernova could only have resulted from the collision of two white dwarfs.

Describing the certainty of the result, Schaefer told physicsworld.com that “There is no wriggle room – this is a decisive result.” While the observation suggests that two white dwarfs could indeed be the progenitor of some Type Ia supernovae, it does not entirely rule out companions other than white dwarfs. Indeed, Rubina Kotak of Queen’s University Belfast in Northern Ireland believes that while the latest work provides important information about the origins of Type Ia supernovae, there are probably a number of progenitor systems in nature.

Unfortunately, given that ideal systems such as supernova remnant SNR 0509-67.5 are expected to be few and far between, it is likely that the debate will continue to be informed by computer simulations rather than observations.

The study is described in Nature.

Three new maps shine light on dark matter

Three independent teams of astronomers have released new and improved maps of where dark matter is lurking in parts of the universe. All three groups have charted the mysterious substance by looking at how its presence distorts the images of distant galaxies as their light travels to Earth. As well as providing further insights into dark matter, the studies could provide crucial information about another mysterious substance – dark energy.

About 95% of the mass/energy content of the universe is believed to comprise dark matter and dark energy – two substances about which physicists know very little. Dark matter cannot be observed directly but is believed to make up about 23% of the mass/energy in the universe. Its existence has been inferred from the gravitational tug that it exerts on visible matter such as galaxies. Dark energy, which is also invisible, is thought to account for about 72% of the mass/energy and its existence is inferred from the accelerating expansion of the universe.

Gravitational lensing

One team has used data from the Canada–France–Hawaii Telescope (CFHT) to map the location of dark matter in four regions of the sky. The survey, known as CFHTLenS, includes about 10 million galaxies, which are all about six billion light-years away. As the light from these galaxies travels to Earth, it is affected by the gravitational field of the dark matter that it passes along the way – a phenomenon called gravitational lensing. This distorts both the shape of the galaxies and their relative orientations as we see them on Earth – deviations that can be used to map the density of dark matter.

The four patches of sky – each about 1° by 1° – were observed over a period of five years using the MegaCam camera on the CFHT. The images cover a much larger area of the universe than a previous map produced by the team, which spanned only 0.25° by 0.25°. The maps reveal that dark matter tends to clump around large clusters of galaxies – something that astronomers had expected but had been unable to confirm across such vast sections of the universe.

The team is now applying its analysis technique to data from the Very Large Telescope in Chile, which should result in much more of the sky being mapped. “Over the next three years we will image more than 10 times the area mapped by CFHTLenS, bringing us ever closer to our goal of understanding the mysterious dark side of the universe,” says team member Koen Kuijken of Leiden University in the Netherlands.

Shear brilliance

The other two dark-matter maps have been made by two independent groups, both of which claim to be the first to show that “cosmic shear” measurements can be unambiguously made by ground-based telescopes. Cosmic shear is a type of gravitational lensing that makes a distant object appear stretched – turning a circular image into an elliptical one, for example. By analysing the cosmic shear of images of distant galaxies collected over nine years by the Sloan Digital Sky Survey (SDSS), the teams were able to map the clumping of dark matter.
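The circle-to-ellipse picture can be made concrete with the standard weak-lensing linearization (the shear symbol γ and the distortion matrix below are textbook conventions, not taken from the article). A constant shear maps a circular source to an ellipse whose axis ratio encodes the shear directly:

```python
import numpy as np

gamma = 0.1  # shear amplitude along the x-axis (illustrative value)

# Weak lensing maps source-plane positions to image-plane positions via a
# linear distortion matrix. Ignoring convergence (isotropic magnification),
# a pure shear along x stretches one axis and squeezes the other:
A = np.array([[1 + gamma, 0.0],
              [0.0, 1 - gamma]])

# Apply the distortion to points on a unit circle.
theta = np.linspace(0.0, 2 * np.pi, 1000)
circle = np.vstack([np.cos(theta), np.sin(theta)])
image = A @ circle

# The circle becomes an ellipse with semi-axes (1+gamma) and (1-gamma),
# so the measured ellipticity recovers the shear.
a = image[0].max()   # semi-major axis
b = image[1].max()   # semi-minor axis
ellipticity = (a - b) / (a + b)
print(f"axis ratio = {b / a:.3f}, ellipticity = {ellipticity:.3f}")
```

For this pure-shear case the ellipticity (a − b)/(a + b) equals γ exactly, which is why measured galaxy shapes are a direct probe of the lensing field.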

The teams – one largely based at Fermilab and the other at the Lawrence Berkeley National Laboratory (LBNL) – have been able to improve their measurements by combining multiple snapshots of the same parts of the sky taken in the period 2000–2009. Known as “co-addition”, this process helps to reduce the effects of atmospheric distortion on the shear measurements and enhance the strength of signals from very distant and very faint galaxies.
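The benefit of co-addition can be sketched numerically (a toy model, not the SDSS reduction; the blob, noise level and frame count are made up). Averaging N registered exposures of the same field leaves the signal unchanged while uncorrelated noise shrinks as 1/√N:

```python
import numpy as np

rng = np.random.default_rng(1)

# A faint "galaxy": a small Gaussian blob with peak brightness 0.5,
# buried under per-exposure noise of standard deviation 1.0.
y, x = np.mgrid[0:32, 0:32]
galaxy = 0.5 * np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / (2 * 2.0 ** 2))

def snapshot():
    """One noisy exposure of the same patch of sky."""
    return galaxy + rng.normal(0.0, 1.0, galaxy.shape)

# Co-addition: average many registered exposures of the same field.
single = snapshot()
coadd = np.mean([snapshot() for _ in range(100)], axis=0)

# Noise level estimated from an empty corner of each image.
print(f"single-exposure noise ~ {single[:8, :8].std():.2f}")
print(f"100-frame co-add noise ~ {coadd[:8, :8].std():.2f}")
```

The 100-frame average cuts the noise by about a factor of 10, turning an invisible blob into a clear detection – the same principle that lets the SDSS stacks reach very faint, distant galaxies.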

The resulting dark-matter maps could be used to gain further insights into dark energy because dark energy should have an important effect on how dark matter is distributed in the universe – in particular how it tends to clump together.

“The community has been building towards cosmic-shear measurements for a number of years now,” says Eric Huff, who is a member of the LBNL team. “But there has also been some scepticism as to whether the measurements can be done accurately enough to constrain dark energy. Showing that we can achieve the required accuracy with these pathfinding studies is important for the next generation of large surveys.”

Introducing the ‘nano-ear’

Physicists in Germany have developed the first-ever “nano-ear” capable of detecting sound on microscopic length scales with an estimated sensitivity that is six orders of magnitude below the threshold of human hearing. The device is based on an optically trapped gold nanoparticle, and its inventors claim that it could be used to “listen” to biological micro-organisms as well as investigate the motion and vibrations in tiny machines.

Particles can be trapped in “optical tweezers”, which are formed when laser light is focused at a point in space. An electric dipole moment is induced in the particle and it is drawn to the most intense part of the laser’s electric field. The technique was discovered in the 1980s and is used routinely in research labs around the world. It is particularly useful for manipulating biological objects, since the optical field used to make the trap is non-destructive.
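Near its centre an optical trap behaves like a tiny spring, and its stiffness is routinely calibrated from the particle's thermal jitter via the equipartition theorem, ⟨x²⟩ = kBT/k (a standard tweezers technique; the stiffness value below is illustrative, not from the article):

```python
import numpy as np

rng = np.random.default_rng(2)

kB = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0           # room temperature, K

# Illustrative trap stiffness for a weak optical trap (assumed value).
k_true = 1e-6       # N/m

# In thermal equilibrium the trapped particle's position fluctuations are
# Gaussian with variance <x^2> = kB*T/k, so recording the particle's
# positions lets you calibrate the trap stiffness.
sigma = np.sqrt(kB * T / k_true)          # ~64 nm for these numbers
positions = rng.normal(0.0, sigma, 100_000)

k_est = kB * T / np.mean(positions ** 2)
print(f"calibrated stiffness = {k_est:.2e} N/m")
```

Once the stiffness is known, any extra displacement of the particle – such as that driven by an incoming sound wave – can be converted into a force.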

Now, a team led by Jochen Feldmann and Andrey Lutich at the Ludwig-Maximilians University in Munich has shown that a particle inside an optical trap can also be used as an extremely sensitive and minuscule sound detector. The researchers have found that the trapped particle can be made to move from its equilibrium position by vibrations from nearby sound waves. The frequency of the sound can then be calculated by analysing how much the particle has been displaced.

Sound sources

The team’s set-up consists of two sound sources placed in a water-based medium. The first “loud” source is a tungsten needle glued on a loudspeaker that vibrates at a frequency of 300 Hz. The second, weaker source is made up of bunches of gold nanoparticles that are periodically heated by a second laser to create sound waves at a frequency of 20 Hz. The nano-ear is a 60 nm gold nanoparticle trapped in a 808 nm wavelength laser beam.

When either of the sound sources is turned on, the ensuing vibrations cause the trapped particle to move in the same direction as the propagating sound waves.

The researchers used a video camera to track the motion of the trapped particle. They then tested how sensitive their nano-ear was by analysing the recorded trajectories of the particle. The result is a frequency spectrum of the sound sources superimposed on the frequency spectrum of the trapped particle’s Brownian motion.
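The peak-finding step can be sketched with a toy trajectory (a minimal illustration, not the team's analysis; the sampling rate, amplitudes and white-noise stand-in for the Brownian background are all assumptions). A Fourier transform spreads random jitter over all frequencies while the driven motion piles up in a single bin:

```python
import numpy as np

rng = np.random.default_rng(3)

fs = 1000.0                     # tracking rate, frames per second (assumed)
t = np.arange(0, 10, 1 / fs)    # 10 s of tracked positions

# Simulated trajectory: a weak 300 Hz oscillation driven by the sound
# source, buried in much larger random jitter (a stand-in for the
# particle's Brownian background).
f_source = 300.0
trajectory = 0.1 * np.sin(2 * np.pi * f_source * t) + rng.normal(0.0, 1.0, t.size)

# Fourier-transform the trajectory: the broadband noise spreads over all
# frequencies, while the coherent driven motion piles up in one bin.
spectrum = np.abs(np.fft.rfft(trajectory))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

f_peak = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(f"detected source frequency: {f_peak:.0f} Hz")
```

Even though the oscillation is ten times smaller than the jitter at any instant, ten seconds of data concentrate it into a single spectral peak that stands well clear of the noise floor.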

Ultrasensitive sound detector

The spectra reveal a clear, superimposed single peak at the frequency of the sound source. Further analysis reveals that the nano-ear can detect vibrations at a power level as low as –60 dB, which is six orders of magnitude lower than the threshold of a human ear.
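The link between −60 dB and "six orders of magnitude" is just the decibel definition for power ratios, which a one-line calculation makes explicit:

```python
# Decibels compare powers on a logarithmic scale: ratio = 10**(dB / 10).
# A sound level of -60 dB relative to the human hearing threshold is
# therefore a power ratio of 10**(-6): six orders of magnitude quieter.
def db_to_power_ratio(db):
    return 10 ** (db / 10)

ratio = db_to_power_ratio(-60)
print(ratio)
```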

According to the team, the device could be used to analyse the sounds made by live micro-organisms, such as bacteria and viruses. It might also be used to investigate artificial micro-objects that produce acoustic vibrations but that cannot be directly visualized in an optical microscope because of strong light absorption or scattering.

“We might even be able to develop a new type of ‘acoustic microscopy’ as it is possible to bring very sensitive sound sensors in the close vicinity of microscopic samples,” team member Alexander Ohlinger says.

The work is described in Physical Review Letters.

Help us to improve physicsworld.com

By Matin Durrani

We like to think you enjoy what we do here at physicsworld.com, whether it’s our daily news, our blog, in-depth features or the videos, online lectures and podcasts.

But now’s your chance to tell us what you really think by taking part in our readers’ survey.

It won’t take more than a few minutes and for every entry we’ll make a donation to IOP for Africa, which aims to improve physics education in some of the poorest countries in the world.

Take the survey now and help us to improve physicsworld.com.

The deadline for comments is 31 January 2012.

Rolling microcapsules repair damaged surfaces

Researchers in the US have unveiled a new technique to repair nanometre-sized defects using oil-based microcapsules filled with a nanoparticle solution. The microcapsules roll or glide over a surface and stop to repair any cracks or imperfections that they encounter by releasing their nanoparticle cargo into them. They then move on to the next defect. The technique could have numerous practical applications in industry and research because it avoids the need to coat an entire surface when only a small fraction of it has been damaged. It might also be used as a precise way to detect damaged substrates by depositing sensor material into the regions of concern.

The technique was developed by two teams working in collaboration – one led by Todd Emrick at the University of Massachusetts and the other by Anna Balazs at the University of Pittsburgh. It was inspired by computer-modelling work done by Balazs. “Using computer simulations, she predicted that if nanoparticles were held in a certain type of microcapsule, they could probe a surface and release the nanoparticles into certain specific regions,” explains Emrick. “This concept applies particularly well to damaged surfaces, where the defective regions typically possess characteristics that are very different to the undamaged part of the sample – in terms of their topography, wetting properties, roughness and chemical functionality.”

Such a “repair-and-go” approach is inspired by naturally occurring biological mechanisms in the body: leukocytes, for example, probe, identify and heal wounded or diseased tissue. Encapsulation has also found use in medicine – cancer drugs are routinely encapsulated to ensure that they preferentially permeate into “leaky” cancer tissue rather than the healthy tissue surrounding it, says Emrick.

Rolling capsules

Using a polymer surfactant that stabilizes oil droplets in water, the researchers encapsulated cadmium–selenide nanoparticles in such a way that the particles can be released when desired. This release is possible because the walls of the capsules are very thin – about the same thickness as the diameter of the nanoparticles themselves. The researchers then found that the capsules roll or glide over damaged substrates and selectively deposit their nanoparticle contents into the damaged or cracked regions, thanks to hydrophobic–hydrophobic interactions between the nanoparticles and the cracked surface. Cadmium selenide is fluorescent, which means that the nanoparticles can easily be tracked using an optical microscope.

“Our research could have numerous practical applications,” says Emrick. “For one, it could help massively lower the amount of material required when repairing a damaged object or sample, thus avoiding the need to coat an entire surface when only a very small fraction of it is damaged.”

The technique could also be exploited as a precise method for detecting damaged substrates, by depositing sensor material into the regions of concern, he adds.

“Looking forward, this rapid and efficient coating mechanism might come in useful for repairing a wide range of objects – from aeroplane wings to microelectronic components and biological implants,” says Emrick. “Having realized the concept experimentally, we now plan to demonstrate how the mechanical properties of coated objects can be recovered by adjusting the composition of the nanoparticles being delivered.”

The work will be described in Nature Nanotechnology.

Feast of physics on BBC radio

By Hamish Johnston

Physics lovers in the UK were enthralled this morning as two of the nation’s greatest physicists – Stephen Hawking and Isaac Newton – were featured on BBC Radio 4.

First up was Hawking, who answered five of the many questions submitted by listeners of the Today programme in honour of the cosmologist’s 70th birthday.

Questions that Hawking chose to answer included those on the origins of the universe, faster-than-light neutrinos and the colonization of space. You can listen to his responses here.

Newton featured on Radio 4 this morning in the final instalment of a series on the history of the written word presented by another national treasure, Melvyn Bragg. In today’s episode Bragg explores the role that writing has played in the development of science. Indeed, the programme argues that science emerged shortly after writing itself, as astronomers in ancient Mesopotamia began to record the positions of stars with the aim of predicting stellar positions in the future.

About halfway through the programme, Bragg travels to the library of the University of Cambridge to look at the student notebooks of Isaac Newton. One book contains a graphic description of how Newton pushed a wooden needle into his eye socket and recorded what happened when the needle distorted the shape of the back of his eye – that’s got to hurt!

You can watch a slideshow about Bragg’s series on the written word here.

Copyright © 2026 by IOP Publishing Ltd and individual contributors