A recent Guardian article in its “Improbable Research” series drew my attention to the Journal of Special Topics, produced by undergraduate physics students at the University of Leicester.
The Guardian article focused on a paper discussing the feasibility of playing football on Mars and how gravitational and environmental differences would affect the game. Not being a big football fan myself, I was more amused by the mention of another paper, “Determining the smallest migratory bird native to Britain able to carry a coconut”. Fans of Monty Python and the Holy Grail will remember King Arthur suggesting that the coconuts in his possession could have been carried by a bird from the tropics, and the ensuing debate about which migratory bird actually could carry said coconut. The authors found that the only bird that fits the bill (sort of) is the white stork.
Upon digging around in the journal’s archive, I found a few other gems…
If you enjoyed the film Up, you might want to peruse the paper that asked how many helium balloons would actually be required to lift a small wooden house – like the one in the film – as well as a common brick house found in the UK. The authors found that it would take almost 10 million helium balloons to lift the small wooden house and 400 million helium balloons to lift a typical UK house! They note the drawbacks of this particular method of relocation, though, concluding that the balloons would “deflate very quickly at high altitudes” and that the “foundations and drainage of the house would be removed, making the structure very unstable, if by some miracle the journey is possible.” Pity, balloons would make moving so much easier!
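For anyone who wants to check that the balloon numbers are at least the right order of magnitude, here is a minimal buoyancy estimate. The balloon volume and house masses below are my own illustrative assumptions, not figures taken from the paper:

```python
# Back-of-envelope estimate of how many helium party balloons are needed
# to lift a house. All figures are illustrative assumptions, not the
# values used by the Journal of Special Topics authors.

RHO_AIR = 1.2   # density of air at sea level, kg/m^3
RHO_HE = 0.18   # density of helium, kg/m^3

def balloons_needed(house_mass_kg, balloon_volume_m3=0.014):
    """Number of balloons whose net buoyancy equals the house's weight.

    Net lift per balloon (ignoring the mass of the balloon skin) is
    V * (rho_air - rho_helium) kilograms-force.
    """
    lift_per_balloon = balloon_volume_m3 * (RHO_AIR - RHO_HE)  # kg lifted
    return house_mass_kg / lift_per_balloon

# Assumed masses: ~100 t for a small wooden house, ~5000 t for a brick one
print(f"Wooden house: {balloons_needed(1e5):.2e} balloons")
print(f"Brick house:  {balloons_needed(5e6):.2e} balloons")
```

With these guesses the answers come out at millions and hundreds of millions of balloons respectively, comfortably in the same ballpark as the paper’s conclusions.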
Fans of the American sitcom The Big Bang Theory may recall a character attempting to “see how long it takes a 500 kW oxygen-iodine laser to heat up my Cup-A-Noodles” and later claiming that the necessary time is two seconds. In the paper “The Pot Noodle Proposal”, the authors conclude that, while it is possible to use the laser to heat the noodles, “the heat transfer process is very ineffective” because the pot would melt long before the noodles heated up. So physics labs will have to hold on to their microwave ovens!
Another paper calculates just how fast an average ballpoint pen can write on paper, and how temperature, and therefore geographical location, affects the speed achieved. The paper, titled “How fast can a pen write”, found that the speed is approximately 153 m/s at room temperature, with maximum speeds of 181 m/s and 192 m/s at Sahara-like and Siberia-like temperatures respectively. While the authors acknowledge that these speeds could not be humanly achieved, they point out that the ink viscosity, determined by the temperature, is the “bottom line”.
The last paper I will mention is one that had me in splits. “How radioactive is a banana?” looks at just how harmful the radioactivity of a banana can be. But wait a minute, you say, bananas are not radioactive, are they? Well, they are, a little: bananas contain potassium, which has a naturally occurring radioactive isotope, potassium-40. Luckily, all you banana lovers can rest easy, as the authors found that “a person would have to consume more than 37 billion bananas to cause any risk of death” from radioactivity alone. Also, “even surrounded by bananas, it would take over a billion to cause any harm”. So there is no need to stop gulping down that banana smoothie in the morning just yet.
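For the curious, a crude version of this sum can be done with the popular “banana equivalent dose”. The dose figures below are rough, commonly quoted values rather than the paper’s own (the paper’s far larger answer presumably accounts for the body regulating its potassium content, which this sketch does not):

```python
# Crude sanity check on banana radioactivity, using the popular
# "banana equivalent dose". Figures are rough, commonly quoted values,
# not those used in the Journal of Special Topics paper.

DOSE_PER_BANANA_SV = 0.1e-6   # ~0.1 microsievert per banana eaten
LETHAL_DOSE_SV = 5.0          # acute whole-body dose that is often fatal

bananas_for_lethal_dose = LETHAL_DOSE_SV / DOSE_PER_BANANA_SV
print(f"{bananas_for_lethal_dose:.0e} bananas")  # tens of millions
```

Even this pessimistic estimate, which wrongly assumes every banana’s dose accumulates, still demands tens of millions of bananas, so the smoothie really is safe.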
Papers like these are probably being penned by future Ig Nobel prize winners. Take a look at these and the journal’s other offerings; they are sure to make you laugh and think!
The recent Fukushima disaster has raised renewed concerns about the safety of nuclear power, with many countries calling for stricter checks in nuclear power plants. Now, researchers at the Berkeley Lab, the University of California at Berkeley and the Los Alamos National Laboratory, all in the US, have developed a new nanoscale testing technique for irradiated materials that provides information on the strength of these materials on much larger length scales. The method could greatly reduce the amount of material required in tests and also help in the design of new, improved materials for nuclear applications.
Nuclear power came to the fore in the late 1950s and 1960s, and many nuclear power stations were built around the world at that time. Nuclear power currently generates about 20% of the world’s electricity, including 50% in western Europe and as much as 80% in France. Beyond power generation, many nuclear facilities are also used for research or for producing medical radioisotopes for use in hospitals and clinics.
The structural materials used in these facilities are regularly tested to ensure safe operation but engineers are always on the lookout for simpler, less invasive tests that cause minimal disruption. The new technique, devised by Andrew Minor and Alex Zettl’s teams in California, can be performed on samples as small as 400 nm and yields real mechanical strength data on the macroscale for the material being tested. This is a first because, previously, nanoscale mechanical tests always seemed to give higher strengths than the macroscale bulk values, explains Minor.
Deforming atomic planes
The researchers obtained their results by performing compression tests (using a flat diamond punch) on copper samples that had been irradiated with high-energy protons at 1.1 MeV. These are the sorts of energies experienced by real working materials employed in nuclear power plants. The tests were designed to model how damage from radiation affects the mechanical properties of the metal. Using an in situ mechanical testing device in a transmission electron microscope, the team was able to observe the material deforming on the nanoscale, and found that the deformation was limited to just a few atomic planes.
Before the compression tests, the samples contained a high density of small 3D defects, which is typical of irradiated material. After compression, the researchers saw short 1D dislocation segments as well as some of the original 3D defects.
The 3D defects in the copper can block the motion of the 1D dislocations, something that makes the material more brittle and susceptible to fracture at lower loads. The team confirmed this by performing load tests on irradiated and non-irradiated samples and comparing the two.
Translating to the bulk
Peter Hosemann of UC Berkeley says that, by translating the nanoscale strength values obtained into bulk properties, the technique could help reactor designers find better materials for use as engineering components in nuclear plants. “And by using a smaller specimen for the tests, we limit any safety issues related to the handling of the test material,” he said. “We could potentially measure the exact properties of a material used in, say, a 40-year-old nuclear facility to make sure this structure lasts well into the future.”
Understanding the effect of defects on the mechanical properties of nuclear-reactor materials will be crucial for designing materials that are more resistant to radiation damage, and could lead to more advanced and safer nuclear technologies, added Minor.
The work was reported in Nature Materials doi:10.1038/nmat3055.
How do they do it? Why giraffes do not suffer an aneurysm every time they bend down to drink is one of the animal kingdom’s lingering physics puzzles. (Courtesy: iStockphoto.com/josejuanfotos)
Wildlife biology for physicists
In engineering terms, animals are a mature technology. They are well adapted for the ecological niches they fill and, for the most part, they do a rather good job of surviving in an unforgiving world. But what really makes them tick? This is the central question addressed in Engineering Animals, a remarkable book by two authors, Mark Denny and Alan McFadzean, who trained as physicists before pursuing careers as engineers, and who have spent the past two years immersing themselves in biology.

Appropriately, the book begins with a thermodynamics-inspired discussion of how animals use energy. Readers familiar with the second law will probably not be surprised to learn that the “food chain” linking prey species and predators is hugely inefficient, with each prey organism providing only 10% of its energy to whatever eats it. This simple observation, however, has some interesting consequences. Among other things, it underlies certain patterns in animal behaviour, including the fact that small carnivores (such as cats) eat smaller animals (mice), whereas large carnivores (such as lions) eat things that are roughly their own size (wildebeest).

Energy considerations also feed into rules about animal size and shape, including “Allen’s rule” that animals living in cold climates are rounder than those in warm ones, because organisms with high volume-to-surface-area ratios retain body heat better. Allen’s rule was articulated in the 19th century, but the book also ventures into current debates in biology, such as arguments about why giraffes do not suffer aneurysms when they lower their heads to drink, whether systems that distribute resources around an animal’s body are fractal-like, or why migrating birds fly in a V-shape (simple energy savings do not quite cover it, apparently).
Written in a light and engaging style, but with plenty of references and footnotes, Engineering Animals is perfect for physicists who, like your reviewer, abandoned formal studies in biology at an early age and have always wondered what they missed.
2011 Harvard University Press £25.95/$35.00hb 400pp
How humans work
From the physics of animals in general, we now move to the physics of one particular animal: humans. Unlike the previous book, Physics of the Human Body is intended for medical students and professionals rather than physicists.

But although most of the physics concepts in the book are familiar, a great many of the examples used to illustrate them are not. The section on forces and torques, for instance, eschews abstract rods and levers in favour of the human musculoskeletal structure. This leads nicely into a discussion about how much weight an average person can lift without injuring themselves – a topic that is certainly of practical interest to many experimental physicists, even if the physics of it is not particularly interesting. This pattern of introducing a physics concept, then concentrating on its medical applications, continues through most of the book. The coverage of pressure, for example, places a strong emphasis on how pressure-related ideas play out in the human circulatory system, as well as in organs such as the lungs, eye, brain and bladder.

The exception to the rule of “physics first, medicine later” occurs in the final chapter, where author Richard McCall instead uses a medical concept – drug delivery and absorption – to illustrate how physicists model complex problems. McCall has obviously worked hard to make the physics interesting and relevant for medically minded readers, and as a physics lecturer at the St Louis College of Pharmacy in Missouri, US, he has had plenty of opportunities to field-test this approach. Anyone who teaches similar students – or who simply wants to vary the examples they use in introductory physics courses – should look at his book.
2010 Johns Hopkins University Press £23.50/$45.00pb 312pp
The physics of va-va-vroom
Fast Car Physics is not a book for automotive novices. If you do not know the difference between a dyno torque curve and a g-g diagram, you will find some of its chapters hard-going. If you’ve never heard of either, you had best steer clear altogether.

Fortunately for the book’s publishers, the overlap region in a Venn diagram of “people who like physics” and “people who like cars” is large. Moreover, readers who enjoy debating the relative merits of the Subaru WRX STi and the Nissan 350Z – and then plotting graphs to prove their points – will definitely find a kindred spirit in author Chuck Edmondson. A physicist at the US Naval Academy, Edmondson’s other passion is car racing, and he has extensive experience in combining the two: not only has he taught a course in automotive physics, he has also raced in an amateur team with his son and daughter.

His book is pretty comprehensive, taking in everything from the factors that restrict 0 to 60 mph times to the materials science of Formula 1 tyres, plus a meaty final chapter on “green racing”. As Edmondson points out, oil shortages and growing concerns about the environment have not reduced interest in motor sports. The challenge for a green-minded homo automotives, then, is to find a way of combining speed and environmental friendliness. None of the possible solutions explored in this chapter (electric cars, hybrids, alternative fuels, etc) seem to hold all the answers, but it is encouraging to know that some racers are at least thinking about the problem.
2011 Johns Hopkins University Press £15.50/$29.95sb 248pp
Antony Schinckel (centre right) and Barry Turner (centre left) with a finished ASKAP dish
By Matin Durrani in Boolardy, Australia
Our eight-seater plane landed safely on the sandy red airstrip at the remote Boolardy homestead deep inside Western Australia.
There to meet us was Barry Turner, site manager for the Murchison Radio-astronomy Observatory, which is currently building two key astronomy facilities here in the Australian outback – the Murchison Widefield Array and the Australian Square Kilometre Array Pathfinder (ASKAP) project.
ASKAP will, when complete next February, consist of 36 parabolic antennae, of which six have been built so far. Apart from being 10 times more powerful than any existing radiotelescope in the world, ASKAP is also designed to show that Boolardy is a suitable location for Australia’s bid for the Square Kilometre Array – an even larger radiotelescope array that will, when complete, consist of some 3000 dishes.
ASKAP astronomers, including project director Antony Schinckel, naturally think their site is a worthy location for SKA, although they are at pains not to discuss or comment on the rival SKA bid from various nations in southern Africa.
After lunch and the obligatory safety briefing, which included warnings about possible venomous snakes, Schinckel drove us off by 4×4 van to the telescope site, which was bathed in pleasantly warm winter sunshine. (Summers, in contrast, can easily rocket above the 40 °C mark.)
On site were various construction workers digging roads and levelling the site, but it was also interesting to see staff from the Chinese Electrical Technology Corporation, which is making the ASKAP dishes in China, shipping them to Australia and then building them here in the outback.
Their presence explains the Chinese menus in the Boolardy lunchroom, although quite what they make of Australia’s local delicacy – Vegemite – I am not sure.
The six completed dishes consist of a supporting structure that will house electronics cables, topped by a steerable 12 m-diameter dish. So smooth are the dishes that they are no more than 0.6 mm out from a perfect parabolic shape.
Barry and Antony gave us a detailed run-down of the dishes but, with the Australian sun quickly setting, we could not hang around for too long: by nightfall the unlit landing strip would be so dark that taking off would be impossible.
And soon we were soaring above the ASKAP site for the 90-minute flight back to Perth. A fitting and illuminating end to my week-long trip to Australia.
Schinckel joined us on the flight. While my fellow European science journalists and I head back to Europe, for Schinckel the work goes on. He and numerous other members of Australia’s SKA bid team are flying to Banff in Canada next week for a high-level discussion meeting where their bid – and that from southern Africa – will be evaluated.
I get the feeling the Australian team is pretty confident of winning the SKA bid but, as I said earlier, they resolutely refuse to be drawn on the matter. We shall see who wins when the final decision is announced on 29 February next year.
If you remember your Greek mythology, you will recall that Athena, daughter of Zeus and goddess of battle and wisdom, was an extremely potent being. But even she sometimes had to behave prudently. So, in the Iliad, when she intervened on the Greek side in the Trojan War, she donned the Cap of Invisibility to conceal herself from her combative half-brother, the god Ares, who favoured the Trojans.
The seductive power of invisibility to conceal what needs to be hidden is an old fantasy, but this dream is now becoming real through technology. Novel methods are being developed to control visible light and electromagnetic waves in general – for instance, the use of artificial structures called metamaterials. The resulting “invisibility science” offers the possibility of true invisibility under visible light, as has already been nearly achieved for radar wavelengths using stealth technology. But it has also led to some unexpected outcomes such as vastly improved optical lenses, while related research offers further possible applications in controlling seismic, sound and ocean waves.
For good or evil
Despite the hi-tech nature of today’s invisibility science, it is remarkable that speculative writing and fantasy have foreseen some of its methods, even if the new science brings its own surprises. Fantasy also shows that invisibility can be a bad thing, as was understood long ago. In Plato’s Republic, written in around 380 BC, Glaucon tells how the shepherd Gyges, made invisible by a magic ring, seduces the king’s wife and murders the king – illustrating that a fear of being discovered and punished is the basis of moral behaviour. If people could turn invisible, says Glaucon, “No man would keep his hands off what was not his own…[a man could] go into houses and lie with any one at his pleasure, or kill…whom he would, and in all respects be like a god among men.”
That tinge of wickedness appears in later tales as well. Millennia after the story of Gyges, in Richard Wagner’s 19th-century The Ring of the Nibelung, the grotesque dwarf Alberich dreams of world domination using a ring of power fashioned from Rhine gold along with the Tarnhelm, a magical helmet of invisibility. This is echoed in J R R Tolkien’s 20th-century classic The Lord of the Rings when Gollum, another grotesque creature, is corrupted by a magic ring of power that bestows invisibility. One exception to the dark side, however, is the young wizard Harry Potter, who makes virtuous use of the cloak of invisibility he receives in the first book in J K Rowling’s series, Harry Potter and the Philosopher’s Stone (1997).
From myth… Left: a scene from Wagner’s The Ring of the Nibelung in which Alberich has just vanished, to the horror of his brother Mime, left behind. Right: still from Memoirs of an Invisible Man (1992) starring Chevy Chase and directed by John Carpenter. (Courtesy: Warner Bros/Canal Plus/Regency/Alcor/The Kobal Collection)
You might think that fictional invisibility produced by “science”, not magic, is less harmful, but that is not necessarily the case. In the late 19th century scientific invisibility became a literary theme in The Crystal Man (1881) by Edward Page Mitchell, The Invisible Man (1897) by H G Wells and The Shadow and the Flash (1903) by Jack London. Researchers in these stories anticipate great benefits as they seek invisibility in the laboratory. Eventually, however, delusions of grandeur and unforeseen consequences bring each of them to a bad end. On the other hand, the “cloaking device” introduced in Star Trek’s first television season (1966) does not provide personal invisibility, but is the ultimate camouflage for the Warbird spaceships of the Romulan Empire.
These fictional methods all make some scientific sense. Griffin, Wells’ Invisible Man, says “Either a body absorbs light or it reflects or refracts it, or does all these things. If it neither reflects nor refracts nor absorbs light, it cannot of itself be visible.” Scientific methods make Griffin, and Stephen Flack of The Crystal Man, transparent with a bodily refractive index matching that of air. Both of them become “in the air like a jellyfish in the water. Almost perfectly transparent…”, as Flack describes it. In a different approach, chemist Lloyd Inwood of The Shadow and the Flash, who is the dark Shadow in opposition to the invisibly transparent Flash, is coated with a pigment that absorbs all light, and becomes invisible except for casting a shadow and obscuring objects behind him.
In Star Trek, however, the Romulan cloaking device relies on a different principle, which is actually the same one that is used for the deflector shields that protect spacecraft such as the USS Enterprise from enemy phaser beams. The shields use gravitons – the hypothetical elementary particles that carry gravity – which, in Einstein’s general relativity, arise from the shape of space–time. This suggests that the shields and cloaking devices both work by distorting space–time to divert phaser and light beams around a spacecraft. A similar approach works in the real world, but not by applying general relativity. Rather, light is made to curve around an object by means of artificial metamaterial structures.
Under the radar
Although metamaterials are products of advanced technology, the roots of invisibility have a simpler origin: when armies abandoned gaudy uniforms such as those that the British Redcoats wore. For example, British troops in India adopted the earth-tone khaki in 1848. Later that century the US artist Abbott Handerson Thayer became the “father of camouflage” when he distinguished two strategies for protective colouration in animals: blending, where the subject is indistinguishable from the background; and disruption, where “strong arbitrary patterns of colour” break up the subject’s outline. Thayer’s ideas came into their own in the First World War and camouflage has been used in every conflict since, from uniforms that merge soldiers into the background to bold “dazzle” patterns that made warships more difficult to target. But it took modern air warfare to inspire something nearer to true invisibility, though it works under illumination by radar rather than visible light.
This “stealth” technology for warplanes is possible because a radar installation works like a lighthouse, except that it sweeps around a beam of short-wavelength radio waves. Encountering an aircraft, the beam is partly absorbed and partly reflected or scattered in various directions. What returns to the origin is detected and analysed to locate the target. Warplanes are innately visible to radar because of their reflective metal construction and protruding parts. Indeed, the US Air Force B-52 Stratofortress bomber, which was used during the Cold War, had such a huge radar cross-section of 125 m2 that it was picturesquely described as the “side of a barn”. As shown dramatically in the film Dr. Strangelove (1964), the classic dark comedy about nuclear war, a B-52 would have had to fly dangerously low to avoid enemy radar.
An aircraft’s radar image can, however, be reduced by shaping the craft to minimize scattering back along the incoming beam. Scattering calculations have been carried out since James Clerk Maxwell derived his seminal electromagnetic equations in the mid 19th century, but it was not until the 1960s that the Soviet scientist Pyotr Ufimtsev developed methods to determine scattering from complex geometries.
In 1975 engineers at Lockheed Aircraft’s Advanced Development Projects used this approach to design the F-117 Nighthawk stealth fighter. First flown in 1981, it had a strange, sharply faceted shape like a Cubist painting or a cut diamond. The design produced a minute radar cross-section of 0.02 m2, but seemed so utterly non-aerodynamic that the prototype was called the Hopeless Diamond. The F-117 proved inherently unstable in flight and was prevented from crashing only by using computers to constantly adjust its control surfaces to counteract destabilizing forces. This “fly-by-wire” approach is still essential for newer stealth aircraft such as the F-22 Raptor, first flown in 2005.
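The pay-off of such a tiny cross-section follows from the standard radar range equation, in which the maximum detection range scales as the fourth root of the cross-section σ. A quick sketch, using the cross-sections quoted above for the B-52 and the F-117:

```python
# In the standard radar range equation the maximum detection range
# scales as sigma**0.25, so even a huge reduction in radar
# cross-section sigma buys a more modest (but decisive) reduction
# in the range at which a plane can be spotted.

def range_ratio(sigma_small, sigma_large):
    """Factor by which detection range shrinks going from large to small sigma."""
    return (sigma_large / sigma_small) ** 0.25

# B-52 (~125 m^2) versus F-117 (~0.02 m^2)
print(f"Detection range shrinks by a factor of ~{range_ratio(0.02, 125):.1f}")
```

The fourth-root scaling is why the cross-section has to fall by a factor of thousands before the detection range drops by a mere factor of about nine; it also explains the relentless drive towards insect-sized radar signatures.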
…to reality Left: the F-117 Nighthawk, the original stealth plane. Right: Susumu Tachi’s “optical camouflage” jacket. (Courtesy: Susumu Tachi)
The F-117’s tiny cross-section also owes something to absorptive invisibility, such as in The Shadow and the Flash: stealth aircraft are coated with radar-absorbing material to reduce scattering. Unfortunately, the absorbed energy slightly heats the aircraft, thus increasing its infrared emission and making it more vulnerable to missiles – a reminder that invisibility in one part of the spectrum is no guarantee of invisibility at other wavelengths. Still, stealth technology is a great success. Exact figures are hard to come by, but it is said that the latest stealth aircraft have radar cross-sections ranging in size from a golf ball to a large insect, making them hard to distinguish from “blips” generated by small natural sources or electronic noise. This technology continues to develop, as shown in news reports that stealth helicopters, a previously unknown part of the US military arsenal, were used in the recent raid on Osama bin Laden’s compound in Pakistan.
Going retro
Another approach from those early stories – perfect transparency – seems unlikely to ever be achieved for people and things. But in 2003 a team led by Susumu Tachi, who was then at the University of Tokyo, produced a kind of virtual transparency called retroreflective projection technology, or optical camouflage. It came about in response to a problem found in virtual-reality applications that combine real environments with virtual or computer-generated ones. In such cases, the virtual image is projected from the viewpoint of the observer onto a real scene. The difficulty comes if a projected image, such as distant mountains, encounters a real object, say a wall. The mountains should appear to loom up behind the wall, but the projected image instead falls on the front of the wall, thus disrupting one of the many depth cues that make a scene look authentic.
In considering this depth-cue problem, Tachi’s group came up with a way to make a real, solid object appear transparent so that what lies behind it can apparently be seen through the object. The first step is to make the real object reflect incoming light back out towards the source, or “retroreflect”. This is what makes cats’ eyes glow eerily in car headlights, and it can be achieved by applying paint or material containing small glass beads. Then a camera records what lies behind the object and that image is projected off a partially transmitting mirror onto the object. A viewer looking through the mirror, if properly placed, sees the real object with the image of what lies behind it superimposed on top.
The result is a striking illusion of transparency, as shown in videos from Tachi’s group. Busy street scenes seem to be visible right through models wearing retroreflective jackets and cloaks. News coverage at the time hailed Tachi as the inventor of true invisibility, but the illusion is imperfect, since the background image is slightly dimmed and the garment’s outline is visible. More seriously, the optical equipment must be set up in advance and the observer fixed in the correct location, looking through the mirror, otherwise the illusion fails. Still, a 2007 report for the Canadian military notes that optical camouflage could hide a tank, say, by making it effectively transparent, though that would work only for a parked tank when viewed from a particular location.
Tachi’s group, now at Keio University, has also developed the “transparent cockpit” to give drivers a full view of a vehicle’s surroundings. External cameras on the vehicle send images to a computer. The driver wears a projector that sends the images, altered as if seen from the driver’s position, onto retroreflective coatings on the car’s interior surfaces. An arresting demonstration from Tachi’s group shows adjacent traffic apparently seen right through an automobile’s dashboard and door. Better yet, the inventors imagine a helicopter pilot gazing through a transparent cockpit as if soaring along in thin air – a real-life Invisible Plane as used by comic-book superhero Wonder Woman.
To invisibility…
The most stunning form of true invisibility, however, is much more elegant: light is bent to travel around an object rather than interact with it, emulating Star Trek’s cloaking device. Like water in a stream splitting around a rock and rejoining into a smooth flow downstream, light rays divert around the object, then recombine and continue on as if they had never been disturbed.
The idea is an old one. In 1964 Victor Veselago of the Lebedev Physical Institute in Moscow theorized about electromagnetic waves in a medium with an unheard-of property: a negative refractive index. The refractive index, n, is the ratio of the speed of light in vacuum, c, to its speed in the medium. The speed in the medium is always less than c, so n is greater than 1 in all known media. But Veselago found that n would be negative if a medium’s permittivity, ε, and permeability, μ, which determine its electric and magnetic responses respectively, were both negative.
Such a medium, he predicted, would refract light in an opposite-to-usual direction, so a concave lens would focus light rays and a convex lens would diverge them; the medium would be pulled towards a light source instead of being pushed away by radiation pressure; and it would support an inverted Doppler effect, with light becoming blue-shifted rather than red-shifted as a source moves away from an observer.
The theory remained untested until 2000, when researchers at the University of California, San Diego, built a metamaterial that met Veselago’s criteria. It was made of hundreds of millimetre-sized units arranged in a repeating pattern, like atoms in a crystal. There were two types of units: copper strips, where a large complement of electrons acted to generate ε < 0; and copper C-shaped split-ring resonators, in which currents induced by the incoming light produced magnetic effects that gave μ < 0. At a microwave wavelength of 3 cm, the metamaterial yielded n = – 2.7 and displayed backwards refraction.
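Backwards refraction follows directly from Snell’s law once n is allowed to go negative. A minimal sketch, using the n = −2.7 measured for the San Diego metamaterial (the incidence angle is illustrative):

```python
import math

# Snell's law, n1*sin(theta1) = n2*sin(theta2), still applies in a
# negative-index medium, but the refracted ray emerges on the same
# side of the normal as the incident ray: the angle comes out negative.

def refraction_angle_deg(theta_in_deg, n1=1.0, n2=-2.7):
    """Refraction angle (degrees) for light entering medium n2 from n1."""
    theta1 = math.radians(theta_in_deg)
    return math.degrees(math.asin(n1 * math.sin(theta1) / n2))

angle = refraction_angle_deg(30.0)
print(f"{angle:.1f} degrees")  # the minus sign signals 'backwards' refraction
```

For an ordinary medium with n2 = +2.7 the same formula gives the familiar positive bending towards the normal; flipping the sign of n flips the refracted ray to the “wrong” side, which is exactly what the microwave experiment observed.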
This showed how to create values of n not found in nature and, in turn, how to make things invisible. In 2006 the theory of cloaking was published simultaneously by Ulf Leonhardt of the University of St Andrews and by John Pendry of Imperial College London with David Smith and David Schurig of Duke University in the US. The latter group calculated the refractive profile of a hollow spherical shell that would intercept incoming light rays, bend them into the shell and through it, then refract them back out along continuations of their original paths. The required refractive index varied within the shell, in some places taking on exotic values of less than 1 or even less than zero.
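The “bend them into the shell” step can be written down explicitly. In the transformation-optics picture (sketched here in the notation commonly used for the 2006 proposals, with R1 and R2 the inner and outer radii of the shell; this is the generic transformation, not necessarily the exact profile any one group implemented), the entire region inside radius R2 is mapped onto the shell:

```latex
r' = R_1 + r\,\frac{R_2 - R_1}{R_2}, \qquad \theta' = \theta, \qquad \phi' = \phi, \qquad 0 \le r \le R_2
```

Light that would have passed through the origin instead flows through the annulus between R1 and R2, and realizing the transformation requires ε and μ to vary with radius inside the shell, which is where the exotic refractive-index values mentioned above come from.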
How it all began The first metamaterial cloak, built in 2006 by a team led by David Smith of Duke University. (Courtesy: Duke University/David Schurig)
The same year, Smith’s group at Duke including Schurig, along with Pendry and others, used this theory to build the first metamaterial cloak. Its 10 concentric rings, 5.4–11.8 cm in diameter, contained thousands of copper split-ring resonators with dimensions that varied across the rings to give the correct spatial behaviour for n. Uncloaked, a 5 cm-wide copper cylinder strongly scattered 3.5 cm microwaves and cast a shadow; but when the object was placed within the cloak’s central opening, its scattering and shadowing were reduced to almost nothing. Images of the microwaves show that they entered the cloak, split so as to bypass its central opening, and recombined to emerge from the opposite side in the same conformation as when they entered, just as calculated.
Unsurprisingly, this success attracted vast public interest, aided by hyped headlines such as “Harry Potter invisibility cloak ‘within five years’ ”. Scientific interest ran high too, and researchers have now cited this work some 800 times. Such intense activity raises the prospect of us overcoming the considerable barriers to achieving invisibility for the visible wavelengths of 400–750 nm. One challenge is that metals such as copper are highly absorbing there, so the cloak itself would cast a shadow. Another is that the metamaterial elements need to be smaller than the wavelength, which for visible light means nanostructures rather than the millimetre-scale elements in the microwave experiments. A third issue is that split-ring resonators give μ < 0 only at specific wavelengths. This is another reason why it has been difficult to make objects invisible across the electromagnetic spectrum, including its visible portion.
One breakthrough in optical invisibility, however, is the “carpet cloak” invented by John Pendry and Jensen Li in 2008 – a metamaterial that needs only a spatially varying n, without resonator rings, to make a bump on a surface appear flat, therefore hiding an object beneath. In 2009 carpet cloaking was demonstrated with microwaves and also for near-visible infrared light of around 1500 nm, but only for microscopic objects. In late 2010 two different groups presented another approach. Almost simultaneously, they reported the achievement of invisibility for millimetre- to centimetre-sized objects over the visible range from red to blue. Remarkably, these results did not require complex metamaterials, using instead the anisotropic optical properties of the naturally occurring crystal calcite to produce a form of carpet cloaking.
…and beyond!
True invisibility science is only five years old. But now that scientists know that invisibility is possible, they are moving into top gear and military applications guarantee research funding. Apart from cloaking, and regardless of whether simpler approaches such as the use of calcite develop further, metamaterials are yielding advances unforeseen in fantasy or science fiction, such as “superlenses” with resolutions far better than ordinary lenses. These improved lenses could, for the first time, make it possible to see viruses and proteins directly with visible-light microscopy. They could also lead to more effective computers, because the number of devices that can be packed onto a computer chip is limited by the resolution of the optical system that lays it out. Other possibilities include using acoustic superlenses to improve medical ultrasound scanning, and using metamaterial-like principles to make buildings “invisible” to destructive seismic or ocean waves.
Getting real Scanning electron micrograph of the carpet cloak created in 2011 by a team led by Jingjing Zhang of DTU Fotonik, Denmark. (Courtesy: Optics Express)
There is another route as well to human invisibility – a garment that goes beyond Tachi’s retroreflective projection technology. This true invisibility cloak would collect a real-time image from sensors on its back and project that image from devices on its front. The cloak would seem transparent even as it or an observer moves. In 2002 Franco Zambonelli and Marco Mamei of the University of Modena and Reggio Emilia in Italy proposed that a cloak densely covered with light detectors and emitters 5 μm across, wirelessly interlinked at terabyte rates, could dynamically maintain a convincing illusion that it is not there. They suggested that the cloak’s movements could power the network via piezoelectric devices that convert mechanical strain into voltage, although power could even come from the wearer’s own body heat. Such a cloak with an area of 3 m2 would cost less than €500,000, they suggested, and while we do not yet have devices of the requisite size and capability, one hope lies in the small light detectors and emitters made from the nanometre-scale semiconducting units called quantum dots. These can be formed in large arrays, and one method tailor-made to produce a cloak has already been successfully tested: putting the dots in solution and spraying them onto cloth through an inkjet printer.
The goddess Athena, the Invisible Man and other mythological and fantasy characters could not have imagined an assembly line of inkjet printers spraying invisibility technology onto cloth; anyone familiar, however, with the “chameleon” camouflage suits of military science fiction – such as those worn by the 25th-century Space Marines portrayed in the Star Fist series by David Sherman and Dan Cragg – would not be surprised. And if, as seems inevitable, the price of a mass-produced invisibility cloak were eventually to drop from €500,000 to, say, €19.95, we would finally have to face the question that Glaucon posed so long ago: if questionable acts became widely undetectable, could morality survive?
This article first appeared in the July 2011 issue of Physics World. Since then a 3D invisibility device that uses lenses and works for different angles of view has been created.
The way galaxies have been classified for decades has been questioned by an international team of astronomers. After revealing that two-thirds of local elliptical galaxies are actually fast-spinning discs, the team has suggested that the Hubble “tuning fork” – the long-standing method for classifying galaxies – may need retuning.
Galaxies come in all shapes and sizes: from flat spinning discs to almost-stationary blob-like elliptical galaxies. However, accurately classifying these huge objects can sometimes be tricky due to the angle from which they are observed. When seen face-on, older disc galaxies that have lost their distinctive dust lanes and spirals can masquerade as equally featureless, but spherical, elliptical galaxies. Elliptical galaxies are thought to have very little net rotation whereas disc galaxies rotate much faster. Measuring their rotation speed can therefore help distinguish between them.
Such a test has been performed using the ATLAS3D survey, led by Michele Cappellari at the University of Oxford, UK. The survey consists of 260 non-spiral galaxies in the nearby universe. “We divided each galaxy up into a grid and took spectra for each individual section,” Cappellari told physicsworld.com. “By analysing these spectra we could measure the red-shift, or the blue-shift, of each section,” he adds. If an area shows a red-shift, it is moving away from us; if it shows a blue-shift, it is coming towards us. If one limb of a galaxy is red-shifted and the opposite limb is blue-shifted then the galaxy must be rotating, and you can measure how fast.
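The limb-to-limb shift described above translates into a rotation speed via the Doppler formula. The following is a minimal illustrative sketch, not the ATLAS3D pipeline; the H-alpha wavelengths and shifts are invented for the example.

```python
# Illustrative sketch: rotation speed of a galaxy from the Doppler shifts
# of its two limbs. Not the ATLAS3D analysis; numbers are invented.

C_KM_S = 299_792.458  # speed of light in km/s

def line_of_sight_velocity(observed_nm: float, rest_nm: float) -> float:
    """Velocity from the non-relativistic Doppler formula v = c * (dlambda/lambda).
    Positive = red-shifted (receding), negative = blue-shifted (approaching)."""
    return C_KM_S * (observed_nm - rest_nm) / rest_nm

# Hypothetical H-alpha measurements (rest wavelength 656.28 nm) at two
# opposite limbs of the same galaxy.
rest = 656.28
v_limb_a = line_of_sight_velocity(656.72, rest)  # red-shifted limb
v_limb_b = line_of_sight_velocity(656.06, rest)  # blue-shifted limb

# Half the limb-to-limb difference gives the projected rotation speed;
# the galaxy's overall recession velocity cancels out in the subtraction.
v_rot = 0.5 * (v_limb_a - v_limb_b)
print(f"projected rotation speed ~ {v_rot:.0f} km/s")
```

A real survey must also correct for the inclination of the disc to the line of sight, which is precisely why face-on "naked spirals" are so easy to misclassify from images alone.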
More rotating discs
What surprised Cappellari and his colleagues was that 66% of the galaxies previously classified as elliptical were now shown by ATLAS3D to be fast-rotating discs. “Two-thirds of these galaxies are essentially no different from spirals that have had the gas and dust removed – they are ‘naked’ spirals,” Cappellari explains. “Such a large fraction is not something one can ignore; it brings a significant change to our understanding of galaxy formation,” he continues. This has led the team to make a distinction between non-spiral galaxies: the conventional ellipticals are “slow rotators” and the naked spirals are “fast rotators”.
The result is threatening to overturn more than 80 years of conventional wisdom. Astronomers currently classify galaxies using a “tuning fork” diagram constructed by Edwin Hubble in the mid-1920s. His fork has non-spiral galaxies forming the handle, with the two different flavours of spiral galaxies – those with and those without barred centres – constituting the prongs. However, Cappellari’s result shows that fast rotators may be more closely related to spirals than previously thought. “We feel our result could re-write the way textbooks on galaxy structure are written,” he says.
The iconic Hubble tuning fork image could be replaced by the ATLAS3D “comb”. The handle of the comb is formed by non-spiral galaxies in the order of their rotation speed, from slowest to fastest. The spiral galaxies then form three teeth, which attach to the handle at the end containing their fast-rotating, but naked, cousins. “In future when classifying galaxies in projects such as Galaxy Zoo, this is the picture that needs to be kept in mind,” explains Cappellari.
Extending the survey
“It is an important result,” says Karen Masters of the University of Portsmouth, UK, who uses data from Galaxy Zoo in her research. “Currently these fast rotators would likely be classified on Galaxy Zoo as non-disc galaxies because they are smooth and featureless. This research is showing that a large fraction of these galaxies do have a spinning disc, just one that can’t be seen from the image. You have to go in and look at the dynamics,” she told physicsworld.com. “It is a beautiful data set and it will be very interesting to see what happens if the survey is extended to include a larger sample,” she adds.
Cappellari is planning just that. “I am involved in a proposal to increase our sample size by a factor of 100,” he explains. But he is keen to stress that future results should not distract from these current findings. “No matter what we find in the future, we already have a major reinterpretation of the structure of local galaxies,” he says.
The findings are published in Monthly Notices of the Royal Astronomical Society and on the arXiv preprint server.
For more than 50 years it has been known that aircraft can punch large holes or carve out canals inside clouds as they pass through them – but no-one had been able to explain exactly why this happens. Now researchers in the US have identified the cause by comparing satellite images of clouds with the results of computer modelling. They say that the phenomenon could lead to extra precipitation in the vicinity of major airports.
Humans can encourage rain or snow by “seeding” clouds – dispersing small particles into the atmosphere. These particles act as nuclei onto which droplets of liquid water inside clouds can freeze, producing ice crystals that, when large enough, fall to Earth in the form of snow or rain. Such droplets often exist in a “supercooled” liquid state at temperatures as low as –38 °C; introducing particles into clouds allows them to freeze at much higher temperatures.
In the latest work, Andrew Heymsfield of the National Center for Atmospheric Research (NCAR) in Boulder, Colorado, and colleagues identify a similar process at work when aircraft pass through clouds. In this case it is not added particles that act as the crystallization nuclei but ice particles themselves. Droplets of liquid water within the clouds are frozen owing to the cooling effect of air being expanded behind propeller tips or over the wings of jet aircraft. Indeed, the former produces as much as 30 °C of cooling and the latter up to 20 °C. These ice crystals then grow at the expense of the surrounding droplets, because the saturation vapour pressure over ice is lower than over liquid water: water evaporates from the droplets, and the vapour is deposited onto the crystals, where it freezes and makes them grow. The crystals then fall to Earth, creating a hole in the cloud.
Upwards and outwards
However, Heymsfield and his group believe that an additional mechanism is needed to extend this process throughout a cloud and generate a hole much larger than the aircraft itself. Their hypothesis is that the latent heat liberated when droplets freeze warms the air surrounding the ice crystals, so that the crystallized region expands upwards and, in doing so, freezes the water droplets immediately above it. This process continues upwards until the crystallized region reaches the top of the cloud, at which point the cloudless air above forces the expanding crystallization sideways. When the crystals then fall to Earth as snow they leave a hole in the cloud that gets larger as the lateral expansion progresses.
To put the hypothesis to the test, the team used satellite images of a layer of cloud over Texas. The images, collected at 15-minute intervals over a period of four hours in January 2007, showed the growth and then disappearance of a number of holes and canals in the cloud. Canals are generated by aircraft passing through the cloud with a more horizontal trajectory.
The researchers then modelled the evolution of the cloud cover by inputting thermodynamic and wind data collected at Fort Worth in Texas into an appropriately tuned version of the Weather Research and Forecasting model, which has been developed by NCAR, the National Oceanic and Atmospheric Administration and other American weather-related organizations. When they increased the concentration of ice crystals within the modelled clouds in line with the cooling effects expected from a passing aircraft, they found that the resulting hole more than doubled in diameter, from 2 km to 4.4 km, between the 30th and the 90th minute of the simulation. This expansion was a good match to that seen in the satellite images and is therefore convincing evidence that the latent-heat hypothesis is correct, argues Heymsfield’s group.
Altering local weather
The group says that this phenomenon is unlikely to affect the global climate, but that it could alter the weather around northerly airports. The researchers used lidar and radar data to estimate the abundance of supercooled clouds within a 100 km radius of five major mid-latitude airports and two high-latitude ones. They calculated that an average propeller aircraft will seed clouds on about one of every 20 flights that it makes into or out of these airports, on the basis that they generate ice crystals at air temperatures below –10 °C, and that jet aircraft will do likewise once in every 40 flights, given that they cause crystallization only below –20 °C.
As such, says Heymsfield, increasing air traffic may mean that northerly airports will have to de-ice aircraft more often. He also says that weather data collected at airports in the Arctic and Antarctic and used by climate modellers may have to be corrected to account for the influence of air traffic.
Daniel Rosenfeld, an atmospheric scientist at the Hebrew University of Jerusalem in Israel, thinks that the latest research is important because, he says, the cloud-physics community had previously been “mystified” as to how aircraft could be creating “hole-punch” clouds. But he believes that the associated increase in snow or rainfall around airports is likely to be very small, particularly since most of the precipitation will evaporate before it reaches the Earth’s surface.
However, Bernhard Mayer of the Ludwig-Maximilians University in Munich believes that the effect could be significant, even if, as he points out, hole-punch clouds are rarely seen. “It could be that visible holes are just the ‘tip of the iceberg’,” he says, “and that aircraft affect clouds more often than one would infer from visual observations.”
July’s edition of Physics World will be celebrating all things relating to the science of invisibility. IOP members will be able to view this special issue tomorrow from MyIOP.org.
In one of the features, physicist and Hollywood adviser Sidney Perkowitz reflects on how invisible people and objects have captured the popular imagination for millennia. But we want to know your opinion on this topic. Which of the following science-fiction stories has the best use of invisibility as a plot device?
The Invisible Man
Star Trek
Predator
Harry Potter
Lord of the Rings
The Ring of the Nibelung
Heroes
Hollow Man
Go to the Physics World Facebook page to vote for one. And feel free to add a comment if your favourite book or film is not included.
Last week’s poll addressed the topic of particle physics. We asked our Facebook followers: If the LHC or the Tevatron fail to find the Higgs, should the world invest in a new machine to continue the search?
It seems that the majority of respondents are keen on big physics and the pursuit of fundamental answers because 78% voted yes.
We also had a number of comments from our fans. And interestingly, the majority of feedback came from our fans who voted no. Bengt Månsson, who lives in Partille, Sweden, for instance, explains why he would not favour another expensive collider. “If LHC fails then it is rather probable that there is something wrong/unfinished with the theory.”
Marc Merlin, a fan based in Georgia in the US, did not vote either way. He did make some interesting points, however, that go right to the heart of the situation in the US where the Tevatron collider at Fermilab will be closed at the end of September. “This is a really tough question, and it can only be reasonably discussed in the context of budgets and competing science objectives,” he said. “The physics goals would be alluring, but what would pursuing a successor to the LHC mean in terms of other important research being deferred or even cancelled?”
We’re looking forward to some lively debates about your opinions relating to this week’s poll.
If there’s one issue dominating Australian politics right now, it’s the proposed tax on emissions of greenhouse gases.
It seems a genuine attempt to encourage Australia – probably the world’s highest per capita emitter of carbon dioxide – to slow down or halt its growth in emissions.
Unfortunately, the climate-change debate in Australia is lagging well behind that in the rest of the world, with the media giving way too much attention to those “climate sceptics” who remain unconvinced that rising levels of greenhouse gases in the atmosphere are changing the Earth’s climate.
We’re not talking about providing airspace to the relatively small band of genuine scientists who are questioning particular aspects of the scientific evidence for climate change, based on a thorough knowledge of the relevant research.
Instead, much of that airtime has gone to Christopher Monckton – the man who, apart from claiming that global warming stopped in 2001, likened the Australian federal government’s chief climate-change adviser Ross Garnaut of the University of Melbourne to a Nazi for his views on global warming, a below-the-belt accusation for which he was forced to apologize earlier this week.
Yesterday, however, as I was nearing the end of my week-long fact-finding tour of Australian science, who else should turn up in Perth but Monckton himself.
He was invited to deliver the Lang Hancock Lecture at Notre Dame University in Fremantle, just south of Perth, on Thursday night. Unfortunately, the university press office declined a request to attend the event that was put in by one of the other journalists on my tour.
Monckton’s visit had already caused a fair bit of noise, including a formal complaint from more than 50 Australian scientists, who called for the lecture to be cancelled. Despite the protests, the lecture went ahead as planned, as did a separate talk at the Association of Mining and Exploration Companies’ annual convention entitled “Math Lessons for Climate-Crazed Lawmakers”.
To me there’s no point calling for Monckton’s views to be stifled, which only adds to his martyr status and makes it appear that climate scientists have something to hide and are too scared to see the topic out in the open.
What’s needed instead is a careful unpicking of his main points, such as those offered here.
That was certainly the view taken a few years ago in the UK by the likes of former science adviser David King. It’s the path that Australia needs to go down too.
But the controversy has not reached the end of its course. Monckton is also due to speak on 4 July in the chemistry department at the University of Western Australia. However, UWA president Alan Robson, who I met for lunch today, has insisted that the talk was not endorsed by the university but that it had been organized by a local community group that merely chose to use the department as a venue.
Earlier this month a lot of column inches were devoted to the news that the Sun continues to behave in a peculiar manner – and that solar activity could be about to enter a period of extended calm. The story emerged after three groups of researchers presented independent studies at the annual meeting of the Solar Physics Division of the American Astronomical Society, which appear to support this theory. But are the new findings really that clear-cut and what implications do they have for the climate here on Earth? Physicsworld.com addresses some of the issues.
Why the recent interest in the Sun’s activities?
Solar physicists agree that the Sun has been acting strangely of late. The strangeness relates to apparent abnormalities in the solar cycle, an approximately 11-year period during which the Sun’s magnetic activity oscillates from low strength to high strength, then back again. When the Sun’s magnetic activity is low, during a solar minimum, its surface remains relatively quiet, which leads to fewer sunspots. Then, as magnetic activity begins to increase, the surface becomes more dynamic and the sunspot numbers begin to increase in the lead-up to a solar maximum.
But following the last solar minimum in 2006, solar physicists were surprised to observe that sunspot numbers were unusually slow to pick up. This led some to suggest that the next solar maximum, due in 2013, could be late and weaker than usual. Some see this as a sign that solar magnetic activity is slowing down and the Sun may be about to head into a prolonged period of magnetic weakness. Some have speculated that a weakened Sun could offset some of the effects of man-made global warming, or even counteract it entirely.
What was presented at the recent AAS meeting in New Mexico?
In one paper, Frank Hill of the National Solar Observatory (NSO) and his colleagues argue that because a specific flow of plasma beneath the surface of the Sun has failed to appear during the present solar cycle, the next cycle could be delayed. Hill and his colleagues identified the flow – known as “torsional oscillation” – using data from the Global Oscillation Network Group (GONG). They believe that the migration of this flow from mid-latitudes to the equator is a precursor to new sunspot formation in each cycle. Because the flow has yet to appear during the present cycle, the researchers argue that the next cycle could be postponed to 2021 or 2022, or it may not happen at all.
In a second paper, Richard Altrock of the US Air Force Research Laboratory describes how a process known as the “rush to the poles” appears to be slowing down. This phenomenon describes how older magnetic activity is pushed to higher latitudes during new cycles as fresh magnetic activity emerges at about 70 degrees latitude. Altrock has observed, using data from NSO’s 40-cm coronagraphic telescope, that this rush has been more like a crawl during the present cycle. For this reason, he believes that we’ll see a very weak solar maximum in 2013, and if the rush to the poles fails to complete then it is not clear how the Sun will respond.
In a final paper, Matt Penn and William Livingston of the National Solar Observatory in Tucson look more specifically at the nature of sunspots during the two most recent cycles. The magnetic field associated with sunspots is typically 2500–3500 gauss, but Penn and Livingston believe that the field strength has been falling of late. Using over 13 years of data collected at the McMath-Pierce Telescope at Kitt Peak in Arizona, the researchers found that the average field strength dropped by roughly 50 gauss per year during the previous cycle and the trend has continued into the present one.
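Penn and Livingston’s linear trend invites a back-of-the-envelope extrapolation. The following is a minimal sketch under stated assumptions – a starting average field of 3000 gauss and a threshold of roughly 1500 gauss below which dark sunspots cannot form are assumptions for illustration, not figures from the studies above.

```python
# Back-of-the-envelope extrapolation of the reported ~50 G/yr decline.
# Assumptions (not from the article): a 3000 G starting average field,
# a constant linear decline, and a ~1500 G threshold for sunspot formation.

START_FIELD_G = 3000.0   # assumed average sunspot field strength, gauss
DECLINE_G_PER_YR = 50.0  # reported average rate of decline, gauss/year
THRESHOLD_G = 1500.0     # assumed minimum field for a visible sunspot

def field_after(years: float) -> float:
    """Average sunspot field strength after the given number of years, in gauss."""
    return START_FIELD_G - DECLINE_G_PER_YR * years

# Years until the average field crosses the assumed visibility threshold.
years_to_threshold = (START_FIELD_G - THRESHOLD_G) / DECLINE_G_PER_YR
print(f"threshold reached after ~{years_to_threshold:.0f} years")
```

If the decline really were this steady – a big “if”, since it could level off or reverse with the next cycle – the average field would cross the assumed threshold after about 30 years, which is what makes the trend worth watching.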
Has the Sun gone through quiet spells before?
Scientists have known about the solar cycle since the mid-19th century and they have been able to reconstruct solar cycles back to the beginning of the 17th century based on historic observations of sunspot numbers. (Some researchers have even attempted to catalogue earlier solar cycles based on indirect observations of sunspots.) The first thing to say is that although solar activity has consistently oscillated over an approximately 11-year period, the timings and characteristics of each cycle are far from exact and new cycles have been late on arrival in the past.
Solar physicists do agree, however, that there was a 70-year stretch beginning in 1645 when the Sun remained in an extended period of calm referred to as the Maunder minimum. This period coincided with the “Little Ice Age”, during which parts of the world, including Europe and North America, experienced colder winters and increased glaciation. There was another shorter minimum from about 1790 to 1830, known as the Dalton minimum.
So could we be heading for another Little Ice Age?
There are many uncertainties surrounding this question. Firstly, as explained in the previous answer, it is far from clear whether the Sun is headed for another period of calm. Recent research in the UK predicts an 8% chance that we will return to Maunder minimum conditions over the next 40 years, based on the Sun’s behaviour over the past 9000 years.
Secondly, there are still debates over the details of the Little Ice Age and the role played by the Maunder minimum. In Europe, there were considerably more cold winters in this interval, but they were not unrelentingly cold as they were in an ice age. Also, the Earth’s climate is evidently a highly complicated system, involving interconnected feedback systems, so it is difficult to disentangle causes and effects. For instance, several recent studies have suggested that solar-induced changes to the jet stream in the northern hemisphere may cause colder winters in Europe but this would be offset by milder winters in Greenland.
Finally, even if the Sun were to head into a quiet period, others argue that the reduction in solar irradiance on Earth would still be small compared with the heating caused by man-made global warming. Mike Lockwood, a researcher at the University of Reading, estimates that the change in climate radiative forcing since the Maunder minimum is about one tenth of the change caused by man-made trace greenhouse gases.