NASA has successfully launched a mission to measure carbon-dioxide (CO2) levels in the Earth’s atmosphere in unprecedented detail. The $465m Orbiting Carbon Observatory (OCO-2) was launched today at 09:56 GMT on a Delta 2 rocket from Vandenberg Air Force Base in California. OCO-2 will now be put in an orbit around the Earth at an altitude of 705 km, where its instruments will be calibrated before being put into full use.
OCO-2 is a reincarnation of the $270m Orbiting Carbon Observatory (OCO), which crashed in the Pacific Ocean near Antarctica on 24 February 2009 shortly after take-off, following a rocket malfunction. In late 2009 the US government decided to build an identical mission, which was scheduled for launch in February 2013.
However, when the $424m Glory satellite, which would have studied how the Sun and aerosols in our atmosphere affect the Earth’s climate, also failed in a similar manner to OCO after launch on 4 March 2011, NASA officials delayed the launch of OCO-2. NASA also decided to launch OCO-2 using a bigger Delta 2 rocket – rather than the Taurus rocket that was used for OCO and Glory’s launch – which added to the cost of launching the new mission.
“The launch of OCO-2 will not only allow many of us who worked on the original OCO mission to complete unfinished business, but to take the next step in an important journey to understanding our home planet,” OCO-2 project manager Ralph Basilio from NASA’s Jet Propulsion Laboratory in California told physicsworld.com.
Joining the A-train
OCO-2, which weighs 454 kg, is NASA’s first spacecraft that is dedicated to making space-based observations of atmospheric carbon dioxide. OCO-2 carries a single instrument that it will use to produce “concentration maps” of carbon sources and sinks throughout the world. As sunlight is reflected from the Earth’s surface, gases such as CO2 and oxygen absorb this light at specific wavelengths. OCO-2 contains three spectrometers tuned to detect changes in the intensity of this absorption.
“OCO-2 will deliver on the promises made on the original OCO mission – to obtain space-based measurements of carbon dioxide with the precision, resolution and coverage to improve our understanding of the carbon cycle and climate-change process,” says Basilio.
OCO-2 now joins the “A-train”, a set of six Earth-observing satellites that are already in orbit. These include the CALIPSO and Cloudsat satellites looking at the levels of aerosols in the Earth’s atmosphere and monitoring cloud formation, which were both launched in April 2006.
Deborah Jin in her lab at JILA in Boulder, Colorado. (Courtesy: JILA)
The US physicist Deborah Jin has been awarded this year’s Isaac Newton Medal by the Institute of Physics for her ground-breaking work on ultracold atomic gases. Based at the JILA laboratory in Boulder, Colorado, Jin is honoured by the Institute – which publishes Physics World – for “pioneering the field of quantum-degenerate Fermi gases”. The Newton medal – the Institute’s most prestigious prize – has been awarded annually since 2008.
In 1999 Jin and her then PhD student Brian DeMarco were the first researchers to cool a gas of fermionic atoms so low that the effects of quantum degeneracy could be observed. This phenomenon underpins the properties of electrons in solid materials, and the ability to create and control ultracold “Fermi gases” has since provided important insights into superconductivity and other electronic effects in materials. Working with Cindy Regal and Markus Greiner at JILA, Jin later created the first fermionic condensate in 2003, by cooling a gas of potassium atoms to nanokelvin temperatures.
Outstanding, clever and creative
“Jin is an outstanding, clever, creative scientist,” says Ed Hinds of Imperial College London, who also works with ultracold atoms. “Her incredibly complex experiments have significantly advanced our understanding of the behaviour of electrons in materials.” Jin, 45, becomes the first woman to win the international award, which is given for “outstanding contributions to physics”. She joins John Pendry, Martin Rees, Leo Kadanoff, Edward Witten, Alan Guth and Anton Zeilinger as the seventh winner of the £1000 prize.
Jin originally studied physics at Princeton University before doing experimental work on superconductors at the University of Chicago for her PhD. She joined JILA in 1995, where she has been a fellow since 2005. Jin, who has previously won a prestigious MacArthur Fellowship “genius grant”, wrote about her work on ultracold Fermi gases in an article in Physics World in April 2002 entitled “A Fermi gas of atoms”, which Institute members can access via MyIOP.
Gold awards
The Institute has also announced the winners of its 2014 Gold Medals. This year’s Dirac Medal goes to Tim Palmer, of the University of Oxford, for his development of weather and climate prediction systems. Giles Davies and Edmund Linfield, of the University of Leeds, share the Faraday Medal for their contributions to the physics of far-infrared and terahertz radiation, while the Glazebrook Medal goes to Gerhard Materlik, of University College London and the Diamond Light Source, for establishing Diamond and his work in X-ray diffraction physics. Finally, Michael Payne, of the University of Cambridge, has bagged the Swan Medal for “the development of computational techniques that have revolutionized materials design and facilitated the industrial application of quantum mechanical simulations”.
A full list of this year’s Institute of Physics award winners is available online.
The Bard’s world: Uncovering the interesting state of science during the time of Shakespeare. (Courtesy: iStock/GeorgiosArt)
Star-crossed science
The poet, playwright and actor William Shakespeare was a keen observer of human nature, writing about love, war, politics and family dramas with nearly unparalleled insight. But at first glance, it seems that he was comparatively uninterested in science. Indeed, most accounts of the famed Bard suggest that he was uninspired by, or even ignorant of, the science that was flourishing at the peak of his career. In his book The Science of Shakespeare: a New Look at the Playwright’s Universe, author Dan Falk takes the opposite view, suggesting that Shakespeare may have been far more influenced by the discoveries taking place around him than other scholars have acknowledged.
Certainly, the playwright lived in scientifically interesting times. Born in the same year as Galileo, 1564, he came of age in a world that was starting to pay attention to Copernicus’s paradigm-shifting book On the Revolutions of the Heavenly Spheres, printed in 1543. Amazingly, within his lifetime he could have seen two supernovae with his naked eye; there have been no similarly bright ones since. But as Falk points out early in the book, although Shakespeare was a prolific writer, little has survived in the form of personal diaries, letters or accounts to help us pin down what he believed in or found interesting – leaving us to infer his personal views from his fictional works as best we can.
With that in mind, Falk examines the seemingly “scientific” references in Shakespeare’s writings, looking for clues in, for example, Helena’s speech on the retrograde motion of Mars in All’s Well That Ends Well and Antony’s declaration that he would need a new heaven and a new Earth to measure his endless love for Cleopatra. Shakespeare fans will find such examples entertaining, and the book gives readers a clearer idea of what was considered scientific “fact” in Shakespeare’s day, when such ideas were new and game-changing. Falk’s conclusion is that Shakespeare’s works and the beginnings of science as we know it are irrefutably, if not obviously, intertwined. If historical rhetoric does not appeal to you, you may find parts of the book tedious, but it should resonate with readers who have an interest in the history of science.
2014 Thomas Dunne Books £16.65hb 364pp
The way the world ends
Of the many ways in which our current civilization could end, a super-pandemic might be the best we can hope for. Unlike a nuclear war, an asteroid strike or catastrophic climate change – all of which would devastate the environment as well as the human population – a virulent pandemic would leave the world’s resources and even much of our civilization’s infrastructure more or less intact. The small number of survivors would thus find themselves in a sort of post-apocalyptic Eden, with plenty of material around to help them build a new civilization out of the ruins of the old. But would they know how to do it? Lewis Dartnell doubts it, and to remedy this, the University of Leicester astrobiologist has written what amounts to a guidebook for the reconstruction. The Knowledge: How to Rebuild Our World From Scratch is essentially a collection of interesting bits of information about agriculture, chemistry, materials physics, medicine and engineering, cleverly packaged into a survival manual. It’s an effective gimmick: primed with thoughts of post-pandemic rebuilding, one pays rather more attention to explanations of, for example, wood pyrolysis and water purification than might otherwise be the case. The structure of The Knowledge does eventually get rather repetitive (“x is useful for y; you can make it by doing z”), so it is more a book for dipping into than reading straight through. But at its best, The Knowledge will give you a new appreciation for the building blocks of modern civilization – coupled with a better understanding of just how fragile they are.
2014 Bodley Head/Penguin Press £20/$27.95hb 352pp
Powers of time
In their 1968 short film Powers of Ten, Ray and Charles Eames took viewers on a journey through space, travelling to the farthest reaches of the universe and focusing in on the tiniest of particles within the human body. The film, which was based on a book by the Dutch educator and pacifist Kees Boeke, is justly famous for its artistic vision and for the way that it illustrates the sheer scale of inner and outer space. But what if the Eameses had tried to illustrate the scale of time, rather than space? In their book Time in Powers of Ten, authors Gerard ’t Hooft and Stefan Vandoren set out to do just that. After discussing processes that happen on familiar timescales such as seconds, minutes and years, they progress through ever-longer chunks of time, in ever-increasing powers of 10, until they reach 10⁹⁰ s – far longer than the universe has existed or is expected to exist in the future. At this point they loop back around to the smallest timescales, beginning with the Planck time of 5.44 × 10⁻⁴⁴ s and moving up through the half-lives of fundamental particles (about 10⁻²⁵ s) into the physically rich timescales of the femto-, pico- and nanosecond. This book lacks the design finesse of the Eameses, but the text is detailed and scientifically strong (as you would expect; ’t Hooft shared the 1999 Nobel Prize for Physics) and the English translation by ’t Hooft’s daughter, Saskia Eisberg-’t Hooft, is clear and natural.
A new method of separating nuclear isotopes that exploits the slight differences in their electronic energy levels has been developed by physicists in the US. The energy-efficient separator was used to create isotopically pure lithium-7, which is used in some nuclear reactors. The team is now developing the technology for a variety of isotopes used in science, engineering and medicine.
The only general method for separating isotopes is the calutron, which was invented during the Second World War to enrich uranium for the atomic bomb. A calutron is essentially a large mass spectrometer: ions are accelerated through a high voltage and then deflected by a magnetic field. Lighter isotopes of the same atom are deflected fractionally more than heavier isotopes, which allows them to be separated. However, the devices use an enormous amount of energy – up to a terajoule to produce a single gram of a pure isotope – making the process very expensive.
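To see roughly how a mass-dependent magnetic deflection sorts isotopes, here is a minimal back-of-the-envelope sketch in Python. The acceleration voltage and field strength are illustrative assumptions rather than figures from the article; for singly charged ions accelerated through the same voltage, the bend radius scales as the square root of the mass, so the lighter isotope follows a tighter arc.

```python
# Back-of-the-envelope sketch of calutron-style magnetic separation.
# The acceleration voltage and magnetic field below are illustrative
# assumptions, not figures from the article.
import math

E_CHARGE = 1.602e-19      # C, elementary charge
AMU = 1.661e-27           # kg, atomic mass unit

def bend_radius(mass_amu, voltage=35e3, b_field=0.34, charge=E_CHARGE):
    """Radius of the circular path of a singly charged ion accelerated
    through `voltage` (volts) and bent by `b_field` (tesla).
    qV = m*v^2/2 and r = m*v/(q*B) give r = sqrt(2*m*V/q) / B."""
    m = mass_amu * AMU
    return math.sqrt(2 * m * voltage / charge) / b_field

r7 = bend_radius(7.016)   # lithium-7
r6 = bend_radius(6.015)   # lithium-6
print(f"Li-7 path radius: {r7:.3f} m,  Li-6 path radius: {r6:.3f} m")
print(f"Gap between the two beams after a half-turn: {2 * (r7 - r6) * 100:.1f} cm")
```

With these assumed values the two lithium beams end up a few centimetres apart after half a turn, which is why a collector plate placed at the right spot can catch one isotope and not the other.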
Specific processes have since been developed to isolate certain isotopes such as uranium, which is now enriched using gas centrifuges. The US closed its last large calutron in 1998, and for many isotopes the world now relies on devices in Russia that date back to the 1950s.
Shifting isotopes
In 2012 Mark Raizen and Bruce Klappauf at the University of Texas at Austin proposed an alternative to the calutron based on optical pumping, in which laser light changes the way an atom responds to a magnetic field (see “Isotope separation with a light touch”). Different isotopes of the same atom have slightly different electron energy levels: an effect called “isotope shift”. As a result, laser light of the right wavelength will cause an electronic transition in one specific isotope but not in the others. The final state of the isotope can be chosen so that the atom is deflected in a specific direction when it travels through a magnetic field, thus allowing the isotopes to be separated.
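As a rough illustration of why this selectivity is so sharp, the probability of exciting an atom falls off as a Lorentzian in the laser detuning. The sketch below uses order-of-magnitude values for the linewidth and isotope shift of lithium; they are assumptions chosen for illustration, not numbers from the article.

```python
# Sketch: excitation probability versus laser detuning (Lorentzian line shape).
# The ~6 MHz linewidth and ~10 GHz isotope shift are order-of-magnitude
# assumptions for lithium, used only to show the scale of the selectivity.
def relative_excitation(detuning_hz, linewidth_hz=6e6):
    """Lorentzian line shape, normalized to 1 exactly on resonance."""
    return 1 / (1 + (2 * detuning_hz / linewidth_hz) ** 2)

print(relative_excitation(0.0))    # targeted isotope, on resonance: 1.0
print(relative_excitation(10e9))   # other isotope, ~10 GHz off resonance: ~1e-7
```

Because the isotope shift is thousands of times larger than the linewidth, the laser pumps essentially all of the target isotope while barely touching the other one.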
While the technique has already been demonstrated, the quantities produced were too small for industrial use. Now, the US-based team has built a machine that can produce large quantities of isotopes and has used it to isolate lithium-7, which is used by the nuclear industry. While naturally occurring lithium is mostly lithium-7, it also contains about 7.5% lithium-6. Lithium hydroxide is used as part of the anti-corrosion regimen in pressurized water nuclear reactors. There, it is exposed to neutrons, which transmute the lithium-6 into tritium – a radioactive isotope of hydrogen that would be a serious hazard if it were to escape into the environment.
The separation technique begins with vaporizing lithium and then firing a 150 mW red laser at the vapour. This puts the lithium-6 atoms into an excited state – a process called “optical pumping” – while leaving the lithium-7 untouched. The vapour is then sent through a curved chamber lined on the outer edge with permanent magnets. The lithium-7 is repelled by the magnets and deflected out of the chamber where it is collected. Meanwhile, the lithium-6 is deflected onto the magnets and prevented from leaving the chamber.
Making medical isotopes
The result is 99.97% pure lithium-7 – which is good enough for use in a pressurized water reactor. Raizen believes that the energy cost of purifying a gram of lithium-7 would be “at least 250 times less than with the calutron and possibly as much as 1000 times less”. He has now started a non-profit foundation to develop industrial versions of the machine, mainly to produce medical isotopes.
Paolo de Natale of the European Laboratory for Non-linear Spectroscopy in Florence, Italy, says it is yet another example of how optical pumping, which was originally demonstrated in 1950, has shown itself to be useful for a real industrial process. He cautions that making the technique work for atoms other than lithium will not be a trivial task: for each new type of atom, researchers must find a suitable electronic transition and a suitable laser source. However, he adds that “Considering the tremendous progress in laser sources in recent years, it’s more or less always possible now to find the right laser sources with the right conditions.”
A trio of closely orbiting supermassive black holes has been spotted in a galaxy nearly 4.2 billion light-years away. The discovery was made by an international team of astronomers, which points out that such triple systems are very rare because most galaxies have just one black hole at their centre. This system is particularly interesting to astronomers because two of the three black holes are very closely bound, forming a “tight” binary pair within the system.
Astronomers know that supermassive black holes – the largest type of black hole, which can be billions of solar masses – lie at the heart of most galaxies, including our own Milky Way. Most galaxies are believed to evolve via collisions and mergers between smaller galaxies, so some of the larger galaxies should contain multiple supermassive black holes. Having two or more such gravitational powerhouses in a galaxy would have profound effects on its structure and dynamics. As a pair of supermassive black holes orbit one another, for example, the binary system’s gravity would disrupt the gas and stars at the centre of the host galaxy. This, in turn, could lead to a burst of star formation or even the ejection of one of the black holes from the galaxy.
Heavyweight triplets
To date, only a few galaxies with two supermassive black holes have been found, and just four triple black-hole systems are currently known. In those triple systems, the closest known spacing between any two of the black holes is 2.4 kiloparsecs – about 1/10th the diameter of the main disc of the Milky Way. The new system, detected by Roger Deane of the University of Cape Town, South Africa, and colleagues, consists of two supermassive black holes separated by a mere 140 parsecs, while the third of the trio is 7 kiloparsecs from the close-knit pair. The two black holes in the pair are orbiting one another at high speed – more than 100,000 m s⁻¹.
The team made its discovery while studying six galaxies that were thought to host binary supermassive black-hole systems based on near-infrared and optical observations. The researchers found that one of the black holes was actually two, and hence that particular system is a triple. Because the astronomers did not have to search through many candidates to find the system, they believe that tightly knit binaries and indeed triple systems of black holes could be more common than previously thought.
Giant radio telescope
The team employed a technique known as very long baseline interferometry (VLBI) to study the trio. VLBI creates a giant radio telescope spanning thousands of kilometres across the globe by combining the signals from large radio antennas that can be separated by up to 10,000 km. This allows astronomers to see detail 50 times finer than that possible with the Hubble Space Telescope. The current observations were done with the European VLBI Network (EVN) and the data were correlated at the Joint Institute for VLBI in Europe (JIVE) in the Netherlands.
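The gain in resolution follows from the diffraction limit, θ ≈ λ/D. The minimal sketch below compares a 10,000 km baseline with Hubble’s 2.4 m mirror; the observing wavelengths are assumptions chosen for illustration, so the exact factor depends on the band actually used.

```python
# Rough diffraction-limit comparison: a VLBI baseline versus Hubble's mirror.
# Observing wavelengths are assumed for illustration; the precise gain in
# resolution depends on the radio band used for the observations.
import math

RAD_TO_MAS = math.degrees(1) * 3600 * 1000   # radians -> milliarcseconds

def resolution_mas(wavelength_m, aperture_m):
    """Diffraction-limited angular resolution, theta ~ lambda / D."""
    return (wavelength_m / aperture_m) * RAD_TO_MAS

vlbi = resolution_mas(0.06, 1.0e7)      # ~6 cm radio waves (assumed), 10,000 km baseline
hubble = resolution_mas(550e-9, 2.4)    # visible light (assumed), 2.4 m mirror
print(f"VLBI baseline: ~{vlbi:.1f} milliarcseconds")
print(f"Hubble:        ~{hubble:.0f} milliarcseconds ({hubble / vlbi:.0f} times coarser)")
```

With these assumed wavelengths the intercontinental baseline resolves details a few tens of times finer than Hubble, in line with the figure quoted above.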
Deane told physicsworld.com that the discovery demonstrates the power of VLBI to differentiate between multiple objects in systems that are huge distances from Earth. Before the latest discovery, a pair of supermassive black holes with the closest orbit (about 7 parsecs apart) was spotted in a galaxy some 750 million light-years from Earth. “Our system is 4.2 billion light-years away, which is much more distant than the closest known pair, demonstrating that the VLBI technique can be used to probe close black-hole pairs across a fair fraction of cosmic time,” he says.
Spinning jets
The presence of the bound pair was also revealed via a much more prominent feature – the large-scale radio jets emanating from the black holes. Such astrophysical jets are a common feature of supermassive black holes – accreted matter collecting around the event horizon of the black hole is ejected along its axis of rotation as it tries to fall into the hole. The triple system has three such jets, and Deane and colleagues found that the presence of the tight pair is imprinted onto the properties of the jets. Indeed, the orbital motion of the black holes in the pair twists the jets into a helical or corkscrew-like “S” shape. This provides astronomers with a “smoking gun” for a binary black-hole system that could be used in future searches.
Deane also points out that this extreme triple system could be creating gravitational waves – ripples in the very fabric of space–time. Future telescopes, such as the Square Kilometre Array, should be able to detect these ripples for black holes that are even closer together. “It fills me with great excitement as this is just scratching the surface of a long list of discoveries that will be made possible with the Square Kilometre Array,” Deane says.
Five-string banjo showing the position of the bridge on the round head. (CC BY-SA 3.0 / DMacks)
By Tushna Commissariat and Hamish Johnston
Folk and country music often blends the sharp twang of a banjo with the mellow and sustained tone of a guitar. While the two instruments appear to be very similar – at least at first glance – they have very different sounds. This has long puzzled some physicists, including Nobel laureate David Politzer, who may have just solved this acoustical mystery.
On a coastal plain in Østerild, north Denmark, a gargantuan white structure turns solemnly in the breeze. The latest wind turbine designed by the Danish manufacturer Vestas, the V164, is the biggest yet: at 220 m, it is well over twice the height of the Statue of Liberty. And when it was finally tested in Østerild at the beginning of 2014, it also proved to be the world’s most powerful – capable of generating 8 MW of power, enough to provide electricity for some 7500 homes.
The V164 is a symbol of the wind industry’s recent success. Over the past 14 years, the world’s installed wind capacity has risen dramatically, from about 17,000 MW in 2000 to nearly 320,000 MW last year – corresponding to about 4% of the world’s total energy demand, according to the Global Wind Energy Council. The boom has been due partly to a surge in the construction of turbines in China, but many smaller countries are also adopting the technology. The UK, for example, generated 10% of its electricity from wind power last year, and it has more offshore wind capacity than the rest of the world combined.
Despite this success, however, the industry has sometimes struggled politically – not least because of a conflict between the cost and location of wind farms. Onshore wind power is relatively cheap: it costs about $87 per megawatt-hour, midway between natural gas ($66/MWh) and coal ($100/MWh), according to a 2013 report by the Energy Information Administration (an agency of the US Department of Energy). Plans for new onshore wind farms often face strong local opposition, however, which is why politicians frequently look offshore for new opportunities. But offshore wind is far more costly: the same 2013 report rates it as more expensive than nearly any other energy technology – renewable or otherwise – at about $222/MWh. The high cost of offshore wind was highlighted in March this year when Scottish and Southern Energy, a UK gas and electric company, announced that it would cut its investment in offshore turbines in order to assure a two-year price freeze for its customers.
The industry’s continued success, therefore, depends on finding ways to cut costs. One avenue that physicists and engineers are currently exploring uses lidar – essentially a laser version of radar – to improve the siting of wind farms and reduce maintenance costs. Lidar systems can measure the pattern and strength of wind at a distance, which gives wind-energy firms a better idea of how windy a certain location will be before they make the large capital investment required for a new wind farm. Better real-time information about how atmospheric conditions are changing could also make it possible to prepare turbines for outbreaks of turbulence, reducing the risk of expensive damage. The hope is that with a little help from light, wind power will become a more cost-effective technology.
New uses for old physics
There she blows: A portable lidar unit can be moved across a wind farm to analyse wind patterns and improve turbine alignment. (Courtesy: ZephIR Lidar)
In the simplest form of lidar, laser light (typically infrared) is projected outwards and bounces back off whatever is in its path, including buildings, water, terrain or even – since the wavelength of light is so short – tiny particles in the atmosphere, such as dust and water droplets. The distance to such objects can be calculated from the time it takes for the light to return, or selected in advance by adjusting the laser beam’s focus. Meanwhile, the speed of the objects is calculated from the Doppler shift of the returning light: if the object is receding, the light will be red-shifted, while if it is approaching, the light will be blue-shifted.
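A minimal sketch of those two measurements, with made-up numbers: the range follows from the round-trip time of the light, and the line-of-sight wind speed from the Doppler shift of the backscatter. The 1.55 μm wavelength is an assumed, typical fibre-laser value rather than a figure from the article.

```python
# Minimal sketch of the two lidar measurements described above, using
# made-up numbers: range from the round-trip time of a pulse, and
# line-of-sight wind speed from the Doppler shift of the backscattered light.
C = 3.0e8             # m/s, speed of light
WAVELENGTH = 1.55e-6  # m, assumed typical fibre-laser wavelength

def range_from_round_trip(t_seconds):
    """Distance to the scatterer: the light covers the path twice."""
    return C * t_seconds / 2

def radial_speed_from_doppler(freq_shift_hz):
    """For backscatter the Doppler shift is df = 2*v/lambda, so
    v = df * lambda / 2. Positive means the air is moving towards the lidar."""
    return freq_shift_hz * WAVELENGTH / 2

print(range_from_round_trip(1.0e-6))       # a 1 microsecond round trip -> 150 m
print(radial_speed_from_doppler(12.9e6))   # a 12.9 MHz shift -> ~10 m/s wind
```

In practice the frequency shift is tiny compared with the optical frequency itself, which is why it is read out interferometrically, as described below.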
In itself, lidar technology is not new. Its development dates back to the late 1960s, when defence experts and meteorologists became interested in using lasers to monitor wind patterns for aircraft landings and measure the distance and size of clouds. Since then, lidar has been used for applications as varied as mapping terrain and crime scenes; making digital models of cities; capturing tiny features of building façades for restoration; checking the speed of motorists; navigating autonomous vehicles; and estimating the concentration of atmospheric pollutants. It has even been used to measure the distance from the Earth to the Moon.
But although lidar is considered a mature technology, many of these well-established applications require lidar units that are far too complex for non-specialists to handle. “You’d need a truck load of PhDs to operate them,” says Mike Harris, chief scientist at ZephIR Lidar, a company based in Ledbury, UK. Another problem is that lidar units tend to be fragile, and are thus unsuitable for use over windy seas or terrain.
The source of both troubles, Harris explains, is that many lidar units have a very small tolerance for optical misalignment. This is especially true of lidars that measure Doppler shift, since they commonly compare the reflected light with a reference beam using interferometry. In this technique, small changes in the wavelength of the reflected light cause it to “beat” with the reference beam, and the timing of these beats reveals the precise extent of the Doppler shift. Since the wavelengths of light are on the order of a micron, the components inside the interferometer must be aligned to within a fraction of a micron in order to give an accurate measurement.
A solution to these problems began arriving in the 1990s, with the widespread adoption of optical fibres by the telecommunications industry. Suddenly, optics became more like electronics: laser light could be routed around enclosures compactly and with high precision, using the optical fibres like wires. And, crucially, fibre-based systems could tolerate being moved and jolted without needing to be realigned. “The requirements of the wind industry are pretty stringent,” says Harris. “You’ve got to sit the piece of kit out on the hillside with no maintenance for months on end. And you’ve got to get it there, so it’s got to be light.” Today, says Harris, the performance of lidars is not in question, although further improvements in their cost and reliability might make it more practical to mount them on large numbers of turbines.
Finding sites
Past efforts to reduce the cost of wind power have often focused on turbine technology. As a result, modern wind turbines have become incredibly efficient, capturing nearly 60% of the wind’s kinetic energy – close to the theoretical maximum as calculated in 1919 by the German physicist Albert Betz. Further cost reductions will require a different approach, and lidar technology could make a contribution in several ways.
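For a rough feel for those numbers, the sketch below compares the kinetic power flowing through a rotor disc with the Betz maximum of 16/27 (about 59.3%), using the V164’s 164 m rotor diameter and an assumed 11 m/s wind speed; the wind speed and air density are illustrative assumptions, not figures from the article.

```python
# Quick sanity check of the Betz limit against the V164's rated output.
# The wind speed and air density are assumptions chosen for illustration.
import math

RHO_AIR = 1.225        # kg/m^3, air density near sea level (assumed)
BETZ_LIMIT = 16 / 27   # maximum extractable fraction of the wind's kinetic energy

def wind_power(rotor_diameter_m, wind_speed_ms, rho=RHO_AIR):
    """Kinetic power flowing through the rotor disc: 0.5 * rho * A * v^3."""
    area = math.pi * (rotor_diameter_m / 2) ** 2
    return 0.5 * rho * area * wind_speed_ms ** 3

p_wind = wind_power(164, 11)   # 164 m rotor, assumed 11 m/s wind
print(f"Power in the wind:        {p_wind / 1e6:.1f} MW")
print(f"Betz limit (16/27):       {BETZ_LIMIT * p_wind / 1e6:.1f} MW")
print(f"V164 rated output (8 MW): {8e6 / p_wind:.0%} of the power in the wind")
```

At the assumed wind speed the turbine’s 8 MW rating sits comfortably below the Betz cap, which is the point of the limit: no rotor can extract more than 16/27 of the kinetic energy flowing through its disc.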
One of these concerns the siting of wind farms – what the industry terms “resource assessment”. Evaluating a site’s suitability for a wind farm is a major expense for energy companies, requiring wind measurements at multiple points over an extended period, typically one and a half years. Only with such exhaustive measurements – supported by comparisons with historical meteorological data to check there are no anomalies – can a company persuade a bank to lend the initial capital. “It’s no good just putting a wet finger in the air and saying, ‘Yes, that’s windy enough’,” says Harris.
In the past, energy companies have performed such measurements with meteorological masts, commonly known as “met masts”. These masts support anemometers to measure wind speed directly, and must be built into the existing terrain on strong foundations. This doesn’t come cheap: a single offshore met mast, sunk into the seabed, costs around £15m. Performing siting measurements with lidar is inexpensive by comparison. ZephIR’s floating lidar units, for example, cost as little as £500,000 each and have the added benefit of being reusable, so they can be moved around a site or even taken from one site to another. The benefits of lidar for resource assessment were quantified earlier this year by the international renewable-energy consultancy Ecofys, which estimated that using lidar could lead to a total return on investment of as much as 14.5%, as opposed to 11.8% with met masts.
The other main application of lidar in the wind industry involves making measurements of atmospheric conditions up-wind from operating turbines and using these data to prepare for turbulence. If a turbine is subjected to turbulence it is unprepared for, its blades can suffer extremely high loads that, while not necessarily leading to sudden failure, can cause accumulated fatigue that costs millions to repair. “It’s a bit like driving over a pothole,” says Harris. “It knackers your suspension.” In 2008 a wind turbine near Hornslet in Denmark exploded, apparently because costly maintenance had not been carried out on its gearbox.
One idea is to build a lidar unit into the hub at the centre of a wind turbine, where it can analyse the pattern of incoming wind. If turbulence is known to be imminent, the pitch of the turbine blades can be altered so that they cause less wind resistance and, as a result, suffer a reduced load. This will lower the maintenance required for existing turbines, but Harris believes it could also enable turbines to be manufactured more cheaply, because they would not have to be “over-engineered” to prevent failures such as the one near Hornslet.
Making a contribution
Planning ahead: Mounting a lidar unit on the turbine itself allows it to detect turbulence up-wind and adjust the blades to a less-resistant pitch. (Courtesy: ZephIR Lidar)
The idea of using lidar to prepare turbines for turbulence is attractive, but it is not the only solution. Some engineers are experimenting with integrated load sensors that quickly detect the onset of turbulence and adjust the blade pitch accordingly. Furthermore, some turbines are being developed with systems that control the pitch of individual blades, meaning that the response to turbulence could vary from blade to blade. This level of control is important because eddies can be very localized, and, under these circumstances, “having a lidar to look upstream is probably not going to help you”, says David Infield, a mathematician and expert in wind power at the University of Strathclyde, UK.
Infield is also unconvinced that lidar units offer a comprehensive alternative to met masts. In some cases, he argues, “there’s going to be issues over whether a company would want to leave an expensive lidar unit on site for a year or more”. Nonetheless, he believes lidar technology has some attractions for the wind industry. “The benefit of lidar is that it can give a much more complete description of the wind’s shear profile, and, especially on shore, you can deploy [lidar units] relatively straightforwardly,” he says. “At the moment, my understanding is that [energy companies] are usually using it to complement long-term measurements from masts.”
Lidar could also have uses for wind-power research. At Strathclyde, one of the world’s leading centres for such studies, scientists have been using lidar to understand the 3D pattern of turbine wakes, and how they affect the performance of other, downwind turbines, Infield says. Meanwhile, researchers at ZephIR are trying to attach lidar units to turbine blades, to understand how they are interacting with the wind in real time. Such investigations could ultimately lead to better-performing and more cost-effective wind farms, although Harris warns that there is a lot of uncertainty in the accuracy of lidar data, depending on the frequency of scanning and other details.
But regardless of where lidar is ultimately applied in the wind industry, it seems destined to lead to improvements. The pioneering work of ZephIR has already been honoured, with the UK secretary of state for business, Vince Cable, presenting the company last year with an Innovation Award from the Institute of Physics (IOP), which publishes Physics World. “ZephIR has addressed an ongoing problem for firms trying to bring affordable and reliable wind power to the grid,” said the IOP president Sir Peter Knight. “It is highly deserving of this award.”
Harris, however, prefers to remain modest about the potential of his company’s technology. “It’s certainly got a contribution to make,” he says. “Not on its own is it going to drag the wind industry from any current perceived problems – but it’s got a contribution to make.”
The Nanometer Structure Consortium (nmC) engages more than 200 scientists from three faculties to work on interdisciplinary nanoscience. Our scientific focus is on the materials science, physics, chemistry and safety of designed functional nanostructures, and their use in a wide range of applications including sustainable energy, optoelectronics and the life sciences. We have a particularly strong position internationally in semiconductor nanowires based on groups III and V of the periodic table.
Were you one of the first nanotechnology centres to be created?
The centre was founded in 1988 by Lars Samuelson, who led it until last year, with inspiration from the University of Glasgow and IBM Research in Zürich. In comparison, the US National Nanotechnology Initiative came along about 10 years later, so it is true that we were one of the first centres.
How has the group’s focus changed?
Throughout the past 25 years, the nmC has consistently emphasized materials science and quantum physics as its core competences. In the first decade, the scientific focus was on self-organized quantum dots and 2D electron gases. Then in around 2000 a conscious decision was made to pursue group III-V semiconductor nanowires, initially with expected applications in nanophysics research but also in electronics and optoelectronics. Over the past five years we have expanded our range of applications to include sustainable energy, neuroscience and other biomedical research. We also added a large nano-safety group that is working to understand the effects of nanowires and other nanoparticles on cells, organisms and the environment to ensure that we will be able to address any safety concerns early.
How can nanotechnology deliver cleaner energy?
Nanotechnology is about controlling and applying the new phenomena that occur at the nanoscale. In the case of nanowires, there are several such phenomena that are important for energy applications. First, they can be grown in ordered arrays, which allows them to act as photonic systems for more effective light harvesting in nanowire-based solar cells. Second, the small diameter of nanowires lets one combine different materials, even with drastically different lattice constants, into radial or axial heterostructures that may be used to create cheaper multi-junction solar cells and LEDs with higher efficiency and better colour rendering. Third, quantum-confinement effects can be used to tune the energy of photons emitted by nanowire LEDs and to enhance their thermoelectric power output. Finally, the interfaces formed in nanoscale materials can be used to control heat flow in thermal management or to suppress parasitic heat flow in thermoelectrics.
How will your PhD4Energy project help bring this about?
The project is part of the European Union’s Marie Curie Innovative Doctoral Programme and will run for four years. Worth €3.2m (about $4.46m), it will place 12 PhD students at the nmC to work on nanostructures for clean-energy applications. PhD4Energy is important for us because it adds critical mass and engages all parts of our very broad consortium of research groups, helping us to maintain a joint focus. The project ranges from applied research, such as nanowire-based solar cells, LEDs and thermoelectrics, to fundamental research on novel paradigms for nanoscale energy conversion, such as artificial molecular motors. Studies on the safety of nanowires are an integral part of the project.
What impact do you expect the project to have?
Scientific goals include developing cost-competitive, nanowire-based multi-junction solar cells and high-efficiency solid-state lighting, as well as methods to increase the power output of nanoscale thermoelectrics. Each PhD student will also take up an internship with one of the eight firms that are associate partners of the project. In addition, we will collaborate closely with spin-off companies from our own environment that commercialize nanowire-based devices: for LEDs we have already created Glo, and for solar cells we have Sol Voltaics.
What are the group’s long-term plans in energy research?
Using this project as a platform, we hope to build a lasting internship programme with many more participating Swedish and international companies. Scientifically, a long-term aim is to develop and then realize entirely new device paradigms that use nanophysics for efficient energy use or for energy conversion.
Has too much hope been placed on nanotechnology to solve society’s problems?
Our group has deliberately stayed away from unrealistic science-fiction dreams of what “nano” could bring – we won’t see true nanobots, for example. But the emerging approaches for drug delivery, diagnostics and therapy achieve things that were hard to imagine 25 years ago. Also, the way that information technology has changed our society has only been possible because of nanotechnology. So, I would say that nanotechnology is living up to expectations and will probably even exceed them for some time to come. The important thing is to keep investing in fundamental science along with applied science because real breakthroughs require the combination of both.
Of all the reasons given to mobile-phone manufacturers for broken handsets, top of the list are rainwater, toilets and washing machines. But perhaps not for much longer, thanks to nanocoatings that cause water droplets to simply roll off a surface as they might a plant leaf. UK firm P2i has recently installed numerous plasma-deposition machines in Motorola production lines to coat handsets with a tough, splash-proof polymer layer. More than 60 million electronic devices have already been treated using the technique and the company is about to release a “dunkable” coating that will keep a device functioning after being submerged for up to 30 minutes.
P2i, which was spun out from the UK’s Ministry of Defence a decade ago, developed a pulsed-plasma deposition process that applies an ultrathin polymer coating onto the internal and external surfaces of mobile devices. The invisible, Teflon-like layer dramatically lowers the surface energy of a material, which makes the water bead and allows “pretty much anything” to be coated with it in a matter of seconds, according to the firm’s chief technology officer Stephen Coulson. “It can be applied to any material, such as clothing, shoes and cardboard,” he says. “Such coatings also have low liquid retention, so you get less cross contamination between surfaces, and they could also be used for filtration and even to make fire-retardant surfaces.”
Non-wetting technology first garnered attention in the 1930s and 1940s, when scientists started to understand how nature does it. Studies of duck feathers, for instance, revealed the crucial role of trapped air in keeping water off, but it also became clear that the microscopic structure of a surface is vital because it controls the contact angle that a droplet makes with the surface. A droplet landing on a textured surface, such as a lotus leaf, has a contact angle of up to 170°, making it almost spherical and classifying the leaf as “superhydrophobic”.
Numerous sprays and products have been engineered to mimic the lotus leaf’s behaviour, but our ability to characterize and fabricate structures at the micro- and nanoscale has led to an explosion in this subject over the last 20 years, says Kripa Varanasi of the Massachusetts Institute of Technology (MIT) in the US. Texture amplifies the intrinsic wetting properties of a material, allowing researchers to design even more extreme hydrophobic coatings, including those engineered to deal not just with static droplets, but impacting ones.
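One standard way to quantify that amplification is the Cassie–Baxter relation, in which air trapped beneath the drop means only a fraction of its footprint actually touches solid. The sketch below is illustrative only: the intrinsic contact angle and the solid fractions are assumed values, not data from the experiments described here.

```python
# Minimal sketch of the Cassie-Baxter relation, which captures how a textured
# surface (with air trapped under the drop) amplifies the intrinsic contact
# angle of the material. The 110-degree flat-surface angle and the solid
# fractions are illustrative assumptions.
import math

def cassie_baxter_angle(intrinsic_deg, solid_fraction):
    """Apparent contact angle on a composite solid/air surface:
    cos(theta*) = f * (cos(theta) + 1) - 1, where f is the fraction of the
    drop's footprint in contact with solid."""
    cos_star = solid_fraction * (math.cos(math.radians(intrinsic_deg)) + 1) - 1
    return math.degrees(math.acos(cos_star))

for f in (0.5, 0.1, 0.05):
    print(f"solid fraction {f:4.2f}: apparent contact angle "
          f"{cassie_baxter_angle(110, f):5.1f} degrees")
```

Shrinking the solid fraction pushes the apparent angle from about 132° up past 160°, which is how texture turns a merely water-repellent material into a superhydrophobic one.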
Last year, Varanasi and co-workers set a new record for superhydrophobicity by structuring materials including silicon, copper and aluminium with ridges that make droplets rebound more easily when they strike the surface. The patterns, which are similar to those found on butterfly wings and nasturtium leaves, minimize the contact time between the drop and the surface, and make the material 40% more hydrophobic than previously thought possible (Nature 503 385). The principle could have important industrial applications in enhanced waterproofing, says Varanasi, for example to reduce the formation of ice on power lines and aero-engine turbines.
Tough challenge
The main hurdle in getting such coatings widely adopted is to ensure they are durable. The plasma technology used by P2i to coat mobile devices bonds the coating covalently to a material surface, which is hardy enough to survive the rigours of everyday use. But hydrophobic coatings that can withstand the much harsher environment of a steam turbine could deliver massive energy savings because up to 20% of losses in such machines come from tiny droplets settling on the blades and forming a thin film, explains Varanasi. His group is currently investigating the use of special ceramics and other covalently bonded coatings for commercialization in the energy industry, and is also targeting clathrate-proof surfaces for the oil and gas sector.
In 2012 the MIT group founded LiquiGlide to commercialize coatings that allow 100% dispensing from containers by replacing the air pockets between structured surfaces with a lubricant. The technology, which is due to hit the market next year, generates a very thin van der Waals film between a product and the substrate, allowing consumers to extract every drop of ketchup or toothpaste from a container. “The beauty with this is that you don’t have to rely on polymers, you just need a lubricant that doesn’t dissolve in your product,” Varanasi told Physics World.
According to Coulson, who was a PhD student at Durham University in the UK when he invented P2i’s polymer-coating technology to provide soldiers with protective clothing, we are on the brink of a smart-coating revolution. Today, P2i has 62 patent families and is focusing on making electronic boards not only hydrophobic but also electrically insulating, as well as developing protective filters that repel water while allowing air to flow. “There are lots of liquid-based solutions out there, but most tend to shrink-wrap a product rather than bind to it,” says Coulson. “The key is how you apply coatings and make the process cost-effective.”
The unique electronic, optical, chemical and mechanical properties of 2D materials are creating a flurry of interest in laboratories around the world. Made up of individual atomic planes weakly held together by van der Waals forces, these apparently simple systems behave very differently to their 3D counterparts and are therefore seen as a promising route to new electronic and other devices. The most widely studied 2D crystal is graphene: a planar sheet of carbon atoms arranged in a honeycomb lattice that is thinner and stronger than any other known material.
Since it was first isolated in 2004, graphene has continued to surprise. Some researchers believe that it might even become as important as silicon for the electronics industry. This is because electrons whizz through its 2D lattice at extremely high speeds, behaving like “Dirac” particles with no rest mass, which leads to extremely high conductivity. Graphene also shows great promise for photonics applications because it has an ideal internal quantum efficiency: almost every photon absorbed generates an electron–hole pair that could, in principle, be converted into electric current. And thanks to its Dirac electrons, graphene can also absorb light of any colour and has an extremely fast response, which could lead to much quicker optoelectronics devices for telecommunications.
However, graphene’s extreme conductivity is also a problem because the material remains conducting even when the power is switched off – wasting energy and preventing graphene components from being packed into computer chips as silicon components are today. There are other reasons why all is not plain sailing with this 2D wonder material. Although graphene is a semi-metal or “zero-gap” semiconductor, it is unlike familiar semiconductors such as silicon because it does not have an energy gap between its valence and conduction bands. Such a band gap allows a semiconductor to switch the flow of electrons on and off, which is the principle by which transistors operate. Researchers have proposed various schemes to overcome this problem, for example cutting graphene into nanoscale ribbons or chemically modifying the material to make it properly semiconducting, but such approaches damage the material and spoil its high electron mobility.
These drawbacks have turned researchers’ attention to 2D materials that naturally possess a band gap, such as transition-metal dichalcogenides (TMDCs), hexagonal boron nitride and layered oxides. These monolayer materials might even be combined with graphene to make novel hybrid heterostructures that have exceptional electronic and mechanical properties. “This new class of materials promises all: insulators, metals, semiconductors and superconductors,” says Sefaattin Tongay of Arizona State University in the US.
Semiconductor promise
TMDCs consist of a layer of transition-metal atoms sandwiched between two layers of chalcogen atoms, such as sulphur, selenium or tellurium, and they can be made using methods similar to those employed to obtain graphene. In bulk form, TMDCs are indirect band-gap semiconductors because of the strong coupling between neighbouring layers, but when scaled down to monolayers this coupling is removed and they become direct band-gap semiconductors. The material is therefore very efficient at absorbing and emitting light, and because TMDCs can be placed on a variety of substrates, they are ideal for optoelectronic devices such as LEDs and solar cells.
One much-studied TMDC, molybdenum disulphide (MoS2), shows particular promise. Others discovered in the past few years include MoSe2, NbS2, ReSe2 and WSe2 (see table below). These materials could find applications similar to those proposed for graphene or, if combined with graphene’s unique electronic and mechanical properties, could be used to make superior nanoelectronic circuits.
Earlier this year, however, Tongay and colleagues discovered a new 2D material called rhenium disulphide (ReS2). Despite officially being a member of the semiconducting layered TMDC family, the material behaves as though it is a pure monolayer. Unlike other 2D materials, though, it does not undergo an indirect-to-direct band-gap transition when scaled down to monolayers. The system therefore provides researchers with a 3D crystal in which they can study 2D phenomena without the difficulty of preparing large and high-quality monolayers.
Perhaps the next-best material to graphene in terms of mechanical and thermal properties is hexagonal boron nitride (hBN). Also known as “white graphene”, hBN is an ideal substrate for graphene because the two materials have very similar lattice constants. Unlike graphene, hBN is an insulator with a very large energy band gap, which means that monolayers of hBN integrated with graphene can be used as gate dielectrics and tunnel barriers with very few defects.
Indeed, hBN has particularly strong phonon resonances in the technologically important infrared band, which some physicists believe could be used to process information in nanodevices. “Flexible nanoelectronics could be the main application for the portfolio of 2D materials where graphene, TMDCs and hBN might be combined to make high-performance ultra-flexible transparent transistors on plastics and soft substrates,” says Deji Akinwande of the University of Texas at Austin.
Graphene derivative
Since graphene first rocked the materials world a decade ago, researchers have been exploring derivatives such as fluorographene, which is a wide-gap insulator made by fluorinating graphene. Similarly, the large band gaps in “graphane” and “graphone” (hydrogenated and semi-hydrogenated versions of graphene, respectively) could be used to make transistors with a large on–off current ratio, although researchers first need to find a way to prevent these materials from gradually losing their hydrogen atoms. Researchers are also exploring “graphynes”: 2D carbon allotropes that contain both double and triple carbon–carbon bonds, rather than just the double bonds of graphene. These materials naturally contain conducting charge carriers and could therefore be made into semiconductors without the need for external doping.
According to some researchers, one of the most promising graphene-based derivatives is graphene oxide. This material is just like ordinary graphene but is covered with molecules such as hydroxyl groups or oxygen, which remove electronic states and turn the graphene into an insulator. Sheets of graphene oxide can easily be stacked on top of each other to form extremely thin but mechanically strong membranes, which could serve as molecular sieves. In addition to applications in water filtration and desalination, such systems might also be used for hydrogen storage, polymer solar cells, and flexible colour displays and smart textiles.
It is not yet clear whether graphene will live up to its promise and “win out” over other 2D materials, nor when the 2D-materials revolution will really start to affect our lives. According to Tongay, it is likely that there will be many winners in the race and that more competitors will appear relatively soon, such as phosphorene and silicene. “Graphene kick-started the 2D materials field and will remain an integral member of the 2D family, but the other 2D materials will bring new functionalities,” he says. “The 2D-materials revolution is here and I very much hope to see these materials integrated into our daily lives in different forms, from flexible electronics and solar cells to applications that we have not even dreamed of yet.”
The most promising 2D materials
Graphene family
Graphene – Extremely high conductivity and mechanical strength but no band gap in pristine state
Graphene oxide – Promising for molecular sieves, hydrogen storage and polymer solar cells
Graphane and graphone – Large on–off current ratio and large band gap but gradually lose hydrogen