Graphene supercapacitor breaks storage record

Researchers in the US have made a graphene-based supercapacitor that can store as much energy per unit mass as nickel-metal-hydride batteries – but unlike batteries, it can be charged or discharged in just minutes or even seconds. The new device has a specific energy of 85.6 Wh/kg at room temperature and 136 Wh/kg at 80 °C. These are the highest values yet reported for “electric double-layer” supercapacitors based on carbon nanomaterials.

Supercapacitors, more accurately known as electric double-layer capacitors or electrochemical capacitors, can store much more charge than conventional capacitors. This is because charge is stored in an “electric double layer” at each electrode–electrolyte interface, where the separation between opposing charges is less than a nanometre. As a result, a large amount of electrical charge can be stored in a tiny volume.
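
For a sense of scale, the energy stored by any capacitor is E = ½CV², so the specific energy in Wh/kg follows directly from the cell-level capacitance and the voltage window. The sketch below uses illustrative round numbers – a 40 F/g cell capacitance and a 4 V ionic-liquid voltage window – which are assumptions for illustration, not figures from the paper:

# Illustrative estimate (not the paper's numbers): specific energy of a
# supercapacitor from its cell-level capacitance and voltage window,
# E = 1/2 * C * V^2, converted from J/g to Wh/kg.

def specific_energy_wh_per_kg(cell_capacitance_f_per_g, voltage_v):
    """Return specific energy in Wh/kg for a cell-level capacitance in F/g."""
    energy_j_per_g = 0.5 * cell_capacitance_f_per_g * voltage_v**2
    return energy_j_per_g * 1000 / 3600  # J/g -> J/kg -> Wh/kg

# Assumed example values: ~40 F/g at the cell level and a ~4 V window
# give roughly 90 Wh/kg, the same ballpark as the reported figures.
print(specific_energy_wh_per_kg(40, 4.0))  # ≈ 88.9 Wh/kg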

The new device was made by Bor Jang of US-based Nanotek Instruments and colleagues. It has electrodes made of graphene mixed with 5 wt% Super P (an acetylene black that acts as a conductive additive) and 10 wt% PTFE binder. A sheet of carbon just one atom thick, graphene is a very good electrical conductor as well as being extremely strong and flexible.

The researchers coat the resulting slurry onto the surface of a current collector and assemble coin-sized capacitors in a glove box. The electrodes are separated by a porous membrane (Celgard 3501) and the electrolyte is EMIMBF4, an ionic liquid.

Fast charging

The specific energy of the supercapacitor is comparable to that of nickel-metal-hydride batteries. “This new technology makes for an energy storage device that stores nearly as much energy as in a battery but which can be recharged in seconds or minutes,” Jang explained. “We believe that this is truly a breakthrough in energy technology.” The device might be used to recharge mobile phones, digital cameras and micro-EVs, he adds.

The team, which includes scientists from Angstron Materials in the US and Dalian University of Technology in China, are now working hard to further improve the energy density of the device. “Our goal is to make a supercapacitor that stores as much energy as the best lithium-ion batteries (for the same weight) but which can still be recharged in less than two minutes,” said Jang.

His team first discovered that graphene could be used as a supercapacitor electrode material in 2006. Since then, scientists around the world have made great strides in improving the specific capacitance of these electrodes, but the devices still fall short of the theoretical capacitance value of 550 F/g.

“Despite the theoretically high specific surface area of single-layer graphene (which can reach up to 2675 m²/g), a specific capacitance of 550 F/g has not been reached in a real device because the graphene sheets tend to re-stack together,” explained Jang. “We are trying to overcome this problem by developing a strategy that prevents the graphene sheets from sticking to each other face-to-face. This can be achieved if curved graphene sheets are used instead of flat ones.”
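
As a rough check, the 550 F/g ceiling follows from multiplying graphene’s theoretical surface area by a typical double-layer capacitance per unit area; the 21 µF/cm² used below is a commonly quoted textbook value, assumed here for illustration rather than taken from the paper:

# Order-of-magnitude origin of the ~550 F/g ceiling, assuming a typical
# double-layer capacitance of ~21 uF/cm^2 on a carbon surface.
area_per_gram_m2 = 2675                # theoretical surface area of single-layer graphene, m^2/g
double_layer_f_per_m2 = 21e-6 * 1e4    # 21 uF/cm^2 expressed in F/m^2
print(area_per_gram_m2 * double_layer_f_per_m2)  # ≈ 560 F/g, close to the quoted 550 F/g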

The work was reported in Nano Letters.

Galaxies pin down dark energy

A new way of measuring the geometry of the universe confirms that dark energy dominates the cosmos and bolsters the idea that this unusual form of energy is described by Einstein’s cosmological constant. The technique, developed by physicists in France, involves a relatively easy measurement of the orientation of distant pairs of galaxies.

Over the past decade or so, several kinds of observation, such as measurements of the distances of remote supernovae, have provided strong evidence that the expansion of the universe is accelerating. Cosmologists believe that this expansion is being driven by what is known as dark energy – a substance with negative pressure that opposes the pull of gravity. Unfortunately, however, they have little idea of what dark energy actually is, having been unable to measure its properties well enough to distinguish between rival hypotheses.

The new approach, devised by Christian Marinoni and Adeline Buzzi of the University of Provence in Marseille, should help narrow down the options as well as provide another means of working out the geometry of space. It involves comparing the known shape of very distant objects with the shape of those objects as revealed by astronomers’ observations. Astronomers don’t measure distances, and hence shapes, directly, but instead measure the extent to which the wavelength of radiation from a distant object has increased – or redshifted. This tells them the speed at which the object and Earth are moving apart.

Unusual geometry

Hubble’s law states that the speed at which objects within the universe move apart from one another is proportional to the distance between them, so knowing the speed of a distant object reveals how far away it is (although this is only approximately true at very great distances). But if the space between that object and the observer has an unusual geometry, or if the expansion of the universe is accelerating, then the inferred distance will not be accurate. The idea, therefore, is to vary the quantities that represent the geometry and the strength of dark energy until the inferred distances match up with expectations.
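
In practice this means computing model distances as a function of redshift for trial values of the matter and dark-energy densities. The following is a minimal flat-universe sketch with assumed parameter values (H0 = 70 km/s/Mpc and so on, none taken from the paper); it simply shows how strongly the inferred distance at a given redshift depends on the assumed dark-energy content:

# Minimal sketch: in a flat universe the comoving distance to redshift z is
#   D(z) = c * integral_0^z dz'/H(z'),  with
#   H(z) = H0 * sqrt(Omega_m (1+z)^3 + Omega_Lambda).
# Changing the assumed dark-energy content changes the inferred distances.
from math import sqrt

C_KM_S = 299792.458  # speed of light, km/s

def comoving_distance_mpc(z, h0=70.0, omega_m=0.3, omega_lambda=0.7, steps=10000):
    """Comoving distance in Mpc for a flat cosmology (simple trapezoidal sum)."""
    dz = z / steps
    integral = 0.0
    for i in range(steps + 1):
        zi = i * dz
        hz = h0 * sqrt(omega_m * (1 + zi)**3 + omega_lambda)
        weight = 0.5 if i in (0, steps) else 1.0
        integral += weight * dz / hz
    return C_KM_S * integral

# At z = 1 the distance differs by roughly 25% between a dark-energy-dominated
# universe and one containing only matter:
print(comoving_distance_mpc(1.0, omega_m=0.3, omega_lambda=0.7))  # ≈ 3300 Mpc
print(comoving_distance_mpc(1.0, omega_m=1.0, omega_lambda=0.0))  # ≈ 2500 Mpc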

This principle was first proposed by the astronomers Charles Alcock and Bohdan Paczyński in 1979 but has been difficult to carry out in practice because the redshift due to the local motions of the objects themselves tends to mask that caused by the expansion of the universe. What Marinoni and Buzzi have done is to study a system for which the local motions can be filtered out in quite a straightforward way. Rather than measuring a shape as such, they measure the orientation of pairs of galaxies several billion light-years from Earth that orbit one another in binary systems. They reason that such galaxy pairs should be randomly oriented, so a large set of these binary systems should have an even distribution of orientations. Any deviation from that even distribution would reveal the influence of spatial geometry and dark energy, once the local effects have been removed.
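
A toy version of the orientation argument – not the authors’ statistical analysis – can be simulated directly. Here a simple “stretch” factor stands in for a mis-modelled expansion history that distorts apparent line-of-sight separations; all numbers are assumed for illustration:

# Toy illustration: isotropically oriented pairs give a flat distribution of
# |cos(theta)| relative to the line of sight (mean 0.5); over-stretching the
# line-of-sight axis, as a wrong cosmology would, skews that distribution.
import random

def mean_abs_cos_theta(stretch=1.0, n=200_000):
    total = 0.0
    for _ in range(n):
        x, y, z = (random.gauss(0, 1) for _ in range(3))  # random 3D direction
        z *= stretch                                        # distort the line-of-sight axis
        total += abs(z) / (x*x + y*y + z*z) ** 0.5
    return total / n

print(mean_abs_cos_theta(1.0))   # ≈ 0.50 for an isotropic distribution
print(mean_abs_cos_theta(1.3))   # > 0.50 when line-of-sight separations are overstretched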

To test their technique against real observations, they measured the orientations of galaxy pairs using data from the DEEP2 galaxy redshift survey and then used more local data from the Sloan Digital Sky Survey to calibrate the motions of the galaxies themselves. Their analysis agreed with the standard cosmological model regarding both the geometry of the universe and the abundance of dark energy – confirming that the universe is flat, in other words that it follows the ordinary laws of Euclidean geometry, and that dark energy makes up around 70% of the energy–matter content of the universe.

Cosmological constant is best bet

They also calculated a value for the strength of dark energy that suggests this substance comes in the form of the cosmological constant – a term that Einstein added to (and then removed from) his equations of general relativity. If correct, this means that the repulsive force is constant throughout the evolution of the universe and that it is mathematically equivalent to the quantum-mechanical energy of the vacuum.

Marinoni argues that their technique represents a valuable additional approach to understanding dark energy, since, he says, it is “simple, transparent and faithful”. In particular, he says, it does not rest on any questionable physical assumptions. “If you keep the technique simple you can avoid biases,” he says. “Cosmology is a science where systematic errors are just behind the door.”

Alan Heavens of the University of Edinburgh, who wrote a commentary piece to accompany the paper, agrees that the new method is “nice and direct”. But he warns that it does contain an assumption that must be tested – that the orbital properties of local galaxy pairs are equal to those of galaxies from 7 billion years ago, when the light left the objects catalogued in the DEEP2 survey.

The research is described in Nature 468 539.

A fusion of fusion

By Matin Durrani

We’ve had fusion on our minds quite a lot here at Physics World in recent months.

First we published “Hot fusion” – a great feature by Steve Cowley, chief executive of the UK Atomic Energy Authority at the Culham Centre for Fusion Energy, in which he expresses his optimism that ITER – the huge international fusion experiment being built in France – “will achieve its goal of a burning plasma in the mid-2020s”.

While Cowley examined some of the technical challenges in building ITER, we then hooked up with Sir Christopher Llewellyn Smith who outlined ITER’s many political and financial challenges in a special video interview “ITER – a fusion facility worth building”.

Getting a huge multinational project like ITER off the ground is never easy and Sir Chris – who was chair of ITER council until last year – does a good job of making the case that ITER is a project worth pursuing, despite its price tag of €13bn (and rising). “We cannot afford not to develop fusion,” Sir Chris insists.

In a second video interview “Fusion: from here to reality”, I spoke to David Ward from the Culham Centre for Fusion Energy (CCFE) in the UK, who’s been involved in the fusion game for 25 years. He talked about some of the challenges in going from ITER to a working fusion plant, dubbed DEMO. What’s interesting is that he predicts not just one version of DEMO, but lots, with China and India potentially leading the way.

But fusion research, like all scientific research, would be nowhere without public funding and public support. In our third fusion video “A passion for fusion: nuclear research and science communication”, former CCFE researcher Melanie Windridge describes some of the challenges in communicating the excitement of fusion research to the public – and to school children in particular.

The Institute of Physics, which publishes physicsworld.com, selected Windridge as this year’s Schools Lecturer, a role that has seen her travelling the UK delivering an interactive lecture show about fusion energy to more than 13,000 school students between the ages of 14 and 16.

She talks passionately about her role as a science communicator and offers plenty of practical tips for researchers who want to communicate their work to a more general audience.

Summing up her 2010 lecture tour, Windridge believes that she is lucky with her area of expertise. “Fusion is inherently very interesting and energy is a very emotive subject, so it’s relevant to people’s lives,” she says.

Gongs away

When Andre Geim received a phone call from Sweden’s Nobel-prize committee last month, his first response was “Oh, shit”. Not that he was unhappy about sharing the Nobel Prize for Physics with his fellow graphene pioneer (and University of Manchester colleague) Konstantin Novoselov. Far from it. It was just that winning a Nobel prize is “a life-changing exercise”, as Geim told Swedish journalists shortly after the announcement was made.

But how much does winning a Nobel prize really affect a physicist’s career? In the case of Geim and Novoselov, the effect may indeed reach “oh, shit” proportions, for two reasons. One is that the Nobel is unquestionably the most prestigious prize in physics. In the words of Carlo Rubbia, who shared the prize in 1984 for discovering the W and Z bosons, winning it is “certainly not a small perturbation”. The other reason is that at the ages of 52 and 36, respectively, Geim and Novoselov probably have more career left ahead of them than most newly minted laureates; during the 2000s, the median age of physics laureates upon receiving the prize was just shy of 70, having crept upwards since the 1970s.

Of course, few physicists will ever find themselves packing their bags for Stockholm, or even come within a sniff of the prestigious Nobels. However, there are plenty of other prizes on offer, and thousands of physicists receive gongs of one kind or another every year. These range from named awards that carry significant amounts of prestige and cash to humble “best poster” certificates that come with warm congratulations and a textbook for a prize. For many, receiving these small awards will be a highlight of their careers.

Other fish in the sea

The Nobel prize is at the apex of the gong pyramid, but there is nevertheless a clutch of other international awards that can – and frequently do – claim runner-up status. Of these, the best established are the $100,000 Wolf prizes, which the privately run Wolf Foundation has awarded in most years since 1978. A well-regarded award in its own right, the Wolf Prize in Physics has gained some additional prestige in recent years thanks to its tendency to anticipate the Nobel: no fewer than 14 Wolf-prize winners have gone on to pick up Nobel medals in physics, five of them just one year later.

Other awards in the nearly Nobel camp include the Japan prize, which covers a broader range of subjects than either the Nobel or the Wolf prizes and is worth $450,000. The Japan prize counts Web pioneer Tim Berners-Lee and laser inventor Theodore Maiman among its awardees, and is sometimes called the “Nobel of the East”. However, it is now being challenged for that title by the $1m Shaw prizes, which were established in 2004 by the Hong Kong businessman and philanthropist Run Run Shaw to honour achievements in astronomy, life sciences and mathematics.

Indeed, the past decade has seen a boom in big-ticket science prizes, with the $0.5m Gruber Prize in Cosmology (first awarded in 2000), the $1m Kavli prizes (2005) and, most recently, the €0.4m BBVA Foundation awards (2009) weighing in alongside the Shaw and older honours. Unlike winners of big research grants, recipients of these awards are not required to spend the money on academic pursuits. Some, however, do so anyway. For example, Jerry Nelson, the University of California, Santa Cruz astronomer who won the Kavli Prize for Astrophysics earlier this year, told Physics World that he had “basically distributed it to noteworthy colleagues and institutions”, after he had paid for several friends and family members to attend the prize ceremony in Oslo, Norway.

Financially, the many awards granted by learned societies (including the Institute of Physics, which publishes Physics World) will never rival those with links to deep-pocketed philanthropists. What they lack in monetary value, however, they often make up for in history and prestige. The Royal Society’s Copley Medal, for example, dates back to 1731 and counts luminaries such as William Herschel, Michael Faraday and J J Thomson among its recipients. Nelson, who won the American Astronomical Society’s Dannie Heineman prize in 1995, says that such awards were important to him because they “act as a reminder that what one is doing is of some interest and importance to others”, even if they are “financially minor” in comparison to a Kavli prize.

Early birds

Most awards given in the physics community are designed to recognize past achievements, not to facilitate future ones. The exceptions are the “early career” prizes. These are awarded to scientists at the beginning of their careers, and evidence suggests that they can have a tremendous impact on the individual concerned. “Probably the most important prize I got, in terms of getting me started, was fourth prize in the US Science Talent Search, which I won when I was in high school,” says Frank Wilczek of the Massachusetts Institute of Technology, who shared the 2004 Nobel Prize for Physics for his theoretical work on the strong interaction. “It showed me more of the big world and enhanced my self-confidence.”

Many early-career awards provide more tangible assistance as well. The MacArthur Fellowships in the US, for example, come with $0.5m in research funding over five years, and are designed to support talented researchers early in their careers. Other early-career awards, such as the UK’s Royal Society University Research Fellowships, make it easier for the recipient to find a permanent position by paying part of their salary for a few years, as well as providing start-up money for new projects.

The bottom line is that with or without a financial sweetener, early-career awards can be an important way of distinguishing a researcher from his or her peers. Faced with stiff competition for funding and permanent academic positions, those with a gong or two on their CVs are frequently at an advantage. “My previous awards and prizes helped a lot in terms of getting national and international recognition, and in promoting my career towards a chaired professorship,” agrees Jürgen Eckert, who last year was handed Germany’s highest research award, the €2.5m Leibniz prize, and is now director of the Leibniz Institute for Solid State and Materials Research in Dresden.

The cloudy lining

There is, of course, a down side to winning a prize. One common complaint among winners of major awards is that finding time to do research becomes much more difficult, since they are expected to give interviews and speeches, and are frequently asked to serve on committees as well. But there are also more subtle effects, as Rubbia points out. “Having the [Nobel] prize forces you to always do things that are right,” notes the former CERN director-general. “The capability of making a mistake, which is the driving force for scientific innovation – in the sense that you will only make progress if you make mistakes – is reduced.”

A separate concern, Rubbia continues, is that having a Nobel prize “is a responsibility, in that you have to take positions and have opinions on a much wider range of topics”. This can cause difficulties, he says, because “we have to realize that we are not experts on everything. The problem for most of my colleagues – and presumably me – is a tendency to become experts in areas that aren’t part of our expertise. So you have to exercise some modesty.”

The reverse problem of being pigeonholed as an expert in just one area is also a concern for some winners. Both Novoselov and Geim say they were already trying to “escape” research on graphene even before they won their Nobel for isolating it. That will almost certainly be more difficult now, although it is not without precedent. Only John Bardeen has ever won more than one Nobel Prize for Physics, but a number of laureates have made significant impacts in other fields. The best recent example is atomic physicist Steven Chu, who shared the prize in 1997 and is currently secretary of the US Department of Energy.

A more general issue with prize-giving is sometimes known as the “Matthew effect”, after a biblical quotation from the Gospel of Matthew that runs “For to all those who have, more will be given…but from those who have nothing, even what they have will be taken away.” It is certainly true that some physicists receive an enormous number of awards. The medals and certificates won by the late Abdus Salam, for example, fill an entire wall and part of a small room in the library of the institute he founded, the International Centre for Theoretical Physics. As a passionate advocate for science in the developing world as well as a prominent theorist, Salam was a special case, but it is hard to argue with the notion that some physicists get more gongs than they deserve. And, inevitably, the law of diminishing returns applies. “Recognition for the same work that got me the Nobel prize means less to me personally, at a psychological level, since in some sense it’s icing on the cake – not that there’s anything wrong with icing,” says Wilczek. However, he adds that he is “as gratified as ever” to see other parts of his work, or his body of work as a whole, properly appreciated.

But ultimately, Wilczek says, the most important thing that winners of any award can do is to keep a sense of perspective on what prizes are really about. “All prizes are nice to get, but most of them are, or should be, corollaries of achievements,” he says. “It’s the achievements themselves that are the core sources of pride and standing in the community.” Something to bear in mind next year, perhaps, when – yet again – a phone call from Sweden fails to interrupt your work or slumber.

Bosons bossed into Bose–Einstein condensate

Many physicists believed it could not be done, but now a team in Germany has created a Bose–Einstein condensate (BEC) from photons. BECs are formed when identical bosons – particles with integer spin – are cooled until all particles are in the same quantum state. This means that a BEC comprising tens of thousands of particles behaves as a single quantum particle.

The first BEC was made in 1995 by cooling a cloud of rubidium atoms to near absolute zero and today such condensates are routinely used to study a variety of quantum phenomena. However, few physicists had contemplated making a BEC from the most common boson in the universe – the photon. This is because photons are easily created or destroyed when they interact with other matter, which makes it very difficult to cool a fixed number of photons such that they form a condensate.

But now Martin Weitz and colleagues at the University of Bonn in Germany have devised a way of isolating and cooling photons. Although they cannot capture a fixed number of photons, the number fluctuates around a mean value, allowing the ensemble to be characterized using conventional BEC theory.

Trapped between two mirrors

The team trapped its photons between two concave mirrors separated by at most 1.5 µm. Because the cavity must accommodate an integer number of half-wavelengths, this separation defines the maximum wavelength – or minimum energy – of a photon confined along the cavity axis. The cavity is filled with a dye that is held at room temperature – and, crucially, the thermal energy of the dye molecules is only about 1% of this minimum photon energy.

This large energy difference means that it is highly unlikely that additional photons will emerge from the dye, or that the dye will completely absorb a photon. Instead, the photons collide with the dye molecules, giving up or receiving small amounts of energy. These interactions cool the photons to room temperature – which is cold enough to create a photon BEC – while preserving the number of photons.
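
A quick back-of-the-envelope check of that “about 1%” figure, assuming a visible cut-off wavelength of roughly 580 nm (a typical value for the emission range of such dyes, not a number quoted in the article):

# Ratio of thermal energy to photon energy at room temperature.
h = 6.626e-34      # Planck constant, J s
c = 3.0e8          # speed of light, m/s
k_B = 1.381e-23    # Boltzmann constant, J/K

photon_energy = h * c / 580e-9          # ≈ 3.4e-19 J (≈ 2.1 eV), assumed cut-off wavelength
thermal_energy = k_B * 300              # ≈ 4.1e-21 J at room temperature
print(thermal_energy / photon_energy)   # ≈ 0.012, i.e. about 1%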

The team created the BEC by firing a laser into the cavity to fill it with photons. The laser was then kept on throughout the experiment to make up for photons that were lost at the mirrors and imperfections in the cavity. Some of the photons pass through one of the mirrors to a spectrometer, which measures the distribution of photon energies in the cavity. At low laser intensities the cavity contains a broad range of photon energies with a sharp cut-off at the cavity’s minimum energy.

Critical number of photons

When the laser intensity is increased, the number of photons in the cavity rises and the broad distribution endures until the photon number reaches about 60,000. Above this critical value, according to Weitz, the photon gas is dense enough for a BEC to form – much like a liquid drop condensing in a gas.

The team knows that the BEC has formed because a large peak in the photon energy spectrum emerges just above the cut-off energy. This peak corresponds to a large number of photons piling into the lowest energy state of the cavity. As the laser intensity is increased further, the number of photons in the BEC reaches millions.

To convince themselves that the peak is related to a BEC, rather than the cavity behaving like a laser, the researchers repeated the experiment at several different separation distances. They found that the peak always emerged at the same photon density – something that would not be seen in a laser, according to Weitz.

Small effective mass

The cavity has a planar design, which means that the photons are confined to two dimensions. As a result of the longitudinal confinement, they behave as if they are particles with an “effective mass” corresponding to the cut-off energy. This mass is still extremely small, which is why the photons form a BEC at room temperature and do not need to be cooled to microkelvin temperatures as atoms do.
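
To put a rough number on that, the effective mass is m_eff = E_cutoff/c² = h/(λ·c). Using the same assumed ~580 nm cut-off wavelength as above (an illustrative value, not one from the paper) gives a particle some ten orders of magnitude lighter than a rubidium atom:

# Rough illustration of the "small effective mass" point.
h = 6.626e-34   # Planck constant, J s
c = 3.0e8       # speed of light, m/s

m_photon_eff = h / (580e-9 * c)   # ≈ 3.8e-36 kg for the assumed cut-off
m_rb87 = 87 * 1.66e-27            # ≈ 1.4e-25 kg, a rubidium-87 atom
print(m_rb87 / m_photon_eff)      # ≈ 4e10: roughly ten orders of magnitude lighter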

Interactions between the photons are much weaker than those between atoms, and this means that photons can form a true 2D BEC. Atoms, on the other hand, can only form a 3D BEC.

According to Weitz, creating thermalized light doesn’t necessarily require a laser and such devices could be “pumped” by other light sources – including the Sun. As a result, he believes they could be used to shrink the size of solar cells by concentrating light within devices. He also believes photon BECs could be used to build sources of coherent light that don’t involve a laser.

The research is described in Nature 468 545 and, writing in the same issue of the journal, James Anglin of the Technical University of Kaiserslautern calls it “a landmark achievement”. He also points out that the experiment shows how “physics is the art of the interchangeable”, because in addition to showing the wave-like properties of atoms, BECs have now been used to show the particle-like properties of light.

Robot geckos and flying snakes

[Image: robotic gecko]


By Hamish Johnston

For millennia humans have dreamt of flight – inspired no doubt by nature’s denizens of the sky. Although people now fly routinely, even the most advanced flying machine seems clunky and amateurish compared to the elegance of a dragonfly or swallow.

Indeed, scientists and engineers are desperate to learn from nature and you can read about some of the results in a special issue of the journal Bioinspiration & Biomimetics that is devoted to flight.

The issue includes nine papers that investigate how flying snakes glide through the air, how hummingbirds hover and even how a gecko uses its tail to right itself while falling.

You can see the robotic gecko above.

The issue’s guest editors, David Lentink and Andrew Biewener, have written a nice overview entitled “Nature-inspired flight: beyond the leap”.

Neutrinos could detect secret fission reactors

Neutrino detectors hundreds of thousands of tonnes in mass, carried aboard oil tankers, could be floated offshore to check for undeclared nuclear-fission reactors. That’s the idea of physicists in France, who have proposed the Secret Neutrino Interactions Finder (SNIF) as a way of enforcing the nuclear non-proliferation treaty – although some experts doubt its feasibility.

Currently, fission reactors around the world are monitored by the United Nations’ International Atomic Energy Agency (IAEA), based in Vienna. The IAEA uses several “near-field” tools to make sure reactors are running legally, from CCTV-type cameras to metallic or fibre-optic networks that can detect when fuel is being loaded. In some cases, the agency installs thermal monitors to check that reactors are not being operated for too long, as might be required for the production of bomb-making plutonium.

Another, perhaps more fail-safe, way to monitor reactors would be to detect the nearby levels of anti-neutrinos – light particles that are emitted copiously in nuclear-fission reactions. Because the flux of anti-neutrinos arriving at a detector is proportional to a reactor’s power and falls off with the square of the distance, the anti-neutrino level at any point is an indicator of what fission reactions are taking place nearby.

Neutrino oscillations

But, as researchers discovered almost a decade ago, the science is more complicated. Neutrinos have a small, finite mass – not zero, as was previously thought – and are able to oscillate from one type to another. This means that a detector looking for one type of anti-neutrino would always detect fewer than expected, because some of them oscillate into different types before arrival.

Thierry Lasserre at the French Alternative Energies and Atomic Energy Commission says that improvements in the understanding of neutrino oscillation have enabled his group to explore the use of anti-neutrino detectors for “far-field” reactor monitoring. Lasserre and his colleagues have calculated how anti-neutrino fluxes fall with distance from a reactor, taking oscillations into account. They then analysed all the other sources of anti-neutrinos – some 200 nuclear power stations around the globe – to produce a map of background anti-neutrino levels.

In a final calculation, Lasserre’s group showed that a neutrino detector would need to be submerged 500 m or more underwater to shield it from cosmic rays, which would otherwise swamp the signal. The researchers estimate that, for monitoring fission reactions at a range of 100–500 km, a detector would need a scintillator containing about 10³⁴ free protons – of the order of a hundred thousand tonnes of material.
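
An order-of-magnitude sketch – using commonly quoted round numbers rather than the paper’s own inputs, and ignoring oscillations and detector efficiency – shows why so many target protons are needed: even with 10³⁴ free protons, a 1 GW thermal reactor 300 km away yields only a handful of candidate events per day.

# Rough inverse-beta-decay rate estimate (illustrative values only):
#   rate ≈ N_protons * (fissions per second * sigma per fission) / (4 pi d^2)
import math

GW_THERMAL = 1.0                      # assumed reactor thermal power, GW
FISSIONS_PER_S = 3.1e19 * GW_THERMAL  # ~200 MeV released per fission
SIGMA_PER_FISSION_CM2 = 6e-43         # typical detection cross-section per fission
DISTANCE_CM = 300 * 1e5               # 300 km expressed in cm
N_PROTONS = 1e34                      # free protons in ~1e5 tonnes of scintillator

flux_factor = FISSIONS_PER_S / (4 * math.pi * DISTANCE_CM**2)
rate_per_s = N_PROTONS * flux_factor * SIGMA_PER_FISSION_CM2
print(rate_per_s * 86400)             # ≈ 1–2 candidate events per day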

Friendly and clandestine activities

John Learned, a physicist at the University of Hawaii, US, who first suggested using neutrino detectors for global fission-reactor monitoring, believes the group has performed some “excellent” calculations, but notes that the SNIF idea is not totally new. He adds, however, “With a network of monitors one can record the activity of a group of reactors, perhaps some friendly ones, and some clandestine reactors. With various methods under development we can do a better job, even than indicated in this paper.”

Others are not so sure. Andrew Monteith of the IAEA’s Novel Technologies Unit says that the IAEA is at present only interested in neutrino detectors for near-field detection, because only that is within its current remit. “The far-field approach that’s discussed in the paper has never really been an official part of our thinking,” Monteith explains. “We’re taking it on a stage-by-stage basis, and the near-field one is certainly more realistic for us, in terms of cost and deployment.”

Expensive solution?

Julian Whichello, head of the Novel Technologies Unit, believes Lasserre’s SNIF detector could cost in the region of $100 million – almost the same as the IAEA’s entire budget for global verification of fission reactors. “This is something that’s well and truly outside of the current budget of the agency,” he says.

Still, Lasserre explains that his group’s goal was to explore the scientific possibilities rather than have political influence. “This is very futuristic,” he says. “It’s huge, it will cost a lot of money and it’s a difficult effort. Technically it would be possible in the next 30 years, but I’m not aware of any programme in the world to build such devices.”

The research is available at arXiv:1011.3850.

Europe extends key space missions

The European Space Agency (ESA) has announced it is to extend the lifetimes of seven key missions until 2014. The ESA-led probes, including the Planck microwave observatory and the Mars Express orbiter, will now take measurements for a further two years beyond 2012 – the previous end date for the missions.

The decision to operate the missions until 2014 was taken by ESA’s Science Programme Committee (SPC) at a meeting in Paris last week. Five other ESA-led missions will be extended, including the Cluster probe studying the Earth’s magnetosphere, the International Gamma-Ray Astrophysics Laboratory, the Venus Express orbiter, the X-ray observatory XMM-Newton and the Proba-2 satellite, which tests new types of space technology.

Waiting for the Sun

ESA’s contribution to four international projects, including the Cassini–Huygens mission to Saturn and the Hubble Space Telescope, will also continue until at least 2014. The other missions are Hinode, which was launched in 2006 by the Japanese Space Agency (JAXA), and NASA’s SOHO mission. The extension will allow the two probes to study the Sun during its next peak of magnetic activity, which is expected in 2013.

“It is a good day for European space science,” says David Southwood, ESA’s director of science and robotic exploration. “It is not an easy time to make such commitments but we should not doubt the wisdom of the SPC in squeezing even more return from the big investments of the past.”

Mapping the cosmos

ESA’s Planck probe, which was launched in April 2009, will map the cosmic microwave background (CMB) – a remnant of the Big Bang – in the finest detail yet. Planck carries two instruments: the high-frequency instrument (HFI) and the low-frequency instrument (LFI). The HFI will not operate beyond 2011, when it will run out of the liquid coolant needed to cool the instrument to a temperature of about 0.1 K.

However, the two-year extension will allow Planck to make further use of the LFI, which operates at about 20 K and measures the microwave sky with high sensitivity between 27 and 77 GHz. This will enable it to make better measurements of the CMB polarization. Nazzareno Mandolesi, principal investigator for the LFI, told physicsworld.com that “This [extension] will improve the sensitivity of the LFI greatly, giving us the possibility to choose a region of the sky that we can more deeply observe.”

However, some researchers warn of the need to maintain a balance between keeping operational satellites going and spending the money to build new ones instead. “It is very difficult to argue against extending their operation to keep up the flow of excellent scientific data,” says Matt Griffin of the University of Wales, Cardiff, and principal investigator of the Spire instrument on ESA’s Herschel probe that was launched together with Planck. “Eventually though, some of them are going to have to be retired to make financial room for the next generation of missions.”

Flat pack LHC


By Hamish Johnston

If you need cheering up on a dreary Monday, this cartoon has been making the rounds on the blogs…

Coming to an Ikea near you, it’s the HÄDRÖNN CJÖLIDDER.

No, it’s not a hair metal band from the 80s, it’s a flat pack version of the Large Hadron Collider.

The cartoon is in the form of assembly instructions from the Swedish giant and includes the pitfalls of poor assembly (involving a black hole) and the reward for getting it right.

The panel on the right deals with the injection of protons!

You can see the entire cartoon here.

Plasmonic sensor detects viruses

The first biosensor made from plasmonic nanohole arrays has been unveiled by researchers in the US. The device, which exploits “extraordinary optical transmission”, can detect live viruses in a biological solution.

Recent years have seen a number of viral disease outbreaks, raising fears that such viruses could rapidly spread and turn into a pandemic. Controlling future epidemics will require rapid and sensitive diagnostic techniques capable of detecting low concentrations of viruses in biological solutions.

Plasmonics to the rescue

Plasmonics is a new branch of photonics that employs surface plasmon polaritons (SPPs), which arise from the interaction of light with collective oscillations of electrons at a metal’s surface.

The new sensor was made by Hatice Altug and colleagues at Boston University and exploits SPP resonances that occur in plasmonic nanohole arrays. These are arrays of tiny holes just 200–350 nm across and spaced 500–800 nm apart on very thin noble metal films, such as those made of gold.

At certain wavelengths, the nanohole arrays can transmit light much more strongly than expected for such a collection of apertures. This phenomenon is called extraordinary optical transmission (EOT) and it occurs thanks to SPP resonances.

Measuring red-shifts

The resonance wavelength of the EOT depends on the dielectric constant of the medium surrounding the plasmonic sensor. As pathogens bind to the sensor surface, the effective refractive index of the medium increases, shifting the plasmonic resonance to longer wavelengths, explains Altug. This red-shift can then be measured to identify the presence of virus particles.
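
The size of the red-shift can be estimated with the textbook grating-coupling approximation for the (1,0) surface-plasmon resonance of a square nanohole array. The period and permittivity below are assumed illustrative values, not those of the Boston University device:

# Minimal sketch of why binding shifts the EOT peak: for a square nanohole
# array at normal incidence, the (1,0) SPP resonance is approximately
#   lambda ≈ P * sqrt(eps_metal * eps_dielectric / (eps_metal + eps_dielectric)).
from math import sqrt

def eot_peak_nm(period_nm, eps_metal, n_medium):
    eps_d = n_medium**2
    return period_nm * sqrt(eps_metal * eps_d / (eps_metal + eps_d))

P = 600.0        # array period in nm (assumed)
EPS_AU = -25.0   # real part of gold permittivity in the near-infrared (assumed)

before = eot_peak_nm(P, EPS_AU, 1.33)   # bare sensor in a watery buffer
after = eot_peak_nm(P, EPS_AU, 1.34)    # effective index raised by bound virus
print(after - before)                    # ≈ 7 nm red-shift for a 0.01 change in refractive index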

Different viruses can be detected by attaching highly specific antiviral immunoglobulins to the sensor surface. Different immunoglobulins can capture different viruses from a sample solution (see figure).

The researchers have already used their device to detect pseudo viruses that look like highly lethal viruses, such as Ebola and smallpox. “Our platform could be easily adapted for point-of-care diagnostics that can detect a broad range of viral pathogens in resource-limited clinical settings, in defence and homeland security applications as well as in civilian settings such as airports,” said team leader Hatice Altug.

Simpler and better

Team member John Connor added that the technique has many advantages over conventional virus detection methods such as polymerase chain reaction (PCR) and cell culturing. Cell culturing is a highly specialized labour-intensive process and PCR, while robust and accurate, cannot detect new or highly divergent strains of viruses – unlike the new sensor.

And that’s not all. “The detection platform is also compatible with physiological solutions (such as blood or serum) and is not sensitive to changes in the ionic strengths of these solutions. It can reliably detect viruses at medically relevant concentrations,” added team member Ahmet Yanik.

Next on the list for the researchers, who are working with the United States Army Medical Research Institute of Infectious Diseases (USAMRIID), is to make a portable version of their platform using microfluidics.

The current work was published in Nano Letters.
