
DNA helps turn graphene into a chemical sensor

A new chemical sensor based on just two materials, graphene and DNA, has been unveiled by researchers in the US. The device is simple, highly sensitive and easy to make and the scientists believe that it could be used to make an electronic “nose” capable of sensing a variety of molecules. Eventually, such sensors could be used in hospitals to detect disease, at security checkpoints to pinpoint dangerous chemicals and even by rescue teams to find lost people.

Like their biological counterparts, electronic noses are sensitive to a large number of different molecules. To achieve this, they usually consist of hundreds, or even thousands, of sensors on the same chip. Each sensor reacts to a specific molecule, just like the olfactory receptor proteins in mammal noses do. However, the need to fabricate thousands of different sensors – and the challenges of converting chemical reactions into electronic signals – can make electronic noses expensive and complicated devices.

Now, A T Charlie Johnson of the University of Pennsylvania and colleagues Ye Lu, Brett Goldsmith and Nick Kybert have come up with a simple way of sensing chemicals by showing that the electronic properties of DNA-coated graphene change when exposed to certain molecules.

Begin with graphene transistors

Graphene is a sheet of carbon just one atom thick and the team based its devices on graphene transistors made using the standard “sticky tape” method, which involves exfoliating individual atomic layers of carbon from graphite. Next, the researchers thoroughly cleaned the graphene to remove any residue on the surface that can cause unwanted signals.

Each transistor was then soaked in a solution of a specific sequence of single-stranded DNA, which self-assembles into a pattern on the surface of the graphene. DNA is made from four different bases – adenine (A), cytosine (C), thymine (T) and guanine (G) – and an example of a sequence used is GAG TCT GTG GAG GAG GTA GTC. “We only tested a few sequences but the number of possible sequences is essentially endless,” explained Johnson.

The researchers selected their DNA sequences based on the ability of the sequence to work as a chemical sensitizing agent – a role very different from the function of DNA in living organisms. Each sequence behaves a little differently on the surface of graphene because it has a different shape, pH and hydrophilic properties. This means that every sequence interacts differently with different volatile organic chemicals (VOCs).

Change in resistance

When the DNA/graphene reacts with a chemical in its environment, the resistance of the graphene changes. This change, which can be as large as 50%, can easily be measured using simple equipment. And, because this is a direct electronic measurement, it is very fast – complete responses can be seen in less than 10 seconds and the sensor recovers in about 30 seconds.

“By making an array of such DNA-graphene devices, we believe that we could exploit this property of DNA/graphene to detect explosives, chemical weapons (like nerve gas agents) or even toxic compounds that might be accidentally released at a plant,” Johnson told physicsworld.com.
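To see how such an array could work as an electronic nose, consider a toy sketch in which each vapour leaves a characteristic pattern of fractional resistance changes across a few DNA/graphene sensors, and an unknown sample is matched against these patterns. The chemicals and response values below are invented for illustration, not measured data from the Penn group.

```python
# Hypothetical response "fingerprints" for an array of three DNA/graphene
# sensors: each entry lists the fractional resistance change each sensor
# shows for a given vapour. All values are invented for illustration.
REFERENCE = {
    "chemical A": [0.42, -0.18, 0.05],
    "chemical B": [0.10, 0.31, -0.22],
}

def identify(responses):
    """Return the reference chemical whose fingerprint is closest to the
    measured responses (smallest sum of squared differences)."""
    return min(REFERENCE,
               key=lambda c: sum((a - b) ** 2
                                 for a, b in zip(REFERENCE[c], responses)))

print(identify([0.40, -0.20, 0.06]))  # chemical A
```

The point of the sketch is that no single sensor needs to be perfectly selective: it is the pattern across many differently functionalized devices that identifies the chemical.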

“One of the great things about this research is that there is nothing really expensive about any of the sensor components, given the continual advances being made in graphene production,” said Johnson.

Putting dogs out of business

The team’s next big challenge is to scale up production of its sensors. “We need to test more DNA sequences, fit more devices on a chip and make sure we understand all the signals when a big array of sensors is exposed to a mixture of chemicals,” adds Johnson. “We have high hopes for these sensors but there are still lots of hurdles to overcome. Eventually, we would like to put dogs out of the chemical sensing business, and with proper development, sensors like ours might be able to do that.”

The research paper describing this work can be seen for free on arXiv. It has also just been published in Applied Physics Letters.

Quantum code breakers succeed

By Hamish Johnston

Over my summer holidays I read the spy-thriller Enigma by Robert Harris. It’s a rip-roaring novel about code breaking at Bletchley Park during the Second World War.

While reading, I couldn’t help wondering what the Bletchley boffins would have done if the Germans had used quantum cryptography to obscure their missives.

That’s why I was very interested to read a paper in Nature Photonics that offers a way to crack messages that are kept private using quantum key distribution (QKD).

QKD allows two parties (call them Alice and Bob) to exchange an encryption key, secure in the knowledge that the key hasn’t been read by an eavesdropper (called Eve).

This guarantee is possible because the key is transmitted as quantum bits (qubits) of information. If intercepted and read by a third party, such qubits are changed irrevocably, and this signals the presence of Eve to Alice and Bob.
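The disturbance argument can be made concrete with a toy intercept-resend simulation of the BB84 protocol (a generic illustration, not a model of any commercial system): when Eve measures each qubit in a randomly chosen basis and resends it, roughly a quarter of the bits in Alice and Bob’s sifted key end up corrupted, betraying her presence.

```python
import random

def bb84_qber(n=20000, eavesdrop=False, seed=1):
    """Toy intercept-resend simulation of BB84 (illustrative only).

    Returns the quantum bit error rate (QBER) Alice and Bob observe
    on their sifted key. Without an eavesdropper the QBER is zero;
    an intercept-resend Eve corrupts ~25% of the sifted bits.
    """
    rng = random.Random(seed)
    errors = sifted = 0
    for _ in range(n):
        bit = rng.randint(0, 1)          # Alice's raw key bit
        a_basis = rng.randint(0, 1)      # 0 = rectilinear, 1 = diagonal
        state = (bit, a_basis)
        if eavesdrop:
            e_basis = rng.randint(0, 1)
            # Measuring in the wrong basis yields a random outcome;
            # Eve then resends the qubit prepared in her own basis.
            e_bit = state[0] if e_basis == state[1] else rng.randint(0, 1)
            state = (e_bit, e_basis)
        b_basis = rng.randint(0, 1)
        b_bit = state[0] if b_basis == state[1] else rng.randint(0, 1)
        if a_basis == b_basis:           # sifting: keep matching bases
            sifted += 1
            errors += b_bit != bit
    return errors / sifted

print(bb84_qber())                 # 0.0
print(bb84_qber(eavesdrop=True))   # ~0.25
```

The blinding attack described below is dangerous precisely because it sidesteps this check: Eve controls the detectors rather than measuring the qubits, so no error rate appears.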

One major problem facing the makers of any quantum information system is that no technology is “perfect” – and clever hackers could exploit differences between how a device actually operates and the performance assumed by system designers.

Many such “loopholes” have been identified and either fixed by changing the protocol or shown to have only a small chance of success.

But Vadim Makarov of the Norwegian University of Science and Technology and colleagues have discovered “how Eve can attack the systems with off-the-shelf components, obtaining a perfect copy of the raw key without leaving any trace of her presence”.

Chilling words…and how the “plug-and-play” system works seems like something out of science fiction.

As far as I can tell, a bright light is used to “blind” the highly-sensitive photon detectors used by Alice and Bob. This allows Eve to seize control of the dazzled correspondents’ QKD kit using a sequence of light pulses. Unaware that this is going on, Alice and Bob deliver the keys straight into Eve’s hands.

The systems in question are commercial QKD products made by ID Quantique and MagiQ Technologies. According to Makarov and colleagues, the MagiQ 5505 system has been discontinued and ID Quantique has implemented countermeasures.

Makarov works in the quantum hacking group at Trondheim – which I would like to think is a Nordic Bletchley Park!

You can read about the research at doi:10.1038/nphoton.2010.214.

Pushy hydrogen boosts molecular microscopy

When physicists in Germany discovered a simple way of using a scanning tunnelling microscope (STM) to take images of molecules at the atomic scale for the first time, the technique looked set to make these instruments much more useful for studying molecular structure. But before the method could be used with confidence by the wider scientific community, the researchers involved needed to solve the mystery of why it seemed to work so well.

Now, two years on, the physicists – based at the Forschungszentrum Jülich and at the University of Osnabrück – have shown that the improvement is caused by Pauli repulsion. This is a short-range force that arises from the fact that two or more electrons cannot occupy the same quantum mechanical state.

Hydrogen on the tip

The technique, developed by Stefan Tautz and colleagues, is called scanning-tunnelling hydrogen microscopy (STHM) and involves placing a single molecule of hydrogen right at the metal tip of a conventional STM – something that is easily done by cooling the tip to about 10 K and exposing it to hydrogen gas.

The tip is then brought into contact with the molecule of interest, which is fixed to a metal surface. A small voltage applied between tip and sample causes an electrical current to flow between the two. The tip is raster-scanned across the sample to measure the current as a function of position, creating an image of the molecule.

Pauli repulsion

In conventional STM, the current depends only on the molecule’s valence electrons, which gives little insight into the structure of the molecule. But with hydrogen on the tip, the STM can map the molecule’s total electron density (TED) – which is essentially the molecular structure.

In this latest work, Tautz and colleagues show that if the tip is maintained at a constant height and scanned over the sample, Pauli repulsion between electrons in the hydrogen and electrons in the molecule tends to push the hydrogen into the metal tip. When the tip is over areas of high electron density, the hydrogen is pushed further than when the tip is over regions of low electron density.

When the hydrogen is pushed into the metal tip, conduction electrons are forced away from the tip – another consequence of Pauli repulsion. The result is a drop in the current flowing between the hydrogen and tip.

Seeing intermolecular bonds

In their latest study the researchers looked at the hydrocarbon molecule PTCDA, which forms a herringbone pattern on the surface of gold. As well as imaging individual PTCDA molecules, the team were able to see the very weak bonds between the molecules for the first time.

“The hydrogen molecule is both sensor and signal transducer”, explains Tautz, who adds that the success of the technique also relies on the fact that there is no chemical bonding between the hydrogen and the tip. In other words, the technique could also work using noble gas atoms such as helium and neon.

Now that the physics underlying STHM is understood, Tautz believes that it could be used to study and identify complex molecules that have never been seen before. However, one drawback with the technique is that it only works on “flat” molecules that can be attached to a substrate.

Beautifully simple

Despite that problem, STM expert Markus Ternes from the Max Planck Institute for Solid State Research in Stuttgart, Germany, described the findings as “fantastic”, adding that they put the technique on “solid ground” and will motivate researchers to use STHM as an analytical tool. Ternes, who was not involved in the work, told physicsworld.com that STHM is “beautiful because of its simplicity”.

The work is reported in Physical Review Letters.

Higgs, Higgs, glorious Higgs

By Hamish Johnston

“What actually goes on
when hunting a boson.”

If that’s the sort of rhyming couplet that tickles your fancy, you will love this music video from CERN.

It features the CERN choir performing “The particle physicists’ song”, a variation on the Flanders and Swann classic “The hippopotamus song”. The new words are by Danuta Orlowska, who is a clinical psychologist in London.

Other memorable lines include:

“They all thought of SUSY with love in their eyes”

and “Those physics professors were no idle guessers”.

And if you think you can do better than that, the choir suggests you e-mail your own verses to cern.song@hotmail.co.uk

If particle physics isn’t your bag and you’d prefer a song about nuclear power, then check out this reworking of “Yankee doodle dandy” from the American Nuclear Society. It’s from 2002 but a bit of a gem.

An aurora in the laboratory

The planeterrella in action. Courtesy: Guillaume Gronoff

By Margaret Harris

Earlier this month, lucky observers in the northern reaches of Europe and North America saw an unusually big burst of aurora activity after a large coronal mass ejection from the Sun collided with the Earth’s magnetic field, sparking a geomagnetic storm on 3–5 August. But the glowing plasma shown here does not come from the Sun; instead, it is produced by a tabletop device called a “planeterrella” – the round object in the middle of the photo.

Inspired by the early 20th century Norwegian physicist Kristian Birkeland, who used a similar device to explain the aurora borealis, or Northern Lights, the modern planeterrella is the brainchild of Jean Lilensten of the Laboratoire de Planetologie de Grenoble, France. You can read a bit more about how it’s constructed here.

In July, Lilensten won the first Europlanet Prize for Excellence in Public Engagement with Planetary Science for developing planeterrellas that can be used to demonstrate the workings of planetary aurorae for members of the public. You can see him and his prizewinning device a bit better in the photo below, which was taken by Cyril Simon.

Courtesy: Cyril Simon

CERN faces €250m budget cuts

The CERN particle-physics lab near Geneva is to cut around CHF330m (€250m) from its budget for 2011–2015. The cut, which was announced by CERN boss Rolf-Dieter Heuer yesterday, will require the lab to scale back research into future particle accelerators. However, Heuer insists that the reduction will not affect the operation of the Large Hadron Collider (LHC) or force CERN to lose any of the 2000 or so staff it currently employs. CERN’s council is expected to meet on 16 September to approve the new plan.

The €250m cut is most likely to hit future upgrades and accelerators, which will now “proceed at a slower pace”. Also cut in the new budget – dubbed the medium-term plan – is the operation of CERN’s accelerators during the planned year-long shutdown of the LHC in 2012, when the lab will prepare the LHC to go straight to maximum-energy 14 TeV collisions. A few accelerators had been slated to run during the shutdown period to study new detector techniques, but under the new plan none of CERN’s accelerators will operate in 2012.

“All our member states are making significant budget cuts at the national level, and it is difficult to argue why intergovernmental organizations such as CERN should be exempt,” says Heuer in a memo to staff. “I firmly believe that basic science budgets must be protected even in, and perhaps particularly in, times of economic downturn. But as a publicly funded body, we have to be realistic.”

Future plans

Worst hit could be work on the Compact Linear Collider (CLIC) – CERN’s own blueprint for a future electron–positron collider – that could be built once the LHC reaches the end of its life. Although research on CLIC and a “higher-energy proton machine” will continue, CERN’s contribution to CLIC will be held at around €16m and not be increased as was previously proposed. “In the present financial and political climate, I think it was inevitable that CLIC would be among the programmes to suffer,” particle theorist John Ellis told physicsworld.com.

Ellis told physicsworld.com that resources already made available by CERN will, however, allow an upgrade to the CLIC test facility to go ahead. But the budget cut means that an engineering demonstration facility called CLIC0, which would have to be built before CLIC could be approved, will not now go ahead unless external funds are found. CLIC0 is supposed to demonstrate beam acceleration to around 6.5 GeV.

Ellis notes that the recent decision to open CERN membership to countries outside Europe could mean that the extra funds are instead provided by these nations.

Belt tightening

“The cuts at CERN are very depressing news,” says Tim Gershon, a particle physicist from Warwick University in the UK who works on the LHCb experiment at CERN. “Although CERN’s management has succeeded in finding a way to make the savings without any permanent scientific loss, the productivity of the laboratory will be significantly slowed.”

Others, however, are taking the news as an expected consequence of countries around Europe tightening their belts. “In the current financial climate these cuts are not unexpected and while they will slow down some of the longer-term projects they will not put in jeopardy any of CERN’s scientific objectives,” says Mark Lancaster, a particle physicist from University College London who works on the Compact Muon Solenoid detector at the LHC.

Supermassive black holes spawned by galactic merger

Lurking at the centre of nearly every galaxy and gobbling up stars in their vicinity, supermassive black holes are a truly menacing feature of the universe. Now, an international team of astronomers claims to have solved the mystery of how legions of these galactic monsters were born during the early history of the universe.

Supermassive black holes (SMBHs) are thousands or even millions of times more massive than our Sun. We know that they exist from the impact they have on their surroundings: causing nearby stars to orbit galactic centres at breakneck speeds, for instance. Once SMBHs reach a critical size they can transform into quasars, which are extremely bright objects as small as a star but as luminous as an entire galaxy. But the relative abundance of quasars in the first billion years of the universe has puzzled astrophysicists.

This is because the “seed” for a black hole is believed to take at least 10⁸ years to form and then several more billion years to grow into an SMBH, which in some cases goes on to become a quasar. This was based on the assumption that SMBHs form in a similar way to stellar-mass black holes, marking the final phase in the lifespan of massive stars that have exhausted all their fuel for fusion.

Galaxies merging

Lucio Mayer at the University of Zurich, working with colleagues in Chile and the US, now offers an alternative explanation for how these SMBHs formed. The group proposes that the right conditions for black hole formation could have been created by the merging of two or more galaxies during their primordial stages when they were still emerging from vast clouds of dust.


Using computer simulations, involving more than 3 million computing hours, Mayer’s team found that when two young galaxies come together it can cause dust to spiral rapidly towards a confluence at the centre. For galaxies above a critical size, more than 100 million solar masses of dust can be channelled towards the centre within just 100,000 years, creating a dense cloud in the centre.

“The high concentration of gas at the disk-like nuclei of the interacting galaxies causes tidal forces that cause the gas to effectively lose angular momentum and spiral to the centre,” explains Mayer. Shortly after this, the core of the cloud collapses to form the seed of a black hole, which after 10⁸ years grows into a supermassive black hole of a billion solar masses.

Contradicts prevailing wisdom

If the findings are accepted by the community, they will overturn the prevailing wisdom among astronomers that galaxies evolved hierarchically – that is, gravity drew small bits of matter together first, and those small bits gradually came together to form larger structures.

“Our result shows that big structures – both galaxies and massive black holes – build up quickly in the history of the universe,” says Stelios Kazantzidis, another member of the team based at Ohio State University. “[The findings] add a new milestone to the important realization of how structure forms in the universe.”

The model does not, however, explain how smaller galaxies such as our own have evolved to contain an SMBH at their centre. In the case of the Milky Way, Mayer speculates that a similar gas-consuming process could have occurred later in its history when it had reached a critical mass after three to four billion years.

Further testing required

Andrew Jaffe, an astrophysicist at Imperial College London, agrees that it would be useful to extend this model to cover a wider range of galaxy types. “As a further test of their modelling, it would be very nice to see whether they reproduce the dynamics of merging galaxies in other, more well-studied, situations – such as mergers of modern-day galaxies.”

The research may aid astronomers who are searching the skies for gravitational waves, which would provide direct evidence of general relativity. According to Einstein’s theory, any ancient galaxy mergers would have created massive gravitational waves – ripples in the space–time continuum – the remnants of which should still be visible today.

Over the coming decade, several space-based missions have been planned to search for these elusive phenomena using interferometry equipment. “As the authors correctly point out, the way their black holes form from direct collapse has a profound impact on the gravitational wave signal expected in missions such as LISA,” says Francesco Haardt, an astronomer at the University of Insubria.

The Laser Interferometer Space Antenna (LISA) is a joint mission between NASA and the European Space Agency scheduled for launch in 2025.

This research is described in Nature.

Pakistan flood disaster imaged by NASA satellite

Courtesy: NASA/GSFC/LaRC/JPL, MISR team

By James Dacey

The flooding in Pakistan triggered by heavy monsoon rains at the end of July has killed more than 1200 people and affected more than 15 million others across the country, according to government estimates. The country’s president, Asif Ali Zardari, said yesterday that it will take up to three years to recover from the natural disaster, as quoted by the Associated Press.

This pair of images shows the extent of the floodwaters within the central and southern parts of Pakistan: the image on the left is from 8 August 2009 and the one on the right from 11 August 2010. They were captured by the nadir (vertical-viewing) camera on the Multi-angle Imaging SpectroRadiometer (MISR) instrument aboard NASA’s Terra spacecraft, which was launched in 1999.

The Indus River can be seen meandering across the image from upper right to lower left. But in the later view, flooding can be seen clearly in much of the surrounding region, particularly in the Larkana District to the west of the river. Each image is 300 × 425 km and false colours have been employed to enhance the contrast: water appearing in shades of blue and cyan; vegetation as red; clouds as white; and sediment as tan.

Since this image was captured, the floodwaters have spread further south into the Sindh province, which lies in the bottom half of this image. The United Nations warned yesterday of thousands more imminent evacuations, along with the threat of waterborne diseases, food shortages and lack of shelter.

Solar system older than we thought


The solar system is up to two million years older than previously thought, according to a pair of researchers in the US. Their work, which is based on dating a meteorite found nestled in the Sahara desert, also provides clues about the birth of the solar system, lending weight to the theory that a nearby supernova explosion triggered its formation.

Most meteorites, other than the ones known to come from the Moon or Mars, are relics from the formation of the solar system. This latest example, labelled “Northwest Africa 2364”, has a mass of 1.5 kg and was purchased by a private dealer from a local in Morocco in 2004. Part of it ended up in the hands of Audrey Bouvier at Arizona State University in the US who found it to be 4568.2 million years old – the oldest solar system object ever discovered, and 0.3–1.9 million years older than the previously accepted age of the solar system.

Bouvier, along with co-author Meenakshi Wadhwa, also at Arizona State, analysed several radioisotope chains associated with elements found in the sample. Using the decays of 238U–206Pb and 235U–207Pb, which have half-lives of ~4.47 Gyr and ~704 Myr respectively, the pair were able to pin down the age of the space rock. “Radiogenic decay is at a constant rate over the aeons, which can be measured by physically counting particles,” explained Bouvier. This age was fine-tuned using the decay of 26Al–26Mg, which has a much shorter half-life of ~0.73 Myr.
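The dating rests on the exponential decay law N(t) = N₀ × 0.5^(t/t½). A minimal sketch of the arithmetic (a generic illustration, not the authors’ full isochron analysis) shows why reading a long-lived and a shorter-lived chain together constrains the age so tightly:

```python
def remaining_fraction(t_myr, half_life_myr):
    """Fraction of a radioisotope surviving after time t,
    from the decay law N(t) = N0 * 0.5**(t / half_life)."""
    return 0.5 ** (t_myr / half_life_myr)

# After 4568.2 Myr, long-lived 238U (half-life ~4470 Myr) is only about
# half gone, while 235U (half-life ~704 Myr) is almost entirely decayed,
# so the two U-Pb clocks respond very differently to the same age.
print(remaining_fraction(4568.2, 4470))  # ~0.49
print(remaining_fraction(4568.2, 704))   # ~0.011
```

Because the two uranium chains decay at such different rates, a small change in the assumed age shifts the two lead abundances by very different amounts, which is what makes the combined measurement so precise.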

In the beginning, there was iron

The discovery of Northwest Africa 2364 is also helping to firm up scientists’ understanding of how our local neighbourhood formed. Conventional theory used to suggest that the Sun and its family of planets formed in near isolation, far away from other stars. However, in the last five years researchers have begun to suggest that this might not be the case due to high amounts of daughter isotopes from the decay of 60Fe found in previous meteorite samples. 60Fe can only be formed in the core-burning stage at the end of a star’s life before it goes supernova. This means that if 60Fe was present in the early solar system it had to be seeded there by a nearby supernova and the Sun couldn’t have formed in isolation after all. Bouvier and Wadhwa’s result only enhances this possibility.

“This research pushes back the start of the solar system by around a million years, so if you work backwards using radioactive decay there must have been a higher concentration of 60Fe at the beginning than previously thought, by about a factor of 2,” explained Jamie Gilmour, who researches solar system formation at the University of Manchester. “This makes it more necessary for a supernova to have seeded the solar system with it,” he added.
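Gilmour’s back-of-the-envelope reasoning can be sketched as follows. The required initial abundance of a short-lived isotope grows by a factor of 2^(Δt/t½) when the formation epoch is pushed back by Δt; the ~1.5 Myr half-life used here for 60Fe is an assumed value (the figure long in use at the time, since revised upwards), and the push-back time is illustrative.

```python
def initial_enhancement(push_back_myr, half_life_myr):
    """Factor by which the inferred initial abundance of a short-lived
    isotope must grow when the formation epoch moves back in time."""
    return 2 ** (push_back_myr / half_life_myr)

# Pushing the clock back by roughly one half-life of 60Fe
# (assumed here to be ~1.5 Myr) doubles the initial abundance required.
print(initial_enhancement(1.5, 1.5))  # 2.0
```

The larger the implied initial 60Fe abundance, the harder it is to explain without a nearby supernova seeding the young solar system.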

The findings are described in Nature Geoscience.

Antenna directs light at the nanoscale


Nanotechnology offers the promise of a new wave of sensors and optical components, but the tiny sizes involved can make it difficult for users to exchange information with these devices. Now, researchers in Spain have demonstrated a novel solution to this problem that involves fixing an “antenna” to nanoscale objects that can send and receive optical data with high precision.

physicsworld.com first reported this idea earlier in the year when researchers in Japan announced that they had created a nanoscale version of the famous “Yagi Uda” antenna. This device was invented in the 1920s to overcome the loss of radio-signal quality over distance. It was used by the British in radar during the Second World War and went on to become the standard antenna for transmitting and receiving television signals.

Key to the classic design is its “parasitic elements”, made from strips of electrical conductors. These elements induce currents in the presence of a radio signal, which, in turn, generate secondary radio signals that can be transmitted in the same direction as the original signal. The same principle works in reverse so the antenna can boost a signal when receiving information.

Honey I shrunk the antenna

In the nanoscale version of the Yagi Uda antenna, the conducting strips are replaced by an array of gold nanorods. The nanorods are aligned in such a way that incoming light triggers plasmons in the gold surface – collective wavelike motions of billions of electrons – which resonate and emit secondary light in the same direction. The researchers in Japan argued that their device could lead to new sensors – providing that it could be coupled to light-emitting particles.

This feat has now been achieved by Niek van Hulst and colleagues at the Institute for Photonic Sciences (ICFO) in Barcelona, together with researchers at the Catalan Institute for Research and Advanced Studies (ICREA). They fabricated a number of nanoscale Yagi Uda antennas with parasitic elements made from gold, using lithography to etch the devices onto a glass substrate. Each antenna was 830 nm long in total, with individual feed elements just 145 nm long and separated by 175 nm.

To integrate the antennas with particles, Van Hulst’s team then used lithography a second time to decorate the substrate with quantum dots – nanosized pieces of semiconductor in which electrons (or holes) are confined in 3D such that their electronic properties can be controlled by changing the size of the dots. By positioning the quantum dots close to the gold feed elements, the researchers were able to couple the quantum dots with the near field of the nanoantenna.

A narrow angular cone

With this configuration, Van Hulst’s team was able to show that light emitted from the quantum dots, in the form of luminescence spectra, was being transmitted by the Yagi Uda antennas in a narrow angular cone. “The direction of the interaction between light and matter can now be controlled in an asymmetric way,” says Alberto Curto, a member of the Barcelona-based team. “This step forward in the field of nano-optics has potential applications in quantum optical technologies and the detection of minute amounts of chemicals, for example.”

The researchers also show that it is important to tune the system by creating parasitic elements that match the luminescence spectrum. “We fabricated various antennae of different dimensions and show that resonant tuning between quantum dots and the antenna is important to get the right directivity, just like tuning the classical TV antenna,” Van Hulst told physicsworld.com.

Yutaka Kadoya – a member of the Japanese team that published earlier in the year – is impressed by the speed of this new development and views it as a victory for experimental research. “Nowadays computer simulation is widespread and easily used while the experiments have become tougher and tougher. I think actual progress cannot be expected without experimental investigations.”

Kadoya believes that the next stage in the research is to home in on the quantum dot to investigate the luminescence dynamics.

This research is described in Science.

Copyright © 2026 by IOP Publishing Ltd and individual contributors