
Carbon sheets offer cool solution for computers

Two-dimensional sheets of carbon atoms could be used to cool computer chips faster than any other material used in electronics today. That is the view of researchers in the US and France, who have measured graphene’s thermal conductivity when placed on silica – a commonly employed substrate in electronic devices. They found that graphene conducts heat more than twice as well as copper wires, which are routinely used in electrical interconnects, and 50 times better than silicon thin films.

Graphene consists of a single, flat sheet of carbon atoms arranged in a honeycomb lattice. Since the material was first created in 2004, its unique electronic and mechanical properties have amazed researchers, who have been eyeing up graphene for a host of device applications. In particular, it could be used to make ultrafast transistors because the electrons in graphene behave like relativistic particles with no rest mass, which means that they whizz through the material at extremely high speeds.

Even though all-graphene electronics might still be a distant dream, the material could be used to dissipate the unwanted heat generated by conventional silicon circuits, which can otherwise slow electronic devices and make them unreliable. Previous measurements had shown that free-standing graphene boasts a thermal conductivity of up to 5000 Wm–1K–1 at room temperature – higher even than diamond, which is nature’s best heat conductor. However, in practical applications, graphene needs to be interfaced with composites or other substrates, such as silica (SiO2).

Measurements by Li Shi at the University of Texas at Austin, along with colleagues at Boston College and the French atomic-energy commission (CEA), now show that graphene in contact with SiO2 has a thermal conductivity of 600 Wm–1K–1. Although this figure is not as high as freestanding graphene, it still outdoes that of bulk copper, which has a thermal conductivity of around 400 Wm–1K–1 and is widely used to cool computer chips, and that of copper thin films (typically below 250 Wm–1K–1). The conductivity is reduced because phonons – quantized vibrations of the crystal lattice – “leak” across the graphene-substrate interface.

Novel approach

Shi and colleagues obtained their results using a novel method to measure the thermal conductivity of graphene supported on a SiO2 beam. The researchers began by attaching a single graphene layer to the SiO2 by mechanically shaving off (or “exfoliating”) layers. They then measured the thermal conductance of the combined structure before etching away the graphene layer and re-measuring the conductance of just the SiO2.

The difference between the two values gave the thermal conductance (in watts per kelvin) of the graphene alone, which was then used to determine the thermal conductivity (in watts per metre per kelvin) by taking into account its length, width and thickness.
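As a rough illustration of that conversion, the sketch below turns a conductance into a conductivity; the conductance value and sample dimensions are purely hypothetical, chosen only to show the arithmetic, and are not the team's measured figures.

```python
# Thermal conductivity from thermal conductance: kappa = G * L / (w * t).
# The conductance and dimensions below are hypothetical, chosen only to
# illustrate the unit conversion; they are not the team's measured values.

def conductance_to_conductivity(G, length, width, thickness):
    """All arguments in SI units; returns kappa in W m^-1 K^-1."""
    return G * length / (width * thickness)

kappa = conductance_to_conductivity(
    G=8.0e-8,           # measured conductance, W/K (hypothetical)
    length=5e-6,        # 5 um strip length (hypothetical)
    width=2e-6,         # 2 um strip width (hypothetical)
    thickness=0.335e-9  # one graphene layer, ~0.335 nm
)
print(f"{kappa:.0f} W/m/K")  # → 597 W/m/K
```

With these illustrative numbers the conductivity comes out close to the ~600 W m–1K–1 value reported for supported graphene.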

The team also developed a theoretical model to understand the measurement results, which showed that out-of-plane “flexural” vibration modes in the graphene are important for the material’s high thermal conductivity. These vibrations are suppressed when graphene is interfaced to another material.

The researchers are now looking at how the support layer affects graphene that has more than one layer of carbon atoms (so-called “few-layer” graphene). “We expect that the interface interaction effect will become weaker for the top layers in supported few-layer graphene,” Shi told physicsworld.com. “This means that the effective thermal conductivity of supported few-layer graphene could be even higher than that of supported, monolayer graphene.”

Ravi Prasher of chip manufacturer Intel in Chandler, Arizona, who was not involved in the work, says that the new study is “remarkable” because it combines thermal, structural and mechanical phenomena into one theoretical framework. “[It] is a crucial first step towards explaining the thermal conductivity of supported graphene,” he says.

The work was published in Science.

Ice mission blasts off

A satellite that will monitor changes in land- and sea-ice levels was launched from the Baikonur Cosmodrome in Kazakhstan at 15:57 CET today. The €135m CryoSat-2 satellite, built by the European Space Agency (ESA), will be used to discover the extent to which the Antarctic and Greenland ice sheets are contributing to global sea-level rises and will also measure tiny variations in the thickness of ice floating in the polar oceans.

Around 90% of the thickness of floating sea-ice lies below sea level. This part of an ice floe is known as the “draft”. The aim of CryoSat-2 is to measure the thickness of the remaining 10% of the floe that sits above sea level, known as the “freeboard”. Knowing the height of the freeboard then allows researchers to work out the total sea-ice thickness and estimate the floe’s mass.
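The buoyancy calculation behind this can be sketched as follows, assuming typical textbook densities for sea ice and seawater; these are illustrative values, not the mission's calibration figures.

```python
# Floating ice obeys Archimedes' principle: rho_ice * total = rho_water * draft,
# so total thickness = freeboard / (1 - rho_ice / rho_water).
# Densities are typical textbook values, not CryoSat-2 calibration figures.

RHO_ICE = 917.0        # kg/m^3, sea ice (approximate)
RHO_SEAWATER = 1025.0  # kg/m^3

def total_thickness(freeboard_m):
    return freeboard_m / (1.0 - RHO_ICE / RHO_SEAWATER)

# With these densities about 89% of the floe lies below the waterline,
# so a 30 cm freeboard implies nearly 3 m of total ice thickness.
print(f"{total_thickness(0.30):.2f} m")  # → 2.85 m
```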

Weighing 700 kg, CryoSat-2 will orbit the Earth around its poles 720 km above sea level. It will measure the height of the freeboard using its main instrument – the Synthetic Aperture Interferometric Radar Altimeter (SIRAL) – which sends a burst of microwave pulses towards Earth every 50 microseconds. The returning echoes are then used to measure the distance between the satellite and the sea-ice and to construct a 3D map.
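A back-of-the-envelope calculation shows the timing precision this radar-altimetry approach demands; the 3 cm height change used below is illustrative, not a mission specification.

```python
# Radar altimetry: distance from the two-way travel time of each pulse,
# d = c * t / 2. The 3 cm height change below is illustrative only.

C = 299_792_458.0  # speed of light, m/s
ALTITUDE = 720e3   # orbital altitude, m

round_trip = 2 * ALTITUDE / C
print(f"echo delay: {round_trip * 1e3:.1f} ms")  # → echo delay: 4.8 ms

# A 3 cm change in surface height shifts the delay by only ~0.2 ns,
# which is why centimetre-level ice measurements demand precise timing.
print(f"delay shift: {2 * 0.03 / C * 1e9:.2f} ns")  # → delay shift: 0.20 ns
```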

Over the next three years, CryoSat-2 will measure changes in the thickness of sea-ice to an accuracy of a few centimetres to detect whether the ice is thinning or getting thicker. CryoSat-2 will use the same technique to measure changes to the thicknesses of huge land-ice sheets such as those in the Antarctic and Greenland.

“We are very much looking forward to delivering the data the scientific community so badly needs to build a true picture of what is happening in the fragile polar regions,” says physicist Richard Francis, project manager of CryoSat-2.

CryoSat-2 – the name comes from the Greek kryos meaning cold or ice – is the satellite’s second incarnation after CryoSat-1 was destroyed by a launch failure five years ago. In 2006 the ESA decided to rebuild the satellite and launch it in 2009 but further delays have postponed the launch until today.

The mission is the third of seven Earth-monitoring satellites that form the ESA’s Earth Explorer programme. The first in the series, the Gravity Field and Steady-state Ocean Circulation Explorer (GOCE) was launched in March last year, while the second, the Soil Moisture and Ocean Salinity (SMOS) spacecraft, was launched last November. Researchers expect CryoSat-2 to relay its first data in only a few days’ time.

Iconic Japanese antenna shrunk to the nano-scale

Researchers in Japan have built a nano-scale version of the classic antenna used to pick up TV signals in millions of homes around the world. The device, which is based on the “Yagi-Uda” antenna, could lead to novel sensors by manipulating the light emitted by individual molecules.

The Yagi-Uda antenna was invented by Japanese scientists in 1926 to overcome the degradation of radio signals transmitted over long distances. It was used by the British in radar systems during the Second World War and went on to become the standard antenna for transmitting and receiving television signals.

Key to the classic design are “parasitic elements” made from strips of electrical conductors. These elements carry induced currents in the presence of a radio signal, which, in turn, generate secondary radio signals transmitted in the same direction as the original. The same principle works in reverse, allowing the antenna to boost incoming signals when receiving.

Yutaka Kadoya and his colleagues at Hiroshima University have adapted the Yagi-Uda design to control light at the nano-scale by replacing the conducting strips with an array of five gold nanorods. The nanorods are aligned in such a way that incoming light triggers plasmons – collective wavelike motions of billions of electrons – in the gold surface, which resonate and emit secondary light in the same direction. The researchers demonstrated the technique for red light, which has a wavelength of 662 nm.

“The scale does not affect the phenomena due to the scaling principle in Maxwell’s equations – the reduction in size simply leads to a reduction of the relevant wavelength of the waves,” says Kadoya.

Geoffroy Lerosey, a materials scientist at the Institut Langevin – ESPCI, in France, is impressed that the researchers have been able to adapt a macro-scale design and apply it to the nano-scale. “Thinking about downscaling a system is not that hard,” he says. “But doing so and being able to experimentally demonstrate it is definitely much more complicated, given the difficulties that one experiences with nanostructures.”

The researchers believe that controlling light signals at this scale could lead to new sensors that couple to light-emitting particles, such as fluorescent molecules, which are already used for imaging and detecting in the health sciences. They intend to develop the research by seeking ways of integrating fluorescent molecules into their device.

This research is published in Nature Photonics.

Calling all citizen scientists!

Artist’s impression of ε Aurigae. Credit: NASA/JPL–Caltech

By James Dacey

Citizen science projects sound great in principle, but I often wonder just how much Joe Bloggs (or Joe Schmoe) can really contribute to scientific understanding, and whether we can really help the professionals without selling our homes to fund all the specialist gear.

Well, here’s a project where we apparently can.

Citizen Sky was launched last year by the American Association of Variable Star Observers to help solve a mystery that has puzzled astronomers for the past 175 years.

The mission is for the public to help the professionals to work out what’s going on with a rare star system known as epsilon Aurigae, which could hold important clues about stellar structure and how stars evolve. Epsilon Aurigae is visible with the naked eye even in the most light-polluted of cities, so you don’t even need a telescope, say the organizers.

Let me give you the back story.

Stars don’t like to be alone: an estimated 60% of stars are found in binary or multiple star systems. However, from our vantage point here on Earth, only 0.2% of these systems are eclipsing – that is, a star darkens from our point of view when a second star or other astronomical body sweeps across our line of sight. These eclipses are very useful because they allow astronomers to determine many things about the stars, such as their temperature, luminosity and even the presence of distant planets.

But epsilon Aurigae is rarer still because, since its discovery in 1821, astronomers have never been quite sure what is doing the eclipsing. Eclipses in this system occur roughly once every 27 years and last for the unusually long time of two years. This suggests that the eclipsing object is larger than the stars themselves; moreover, one of the stars is permanently hidden over a range of wavelengths.

The latest eclipse of epsilon Aurigae began last August, and a paper published yesterday in Nature reports the first direct images of the eclipse. The researchers suggest it is caused by a large dust cloud passing in front of the binary system. However, there are still many questions regarding the properties and form of this dust cloud – and this is where the citizen science comes in.

This July and August will see the middle of the eclipse and this is the best time to pin down the properties of the dust cloud, which the researchers believe is the remnants of an accretion disc. Astronomers want all the observations they can get, including information from keen amateurs. “With a good pair of eyes and a finder chart – which we will give you – you can monitor this eclipse,” says Rebecca Turner, the Citizen Sky project manager.

You can hear more about the kind of data they are looking for on the project website.

Could we have predicted L'Aquila?

L’Aquila. Credit: RaBoe/Wikipedia

By James Dacey

On the anniversary of the earthquake that devastated L’Aquila in central Italy, the Guardian has run an intriguing article about the scientist at Gran Sasso laboratory who apparently predicted the disaster, but was unable to warn the public because of a gagging injunction.

Giampaolo Giuliani, a scientific technician living near L’Aquila, has long argued that imminent quakes can be predicted by rising levels of radon gas emissions near the fault zone.

Giuliani was so convinced of the truth of this theory that he built two radon detectors at his own expense and positioned them along the fault zone.

On Sunday 5th April, 2009, Giuliani became deeply anxious that a large quake would strike within 24 hours, and he made urgent calls to his friends and colleagues.

Tragically, one week earlier the Italian authorities had issued a gagging injunction against Giuliani, following a previous false alarm over an earthquake in the region to the south-east of L’Aquila.

At 3.32 a.m. on 6th April, an earthquake measuring 6.2 on the Richter scale struck L’Aquila, killing 307 people and injuring 1500 others.

One telling quote in the Guardian article is attributed to Walter Mazzochi, deputy leader at Italy’s National Institute of Geophysics and Volcanology: “The things Giuliani has presented are at a very low level, from a scientific point of view. I didn’t see any evidence that the method could work.”

This indictment probably says a lot about the standing of earthquake prediction, which is clearly an underdeveloped area of science.

The proposed link between radon levels and seismic slip, however, will hopefully get a fair test now that Nobel laureate Georges Charpak has recently unveiled a new detector that could be deployed en masse along fault lines to monitor levels of the gas.

Tiny water desalination device could help aid efforts

Each year, two million people – mostly children – die from water-borne diseases, such as diarrhoea and cholera, according to the United Nations. The particularly vulnerable include those people trapped in disaster-stricken areas, such as victims of the recent earthquake in Haiti, who struggled to get clean water after damage to water resources. However, a technique that produces drinking water from seawater, using just small amounts of energy, could lead to a portable technology that could help to address this dire situation.

The technique, developed by researchers in the US and Korea, manages to desalinate water using a simple electronic system on a tiny chip. The process starts by passing water along a tiny channel on a polymer chip – just 500 µm wide – until it reaches a junction, where it splits into two separate channels. An electric potential applied along one of these channels drags the salt ions into it as brine, while desalinated water flows down the second channel under the force of gravity.

To demonstrate the technique, the researchers created one chip that successfully converted seawater, with a salinity of 30,000 mg/l, into fresh water with a salinity of less than 600 mg/l, which meets the international standards for water purity.

Highly efficient

The technique, dubbed ion concentration polarization (ICP), compares favourably with established methods of water desalination in terms of energy consumption, requiring less than 3.5 Wh/l. Reverse osmosis, for example, which works by forcing seawater through a membrane at high pressures to capture the salt, requires 10–15 Wh/l. And electrodialysis, which works by transporting salt ions from one solution to another by means of ion-exchange membranes, requires 5 Wh/l.
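Using the per-litre figures above, a quick comparison shows what the energy difference means for a day's drinking water; the 20 litres/day household requirement is an assumption made here purely for illustration.

```python
# Energy per day of drinking water for each method, using the per-litre
# figures quoted above; the 20 litres/day requirement is an assumption
# made here for illustration.

METHODS_WH_PER_LITRE = {
    "ICP": 3.5,               # upper bound quoted for the new technique
    "electrodialysis": 5.0,
    "reverse osmosis": 12.5,  # midpoint of the quoted 10-15 Wh/l range
}
LITRES_PER_DAY = 20

for method, wh_per_litre in METHODS_WH_PER_LITRE.items():
    print(f"{method}: {wh_per_litre * LITRES_PER_DAY:.0f} Wh/day")
```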

In addition to removing salts, ICP can also remove potentially harmful larger particles such as cells, viruses and bacteria. Reverse osmosis and electrodialysis can remove these particles too but, in both cases, the membranes become heavily clogged by them.

The next challenge for the researchers is to scale up their device into a viable technology. Because one unit produces just 10 μl per minute, the researchers estimate that they will need 10,000 combined units to produce a useful amount of water, while keeping the device portable – they estimate it would be 30×20 cm.
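The arithmetic behind that scale-up estimate is straightforward:

```python
# Throughput of the proposed 10,000-unit device: each unit delivers
# 10 microlitres of fresh water per minute.
UL_PER_MIN_PER_UNIT = 10
UNITS = 10_000

total_ul_per_min = UL_PER_MIN_PER_UNIT * UNITS  # 100,000 ul/min
litres_per_hour = total_ul_per_min * 60 / 1e6
print(f"{litres_per_hour:.0f} litres per hour")  # → 6 litres per hour
```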

Scaling up

Sung Jae Kim, one of the researchers at the Massachusetts Institute of Technology, tells physicsworld.com that building a larger device should be relatively straightforward, and that the team expects to produce a 100-unit device within two years. “We have already experienced a lot of interest from companies – we welcome discussions, but we need first to undergo more testing,” he says. One important aspect is to ensure that all dangerous hydrocarbons and heavy metals are also removed from the seawater, which is not guaranteed with the existing device.

Mark Shannon, a water purification researcher at the University of Illinois at Urbana-Champaign, sees great potential in the new technique. “The people of first benefit will be travellers in the developing world, and small scale medical clinics and emergency shelters and orphanages in times of crisis, such as Haiti.”

Shannon agrees, however, that more development is needed to ensure the purity of the desalinated water. “You would still likely need a barrier such as a nanofiltration membrane to ensure near zero pathogens coming through, which is essential for highly contagious enteric viruses and microbes like cholera that can cause severe illness and death if any gets through.”

This research is published in Nature Nanotechnology.

New twist to electron beams

Physicists in Japan have for the first time generated beams of electrons displaying the fundamental physical property of orbital angular momentum. Like light beams before them, these electron beams have had their wavefronts distorted so that they spiral through space and create a “phase singularity”. They could be used to make more powerful electron microscopes.

Beams of light possess a property known as spin angular momentum, which is associated with the direction in which the light is polarized. However, light can also carry “orbital” angular momentum. This comes about from twisting a beam’s wavefront, the imaginary locus of points on a wave possessing the same phase. In contrast to the simple plane wavefront of a collimated beam, this kind of wavefront rotates around a central axis and leads to what is known as a “phase singularity” at the centre of the beam, a type of vortex where the intensity of the wave is zero and its phase is undefined. Such spiral-like waves have been used in a number of applications, including the “optical spanner” – a light beam that traps and rotates particles – and higher-dimensional encoding in quantum optics.

Spiral phase plate

Plane waves can be converted into these spiral waves by passing them through a tiny curved ramp known as a “spiral phase plate”, with the height at any point on the ramp proportional to the angle at that point. Building this structure for light waves is relatively straightforward, since it can be carved out of silicon using the lithographic techniques employed by the semiconductor industry.

But doing the same for electron beams is more difficult. Quantum mechanics tells us that electrons, like any other kind of particle, have an associated wave, but their wavelength will tend to be much smaller than that of light. This means that the phase plate also needs to be smaller. Electrons with an energy of 300 keV would require a ramp made of silicon to be just 100 nm high.
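The "much smaller" wavelength can be checked with the relativistic de Broglie relation; the quick calculation below uses standard values for the physical constants and reproduces the ~0.002 nm figure quoted later for 300 keV electrons.

```python
import math

# Relativistic de Broglie wavelength:
# lambda = h*c / sqrt(E_k * (E_k + 2 * m_e * c^2))
HC = 1239.84        # Planck constant times speed of light, in eV nm
ME_C2 = 510_998.95  # electron rest energy, in eV

def electron_wavelength_nm(kinetic_energy_ev):
    pc = math.sqrt(kinetic_energy_ev * (kinetic_energy_ev + 2 * ME_C2))
    return HC / pc  # pc is the electron momentum times c, in eV

print(f"{electron_wavelength_nm(300e3):.5f} nm")  # → 0.00197 nm
```

That is roughly 250,000 times shorter than the 500 nm wavelength of visible green light, which is why the corresponding phase plate must be so much smaller.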

Masaya Uchida and Akira Tonomura of the RIKEN institute in Wako, Japan, have used an alternative approach. Rather than attempting to build a smooth spiral they instead built a step-like structure – the equivalent of a miniature spiral staircase.

They crushed graphite from a pencil into fine fragments and laid them on a copper grid coated with a carbon film. The result is a graphite square made up of a number of smaller squares of varying thickness, with the top-left square being the thickest, the top-right square being the second thickest and so on in a descending clockwise spiral. The typical difference in thickness of adjacent squares – in other words, the typical thickness of the individual graphite films – was between 10 and 100 nm (Nature 464 737).

Screw-type phase singularity

To demonstrate their phase plate, they accelerated a beam of electrons to an energy of 300 keV (corresponding to a wavelength of about 0.002 nm) and then split the beam in an electron biprism. One half of the beam was sent through the phase plate and the other half remained as a reference plane wave. Directing the two beams to a screen and observing the interference pattern, the researchers saw the tell-tale sign of a screw-type phase singularity – a Y-shaped defect in which a new fringe started at the location of the phase singularity.

Miles Padgett of Glasgow University describes the research as “highly interesting” and says it could lead to enhanced electron microscopy. He points out that low image contrast in optical microscopy can be partly overcome by exploiting the phase rather than the intensity of the light transmitted by an object, and believes that the new work might “create new opportunities for the use of phase imaging in electron microscopy”.

Uchida acknowledges that while their spiral-staircase technique has allowed them to demonstrate the reality of twisted electron waves, and therefore of electron orbital angular momentum, it is not precise enough to reproduce such waves reliably. He says that focused ion beams might be capable of producing spiral wave plates of sufficient precision.

Indeed, he looks forward to the production of a variety of differently shaped electron wavefronts, comparing the waves generated in the current work to corkscrew-shaped fusilli pasta. “Just as there are many types of pasta, so there are many shapes of electron wave,” he says, “such as double-helix or U-shaped wavefronts.”

The work is described in Nature 464 737.

Science meets innovation at Stanford Photonics Research Center

It’s 50 years since the birth of the laser, and to mark the imminent anniversary physicsworld.com will be cranking up its coverage of photonic science, technologies and applications over the coming weeks.

For starters, there’s our latest video exclusive, a vox pop with faculty and students at the Stanford Photonics Research Center (SPRC), part of Stanford University in California and home to one of the largest photonics research programmes in the US.

SPRC’s Ginzton Laboratory is the focal point for that programme and an interdisciplinary research team that comprises around 40 professors and 200 graduate students and postdocs. Theirs is a wide-ranging brief – SPRC working groups span information technology, telecommunications, integrated photonics, microscopy, neuroscience and solar cells – though with a common objective: to partner with industry to bring innovative photonic technologies to market.

With innovation a defining metric, a sizeable slice of SPRC’s activity comprises contract research funded by industry. The centre has 20+ commercial partners, among them the likes of SONY, Agilent Technologies, Lockheed Martin and NTT Communications.

Partnership is the key word here. The affiliates don’t just give their name to SPRC or sponsor a meeting, they actively support the research programme. Tom Baer, executive director of SPRC, reckons the affiliates are a “unique interface” between industry and the science and engineering taught at Stanford.

“SPRC provides the opportunity for students to work closely with our [industry] affiliates…and helps the students become exposed to scientific and technical problems that are current and relevant to the commercial sector,” he told physicsworld.com.

Equally significant, the SPRC reflects the research culture at Stanford, which has been cross-disciplinary for many decades. And that multidisciplinary effort is more than just a bunch of people from different disciplines working together; it’s about nurturing teams of “multidisciplinarians”.

“You really need to encourage the physicists to learn the biology, the engineers to learn the chemistry and so on,” Baer explained. “Something I instill in SPRC students from the off is that the more unique fields of enquiry you have knowledge of, the more you become unique in the eyes of your employers.”

Applying physics to biology: optical tweezers and single-molecule biophysics

Steven Block’s team at the Stanford Photonics Research Center (SPRC) is pioneering a new area of biology known as single-molecule biophysics. Underpinning that endeavour are laser-based optical tweezers (also known as optical traps) used to capture, measure and manipulate proteins and nucleic acids one molecule at a time.

Bright stuff: LCLS ready to shine

Stanford’s Linac Coherent Light Source (LCLS) is an X-ray free-electron laser that produces X-ray pulses more than a billion times brighter than the next brightest synchrotron sources. As atomic physicist Phil Bucksbaum explains, LCLS is also “the world’s first laser able to interrogate atoms and molecules simultaneously on their natural time scale and length scale”.

Copyright © 2026 by IOP Publishing Ltd and individual contributors