Nano pioneers give food for thought

By Louise Mayor

This week I was at the scientific opening of the Centre for Nanoscience and Quantum Information (NSQI) at the University of Bristol. The event coincided with the Bristol Nanoscience Symposium 2010, and featured great talks from some of the pioneers of nanoscience and nanotechnology.

(Left) Nobel Laureate Heinrich Rohrer declared the centre officially open. Photo credit: Jesse Karjalainen. (Right) The NSQI centre itself – the labs are out of sight and sound in the basement. Can you spot the nano-inspired architectural feature?

At the opening event on Monday evening, IBM Fellow Charles Bennett talked about how to make quantum information “more fun and less strange”. His educational analogies included the idea of monogamy in quantum information – that the more entangled two systems are with each other, the less entangled they are with any others. “The lesson is this: two is a couple, three is a crowd”, he said. He also talked about how information doesn’t get lost in quantum systems but does in classical ones – how it’s like there are eavesdroppers, and it’s harder to factorize when someone’s looking over your shoulder.

The stage was then passed over to Heinrich Rohrer (pictured), a figure revered by many in the audience. It was Rohrer, along with Gerd Binnig, who invented the scanning tunnelling microscope – an instrument that can image and manipulate single atoms – for which they were co-recipients of the Nobel Prize in Physics in 1986. Rohrer was at the time at IBM’s Zurich lab.

Rohrer commented on the “nano” revolution – that some say it’s hype, while others are more relaxed about it. “Let us not make a discipline out of ‘nano’ ”, he warned. He also said that the new trend is for people to operate using claims and catchphrases rather than careful explanations; he noted that in all his reading of Einstein’s papers he never once found words such as “new” or “unique”.

In his closing comments, Rohrer proposed a litmus test for the centre’s success: if the NSQI can attract a good number of female nanoengineers and nanomechanics, that will be a sign of interesting research being done at the centre – and that it is on the right track for the future. He then declared the centre officially open, and we all piled into the centre for champagne and a tour of the labs, which are described in a previous blog entry: Visiting the quietest building in the world.

But for me, the most exciting talk was given the following day by Stanley Williams of Hewlett-Packard (HP)…

Fresh water may have cooled North Atlantic

The decrease recorded in the Earth’s temperature between the 1940s and 1970s was caused by a sudden cooling of the oceans in the northern hemisphere. That is the verdict of climate researchers in the US and UK, who have found that much of this cooling could have been brought about by a rapid influx of fresh Arctic water into the North Atlantic. Others, however, dispute this finding, arguing that the temperature drop can be explained by longer-term ocean phenomena and human pollution.

Although the surface of the Earth is some 0.8 °C warmer than it was at the beginning of the 20th century, that rise in temperature has not been steady. Increasing until the 1940s, it then fell slightly over the next 30 years, before climbing again from the mid-1970s onwards. This pattern is widely interpreted as an underlying warming of the planet, driven by increased concentrations of atmospheric greenhouse gases, combined with a mid-20th-century cooling brought about by a rise in sulphate aerosols in the troposphere – which reflect some solar radiation – as well as fluctuations in the world’s oceans that take place over decades. The fact that the mid-20th-century cooling was confined to the northern hemisphere seems to support the aerosol hypothesis, given that the fastest rise in emissions of sulphate aerosols was seen over North America, Europe and Asia.

However, David Thompson of Colorado State University and colleagues believe that this analysis is based on a flawed reading of changes to the temperature of the northern hemisphere’s oceans. They point out that identifying trends in surface temperatures requires removing the effects of short-term events that are cyclical or random, such as the El Niño oscillation and volcanic eruptions. But they maintain that this removal usually involves smoothing out temperature fluctuations, which means that brief, but non-cyclical, changes are also erased.

Too much smoothing?

Team member John Wallace of the University of Washington in Seattle likens this to a simplification of stock market data. “To see the longer-term market trends it is better to smooth out much of the daily ups and downs,” he says. “But if you smooth the data too much you do not see sudden changes such as stock market crashes.”

The researchers devised a new data analysis technique that they say gets rid of the unwanted cyclical phenomena while preserving any rapid fluctuations that contribute to longer-term change. They found that the sea-surface temperatures in the northern hemisphere did not decrease gradually in the decades following the Second World War, as would be expected if aerosols were principally responsible for the northern hemisphere cooling, but that temperatures fell very rapidly – by about 0.3 °C between 1968 and 1972.
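
To see how the choice of smoothing window matters, here is a minimal, illustrative sketch (not the team’s actual technique, and using made-up numbers): a synthetic temperature series containing a sudden 0.3 °C drop is passed through boxcar smoothers of two different widths. The narrow window keeps the step visible as a sharp change, while the wide window smears it into a gentle decline – the kind of erasure Wallace describes.

```python
import numpy as np

# Illustrative only: a synthetic monthly "temperature anomaly" with a sudden
# 0.3 degC drop half-way through, plus weather-like noise. Not real
# sea-surface-temperature data and not the team's actual analysis technique.
rng = np.random.default_rng(0)
months = np.arange(600)                       # 50 years of monthly values
anomaly = np.where(months < 300, 0.0, -0.3)   # abrupt 0.3 degC step down
anomaly = anomaly + 0.05 * rng.standard_normal(600)

def moving_average(series, window):
    """Simple boxcar smoother with the given window length (in months)."""
    return np.convolve(series, np.ones(window) / window, mode="same")

for window in (12, 240):                      # 1-year versus 20-year smoothing
    smoothed = moving_average(anomaly, window)
    # The steepest remaining month-to-month change in the smoothed series:
    print(f"{window:3d}-month window: sharpest monthly drop "
          f"{np.min(np.diff(smoothed)):+.3f} degC")
```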

The team found this sudden drop both in the temperature data of the northern hemisphere in isolation and in the difference between the northern and southern temperatures. It also established that the drop could not be a mere artefact of the measurement process, as it says is the case for a sudden drop in global sea temperatures recorded immediately after the Second World War, which it attributes to a change in the relative numbers of British and American ships at sea. The rapid dip around 1970, it says, is present in all available historical data sets of sea-surface temperature and cannot be linked to any known biases in temperature measurements.

Outpouring of cold, fresh water

As to what caused this sudden drop in temperature, Wallace says that it might have been partly due to an outpouring of relatively cold, fresh water from the Arctic into the Atlantic that was known to have taken place about the same time. But he is reluctant to attribute all of the cooling to this process. He and his colleagues found that most of the cooling took place in the Atlantic but that some also occurred in the Pacific, which would not have received any of the fresh water.

Dan Hodson of the University of Reading in the UK points out that some climate models also predict rapid hemispheric cooling as a result of fresh water being deposited in the north Atlantic. He describes the latest research, which he was not involved in, as “another piece of the puzzle in the ongoing effort to decipher the 20th century climate record” and says that it will “improve our understanding of the underlying detailed mechanisms of climate change”. And he warns that “we should be especially vigilant” if Arctic fresh water can indeed cause rapid ocean cooling. He says that the freshwater released from a rapidly melting Arctic might mitigate some of the impacts of global warming but points out that a very rapid melting could have widespread negative effects such as reducing crop yields.

However, Michael Mann of Pennsylvania State University in the US is not convinced. He believes that some scientists have overestimated the significance of multidecadal ocean oscillations on global temperatures but maintains that these oscillations can explain the rapid temperature drop around 1970 identified by Thompson’s team, adding that fairly abrupt temperature changes earlier in the 20th century can also be explained purely in terms of such oscillations.

The work is described in Nature.

Surprises from Planet Mercury

By Margaret Harris in Rome

The European Planetary Science Congress is taking place this year just a stone’s throw from Rome’s Imperial Forum, so it’s appropriate that the scientific programme is speckled with Roman gods and goddesses – specifically Mercury, Venus, Mars, Titan and Saturn. With missions to all these heavenly bodies dominating the agenda, picking a session to attend wasn’t easy, but in the end I plumped for Mercury, hoping to learn more about the smallest planet in our solar system and the one closest to the Sun.

I wasn’t disappointed. It turns out that NASA’s Messenger mission is already reshaping our understanding of Mercury, even though the spacecraft isn’t due to enter Mercury’s orbit until 18 March 2011. Prior to that momentous date, however, the spacecraft performed three flybys, and the data collected during those intentional near-misses have revealed – among other things – a planet that was far more volcanically active, for far longer, than scientists had previously thought.

An earlier mission, Mariner 10, had helped define the image of Mercury as a dead, cratered lump of rock, more like the Earth’s Moon than a “proper” planet. Now that the Messenger flybys have mapped over 90% of its surface (compared with Mariner 10’s 50%), a more complex picture is emerging. Mercury did, in fact, have active volcanoes early in its history; indeed, volcanic activity was so extensive that the top 5 km of the planet’s crust is mostly the remains of pyroclastic flows, with some impact ejecta (stuff kicked up when meteors and so on hit) thrown in. And some of these flows are quite recent, at least by Mercury’s standards – less than a billion years old, which makes them younger than some rock formations on Earth.

Later talks in the same session added to the impression of a surprisingly complex planet, and it’ll be interesting to see whether the surprises keep coming once Messenger gets into its stride next year.

Fermilab boss unveils financial plan for Tevatron extension

Fermilab director Pier Oddone has unveiled a financial plan that would let his lab run the Tevatron collider for an additional three years. The plan involves delaying two upcoming experiments – the NOvA neutrino study and the Mu2e muon experiment – and using $25m per year in freed-up funds to help keep the collider running. The Tevatron machine, which collides protons and antiprotons, was due to shut down in September 2011.

Oddone’s plan is, however, not guaranteed because the lab will first have to find an additional $35m per year for the three-year extension. That money can only come from the US Department of Energy (DOE) and will need the blessing of President Obama and the US Congress.

“Securing the additional resources involves several steps and considerable uncertainty,” writes Oddone on the Fermilab website. “We could get a ‘no’ that is final at any point along these steps, but a ‘yes’ will be final only when Congress appropriates the funds.”

According to Oddone, Fermilab is already in discussions with the DOE, which will in turn be seeking advice from its High Energy Physics Advisory Panel (HEPAP). “Assuming that the advice is positive, we will not have any solid information until the president announces the FY12 budget in February 2011,” says Oddone.

Extension problems

Plans to extend the Tevatron have been bandied about for some time and Oddone’s public comments on the issue had been lukewarm in recent weeks. Now, however, Oddone appears to have accepted the recommendation made last month by Fermilab’s Physics Advisory Committee (PAC), which strongly endorsed a three-year extension to 2014. At the time, while acknowledging the Tevatron’s “remaining promise for the future”, Oddone had called the non-binding recommendation “very problematic for us”, because an extension could hinder the lab’s transition towards projects on the so-called intensity frontier.

One such project is NOvA, which is designed to study neutrinos produced when a 700 kW beam of protons from Fermilab’s Main Injector accelerator collides with a graphite target. If the Tevatron is still running when NOvA starts in 2013, the available beam power would drop to about 400 kW, thereby sharply reducing the amount of data NOvA collects in its first 18 months.

However, the PAC concluded that this “would not mean robbing NOvA of a discovery”, says committee member and University of Rochester particle physicist Regina Demina, adding that competitors like Japan’s T2K are already better placed to make early progress in the field.

Two experiments better than one

Still, a delay for NOvA would be a blow to the neutrino community, says David Wark, a physicist at Imperial College London who is part of the T2K collaboration. “Of course, I would like T2K to make any discoveries first, but from a broader perspective it is critical to have multiple complementary experiments,” he says. T2K spokesperson Takashi Kobayashi agrees, noting that NOvA will be able to measure some things – such as which flavour of neutrino is lightest and which is heaviest – that T2K cannot.

A similar argument could, however, be made for the Tevatron and its competitor, CERN’s Large Hadron Collider (LHC). If the Higgs boson is heavier than about 140 GeV, it is more likely to decay into pairs of W or Z bosons, and the LHC should be able to detect this signal easily in the debris of proton–proton collisions. A lighter Higgs is more likely to decay into pairs of b-quarks, which would favour the Tevatron.

Extending the Tevatron would also affect the Mu2e experiment, which will create a beam of muons. The particles will be captured in atomic orbits in a foil target, where some may decay to an electron without the production of neutrinos. Such neutrino-less muon decay has never been seen in the lab and its discovery could point towards new theories of particle physics beyond the Standard Model.

Glowing review

By Louise Mayor, Grenoble

(Left) Suited and booted and (right) Cherenkov radiation from an old reactor core

When I awoke last Thursday morning I didn’t expect that by the end of the day I’d have seen a nuclear reactor. And I don’t just mean looking at a big concrete building from the outside – I saw stuff glowing and had to wear a funny-looking suit and booties.

I was visiting the Institut Laue-Langevin (ILL) in Grenoble, France, where atoms are split not to generate electricity but to use the neutrons in experiments. In fact, that’s the whole purpose of the ILL reactor, known as a “neutron source”.

But why neutrons? Being electrically neutral, neutrons can penetrate deep into matter, right to the nuclei of atoms. Charged particles, in contrast, get scattered by atomic electrons. Neutrons can also be thought of as waves, and with wavelengths of the order of ångströms – like those produced at the ILL – they interact with crystal structures to form a diffraction pattern as described by Bragg’s law. This pattern can be used to find out the positions of atoms in a sample, as well as how they move.
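
For a rough feel for the numbers (illustrative values only, not specific to any ILL instrument): Bragg’s law, nλ = 2d sin θ, says that a thermal neutron with a wavelength of about 1.8 Å scattering off crystal planes spaced 2 Å apart produces a first-order diffraction peak at an angle of roughly 27°.

```python
import numpy as np

# Bragg's law: n * wavelength = 2 * d * sin(theta)
# Illustrative numbers only: a ~1.8 angstrom thermal neutron and crystal
# planes spaced 2.0 angstroms apart.
wavelength = 1.8   # neutron wavelength in angstroms
d_spacing = 2.0    # spacing of the crystal planes in angstroms
n = 1              # first-order reflection

theta = np.degrees(np.arcsin(n * wavelength / (2 * d_spacing)))
print(f"First-order Bragg angle: {theta:.1f} degrees")   # roughly 27 degrees
```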

One application of neutron scattering I heard about was to look inside a turbine blade that has been struck by a frozen chicken fired at it – the experimental version of a real-life, unlucky stray pigeon or seagull. Neutrons have been used to probe inside the turbine blade without having to interfere with it further by cutting it apart.

(Left) Neutron goings-on and (right) leaving via the air lock

Upon leaving the reactor hall I went back out through an air lock; the pressure inside the building is kept lower than outside so that if there is a leak, gas flows in rather than out. That wasn’t quite the end of it: I then had to place each hand in a monitoring slot and watch a progress bar slowly fill the screen ahead before I got the reassuring confirmation: “NOT CONTAMINATED”.

Another way to generate neutrons is “spallation”, in which protons are accelerated towards a heavy-metal target and knock neutrons off atomic nuclei. This method will be used at the European Spallation Source (ESS), which Sweden and Denmark last year won the bid to co-host in Lund, Sweden. To find out more, you can watch this video in which the ESS is introduced by none other than Sir Patrick Stewart.

Lunar mission sheds light on early solar system

The geology of the Moon and the dynamics of the early solar system are much clearer now thanks to three new studies of the lunar surface by NASA’s Lunar Reconnaissance Orbiter (LRO). As well as providing insight into how and when lunar craters formed, the LRO has given researchers the opportunity to investigate the complex geological history of the Moon.

The Moon formed about 4.5 billion years ago and its surface holds a wealth of information about the chaotic history of our solar system – which formed just 50 million years earlier.

“Much of the scientific importance of the Moon arises from the fact that its ancient surface preserves a unique record of conditions in the first billion years of solar system history that more active planets, such as Earth and Mars, have long lost”, says Ian Crawford of the earth and planetary sciences department at Birkbeck College London, who was not involved in the research. “Studies of the Moon are therefore important for improving our understanding of the early history of the Earth–Moon system, and thus of Earth itself, as well as of the Solar System as a whole.”

Launched in June 2009, NASA’s LRO is fixed in a low orbit around the Moon and is currently acquiring a 3D map of the lunar surface using a number of different instruments.

Oldest regions of the Moon

The global distribution and frequency of lunar craters was the focus of a study by researchers from Brown University, the Massachusetts Institute of Technology, and NASA Goddard Space Flight Center. Led by planetary geologist James Head, the team compiled a comprehensive catalogue of 5,185 lunar craters with a diameter of 20 km or larger. This was done using data from the Lunar Orbiter Laser Altimeter (LOLA), one of a suite of instruments onboard the LRO.

This catalogue makes it possible to identify both the oldest regions of the Moon, and the oldest basins and craters. Such information is crucial for planning future missions to the Moon, which could be sent to such craters to collect samples. Analysis of this material could prove invaluable in determining early solar system dynamics and fully understanding the Moon and other bodies.

The team also investigated the size and frequency distribution of the material that struck the Moon during its early bombardment about 4 billion years ago. Craters on older surfaces were found to have been caused mainly by larger projectiles. This lends credibility to a previously suggested theory that the ratio of larger to smaller objects striking the Moon differed during the first billion years of its existence. In all, the findings “are telling us something about the infancy of the solar system,” says Head.

Seeking out silicics

Two new studies of the minerals present in lunar soil and rocks have also been made using the LRO. One study was led by Benjamin Greenhagen of NASA’s Jet Propulsion Laboratory and analysed data from the LRO’s Diviner Lunar Radiometer Experiment (DLRE), which detects infrared radiation. As such, it is able to distinguish between different rock types, and it was used by Greenhagen and colleagues to identify regions of the lunar surface rich in silicic rock – silica-rich igneous rock that forms from magma.

The other study was led by Timothy Glotch of Stony Brook University and focused on four silicic-rich areas. The team found that the mid-infrared spectra from these regions are best explained by the presence of certain compounds – specifically quartz, silicon-rich glass and alkali feldspar. The wide variety of landforms within these silicic-rich regions suggests that magma once flowed both above and below the lunar crust. According to the researchers, this points to a much greater level of igneous and geological activity than had previously been thought.

Targeted studies are next

“LRO is giving us the highest resolution of imaging of the surface from orbit, and new insights into the processes that formed the lunar surface. It is showing that the Moon is very geologically diverse,” says Neil Bowles of the University of Oxford, who worked on both mineralogy studies. “In the future, LRO will be going into a targeted science mode allowing more time to examine specific regions revealed during the initial mapping phase of the mission.”

The work is described in three papers published in Science.

Ancient star poses galactic puzzle

An ancient star in our galaxy’s halo harbours isotopes of barium that shouldn’t be there – at least according to our conventional understanding of nucleosynthesis. That’s the conclusion of an international team of astronomers who have spotted isotopes in a 13-billion-year-old star that should only be produced in stars that were formed later in the history of the Milky Way.

“Our observations completely contradict the theory,” says Andy Gallagher at the University of Hertfordshire in the UK. He and Sean Ryan, also at Hertfordshire, and their colleagues in the US and Japan studied a star in the constellation Libra named HD 140283. The team used data from the High Dispersion Spectrograph mounted on the Subaru Telescope in Hawaii.

Located 190 light-years from Earth, the star formed long before the explosion of other stars had given the galaxy much iron. As a result, the star’s iron content is just 1/400 that of the Sun.

HD 140283 is so old that it should not contain any barium that other stars produced via the slow process (s-process) of nucleosynthesis. This process occurs in red giant stars that convert helium into carbon and oxygen. These nuclear reactions release a slow flux of neutrons that strike a red giant’s iron nuclei, slowly transforming them into heavier elements including barium.

Too old to inherit s-process material

But it takes at least 40 million years for a star to become a red giant and cast this barium into space. Because HD 140283 is so old, it should have formed before red giants had started producing s-process barium, so the star should have inherited none.

Instead, any barium in HD 140283 should have been produced in the rapid process (r-process), which occurs when massive stars explode. Unlike red giants, massive stars die soon after their birth, so small quantities of r-process barium should have been in the gas that created HD 140283.

However, a 1995 study by Pierre Magain of the University of Liège in Belgium found that nearly all of HD 140283’s barium arose via the s-process. Later work by other astronomers concluded just the opposite – but now, using a high-resolution, high signal-to-noise spectrum of light from the star, Gallagher’s team concludes that Magain was right after all.

Odd versus even

Gallagher and his colleagues came to this conclusion by measuring the relative abundance of the star’s barium isotopes. Two even isotopes – barium-134 and barium-136 – arise solely from the s-process, whereas three others – barium-135, barium-137 and barium-138 – arise from both the r- and s- processes. The even and odd isotopes absorb slightly different wavelengths of light, so their proportion affects the exact shape of the barium lines in the star’s spectrum.
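
The quantity that the line shape constrains is essentially the fraction of barium nuclei in odd isotopes. As a hedged illustration of why that matters (the fractions below are rough values often quoted in the literature, not numbers from the Gallagher paper): a pure s-process mix of barium has an odd-isotope fraction of roughly 0.1 and a pure r-process mix roughly 0.5, so a measured fraction translates directly into how much of each process contributed.

```python
# Illustrative only: rough odd-isotope fractions often quoted for barium;
# these are not the values derived by Gallagher and colleagues.
F_ODD_S = 0.11   # pure s-process mix (assumed, approximate)
F_ODD_R = 0.46   # pure r-process mix (assumed, approximate)

def odd_fraction(r_share):
    """Odd-isotope fraction of barium for a given r-process share (0 to 1)."""
    return (1 - r_share) * F_ODD_S + r_share * F_ODD_R

for share in (0.0, 0.5, 1.0):
    print(f"r-process share {share:.0%}: odd-isotope fraction = {odd_fraction(share):.2f}")
```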

But the effect is subtle. “I met a fellow isotope researcher,” said Gallagher, “and his comment to me was simply: ‘You poor, poor soul’.”

“I don’t particularly like the result,” added Gallagher. “But that’s what the data suggest.”

Christopher Sneden of the University of Texas at Austin says the astronomers did a first-class job. “It’s a hideously difficult measurement,” says Sneden. “The authors have done absolutely the most thorough job I’ve ever seen anybody try on this.”

From another galaxy?

Still, Sneden says the result may not be so puzzling. He says the early galaxy was probably patchy. This star may have formed in a part of the galaxy that happened to be fairly free from supernova debris rich in r-process barium. In addition, it may have garnered its s-process material from gas enriched by a passing red giant. Gallagher says the star may even have come from another galaxy.

In any event, HD 140283 is no stranger to controversy. In 1951 American astronomers Joseph W Chamberlain and Lawrence Aller found that it and another halo star, HD 19445, had 1/100 of the Sun’s iron abundance – a finding so radical it never got published because at the time astronomers thought all stars had the same composition.

Instead, under pressure from the referee, Chamberlain and Aller moderated their claim, saying instead that the stars had 1/10 of the Sun’s iron content. As we now know, the original figure was closer to the truth.

Gallagher’s team will publish their work in Astronomy and Astrophysics. A preprint is available at arXiv: 1008.3541.

Pope's astronomer hits the bar

By James Dacey, Birmingham

This is the Pope’s astronomer, Brother Guy Consolmagno. And yep, those are the Maxwell equations printed on his t-shirt with the punchline: “and there was light”.

Consolmagno was answering questions tonight in the informal setting of a student union bar as part of the British Science Festival, which is currently taking place in Birmingham.

He was a lot more candid than I expected, describing the Pope as “a really great guy” who reminded him of Ludwig Von Drake, the Walt Disney cartoon duck who is fascinated by knowledge.

On the customary question of how he squares his religious belief with the pursuit of rational scientific facts, Consolmagno jokingly compared it to separating his nationality from his favourite football team.

On a more serious note later in the evening, Consolmagno said that he sees no reason for conflict between science and Catholic teachings, which interpret the Bible rather than taking it literally. He was keen to distance the Catholic church from creationist views, which he described as a “much more modern idea”.

Consolmagno is one of 12 scientists working within the Vatican Observatory, where he also has the task of curating the Vatican’s meteorite collection. His own research involves studying the physical properties of meteorites, in particular how they form from dust in the absence of water and significant pressures.

When asked whether he thought there could be life on other planets, he said he is comfortable with the idea. He says the notion that liquid water beneath the icy surface of Jupiter’s moon Europa could offer habitable conditions for life is one of the most fascinating questions in physics.

On the question of whether he would baptize an alien, Consolmagno says “yes, but only if they asked”.

Consolmagno says it is a coincidence that his appearance in Britain comes at the same time as the ongoing papal tour, which will also be visiting Birmingham this Sunday.

Photon pairs take a quantum walk

A new optical chip that allows pairs of photons to take a quantum walk has been unveiled by an international team of physicists. The tiny device contains an array of 21 coupled optical waveguides and could provide greater insight into quantum interference. Further in the future, the technology could find use in quantum computers.

A quantum walk is a variation of the familiar classical random walk – but involving quantum particles such as photons, electrons or atoms. A particle entering a beam splitter, for example, has a 50% chance of veering right or left. A classical particle would unambiguously take one of the two paths; a quantum particle, however, is placed in a superposition of both paths until a measurement is made. If each of the two paths leads to two more beam splitters, the particle is in a superposition of four paths, and so on.

If the particle paths are all very close together, the superpositions will overlap each other and the resulting interference will have an effect on the eventual position of the quantum particle.
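
For a concrete picture of how interference changes the outcome, here is a toy model (a discrete “coin” quantum walk on a line, not the continuous waveguide walk implemented on the chip): after 50 steps the classical random walker stays bunched near its starting point, while the interfering quantum walker spreads much further.

```python
import numpy as np
from math import comb

STEPS = 50
N = 2 * STEPS + 1                     # positions -STEPS ... +STEPS
positions = np.arange(N) - STEPS

# Quantum walk: amplitude[position, coin]; coin 0 steps left, coin 1 steps right
amp = np.zeros((N, 2), dtype=complex)
amp[STEPS, 0] = 1.0                   # start at the centre
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard "coin flip"

for _ in range(STEPS):
    amp = amp @ H.T                   # put each site into a superposition of coin states
    shifted = np.zeros_like(amp)
    shifted[:-1, 0] = amp[1:, 0]      # coin-0 amplitude moves one site to the left
    shifted[1:, 1] = amp[:-1, 1]      # coin-1 amplitude moves one site to the right
    amp = shifted

quantum = (np.abs(amp) ** 2).sum(axis=1)

# Classical random walk over the same number of steps (binomial distribution)
classical = np.zeros(N)
for k in range(STEPS + 1):            # k = number of rightward steps
    classical[2 * k] = comb(STEPS, k) * 0.5 ** STEPS

print("rms distance from start, quantum walk:  ",
      round(float(np.sqrt(np.sum(quantum * positions ** 2))), 1))
print("rms distance from start, classical walk:",
      round(float(np.sqrt(np.sum(classical * positions ** 2))), 1))
```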

Searching with quantum walks

Such quantum walks could be used in quantum-computing search algorithms. In principle these could solve certain problems in √N steps, whereas a classical computer would require N steps – a database of a million entries, for example, could be searched in about a thousand steps rather than a million. However, maintaining the quantum nature of the particle as it makes a large number of steps remains a significant experimental challenge.

Now, Jeremy O’Brien and colleagues at the UK’s Bristol University, Tohoku University in Japan, the Weizmann Institute in Israel and Twente University in the Netherlands have demonstrated quantum walks of two identical photons in an array of 21 parallel waveguides, each about 700 μm long.

Individual waveguides are separated by about 2.8 μm, which means that light can leak between paths – putting a single photon into a superposition of two paths. And crucially, the waveguides are close enough together to allow quantum interference between different superposition states.

A key challenge in making the chip, according to O’Brien, was how to get the light into and out of the waveguides – which are so close together that they cannot be fitted with photon detectors or optical fibres.

How to bend light?

To make a connection, individual waveguides must be fanned out to a separation of at least 125 μm. The bends must be very gentle to ensure that the light is reflected from the waveguide walls rather than escaping into them. Indeed, conventional waveguides (silicon dioxide cores clad in silicon) would have to be several metres long in order to separate the paths by 125 μm – not very useful for making tiny chips.

The team got around this problem by making its waveguides from silicon oxynitride. The result is a much higher refractive-index contrast between core and cladding, which allows much tighter bends and a device that is only a few millimetres long.

To test the device, the team injected pairs of identical photons into two adjacent waveguides and then watched to see from which waveguides the pairs would emerge. Two photons were used because the quantum walk of one photon is indistinguishable from the outcome expected if light is treated as a classical wave.

Suitable delay

When two identical photons enter the array at the same time, quantum interference requires that the photons cannot take certain paths. The result is the characteristic pattern of a quantum walk seen by O’Brien and colleagues. The team then suppressed the quantum interference by introducing a suitable delay between the two photons. In this case the researchers saw the pattern expected from the classical interference of light.
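
A minimal numerical sketch of that comparison, under assumed parameters (21 identical waveguides with uniform nearest-neighbour coupling – the real chip’s couplings and propagation length will differ): the single-photon transfer matrix of the array is computed from a coupled-mode Hamiltonian, and the two-photon output pattern is then built either with quantum interference (indistinguishable photons) or without it (photons separated by a delay). In this idealized model the indistinguishable photons are exactly twice as likely to bunch into the same output waveguide.

```python
import numpy as np
from scipy.linalg import expm

N = 21            # number of coupled waveguides, as in the Bristol chip
C = 1.0           # nearest-neighbour coupling strength (arbitrary units, assumed)
LENGTH = 1.0      # propagation length in the same arbitrary units (assumed)

# Coupled-mode Hamiltonian: light can leak between neighbouring waveguides
H = C * (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))
U = expm(-1j * H * LENGTH)            # single-photon transfer matrix of the array

a, b = 10, 11     # two adjacent input waveguides at the centre of the array

# Joint probability of one photon exiting waveguide q and the other exiting q'
quantum = np.abs(np.outer(U[:, a], U[:, b]) + np.outer(U[:, b], U[:, a])) ** 2 / 2
classical = (np.abs(np.outer(U[:, a], U[:, b])) ** 2
             + np.abs(np.outer(U[:, b], U[:, a])) ** 2) / 2

# With quantum interference the photons are twice as likely to "bunch" into
# the same output waveguide as they are when a delay makes them distinguishable.
print(f"P(same output waveguide), indistinguishable photons: {np.trace(quantum):.3f}")
print(f"P(same output waveguide), delayed photons:           {np.trace(classical):.3f}")
```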

The team now plans to look at quantum walks with three or more photons. “Each time we add a photon, the complexity of the problem we are able to solve increases exponentially, so if a one-photon quantum walk has 10 outcomes, a two-photon system can give 100 outcomes and a three-photon system 1000 solutions and so on,” explained O’Brien.

Two legs faster than four in the quantum world

Molecular machines could be capable of quantum mechanical tunnelling – something normally seen in very tiny particles like electrons and atoms. That’s the claim of researchers at the University of California, Riverside who have made and studied “two-legged” and “four-legged” nanomachines.

The tunnelling behaviour has never before been seen in devices so big and is a fundamental departure from mechanics in the macroscopic world, says team leader Ludwig Bartels. It also means that such machines could move much faster than expected.

Molecular machines are found everywhere in biology. For example, the acid in our stomachs is produced by a proton pump in cells that line the stomach. And in every cell in the body, proteins are dragged to the place where they are needed using kinesin motors. These biological motors consist of thousands of atoms and are much too big to be studied using computer models.

Bartels’ team wanted to understand the basic principles behind natural molecular machines so that they could develop similar, artificial devices. The researchers, together with chemist Michael Marsella, made small, easy-to-study molecules that can “walk” and carry “cargo” when placed on a flat copper surface held in vacuum. “This is a very much more simplified set-up than in biology, where molecules need to attach themselves everywhere in 3D and where all kinds of other molecules are floating by,” explains Bartels.

Molecules walk the line

Two years ago, Bartels and colleagues found that anthraquinone (a very common molecule, tons of which are used in the pulp industry) could walk across copper surfaces in a straight line. This was an important result in itself because normal molecules tend to move around randomly in all directions. Moreover, anthraquinone can attach carbon dioxide molecules to its two oxygen atoms, or “legs”, and drag this cargo along too while it walks. However, the researchers were puzzled as to why the molecule moved so fast.

The researchers then studied pentacenetetrone – another widely available molecule – which has not two oxygen “legs”, as in anthraquinone, but four. To their surprise, Bartels and co-workers found that this “quadrupedal” molecule, which moves like a pacing horse (both legs on one side of the molecule move together, followed by the two legs on the other side), moved a million times slower than “bipedal” anthraquinone.

According to the Riverside researchers, this huge difference in speed occurs because portions of the bipedal anthraquinone simply tunnel through barriers (like surface roughness or corrugations) in the environment rather than climbing over them. Although the quadrupedal molecule can coordinate its four “hooves” so that it paces forward, it cannot coordinate both legs so that they tunnel through a surface barrier at the same time. This means that the molecule needs to move its oxygen legs in the conventional way – over the barriers.
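
A back-of-the-envelope way to see why this matters so much: in the WKB picture the probability of tunnelling through a barrier falls exponentially with the barrier width and with the square root of the tunnelling mass, and the chance of two feet tunnelling simultaneously is roughly the single-foot probability squared. The barrier height, width and mass below are purely illustrative choices, not values from the Riverside study.

```python
import numpy as np

HBAR = 1.0545718e-34      # J s
EV = 1.602176634e-19      # J per electronvolt
AMU = 1.66053907e-27      # kg per atomic mass unit

def wkb_tunnelling(mass_amu, barrier_ev=0.02, width_angstrom=0.5):
    """Rough WKB transmission through a rectangular barrier (illustrative numbers)."""
    m = mass_amu * AMU
    v = barrier_ev * EV
    a = width_angstrom * 1e-10
    kappa = np.sqrt(2 * m * v) / HBAR
    return np.exp(-2 * kappa * a)

one_foot = wkb_tunnelling(16)        # a single oxygen "foot" (assumed mass of 16 u)
two_feet_at_once = one_foot ** 2     # both feet would have to tunnel simultaneously

print(f"single foot tunnelling probability: {one_foot:.1e}")
print(f"two feet tunnelling together:       {two_feet_at_once:.1e}")
print(f"suppression factor:                 {one_foot / two_feet_at_once:.1e}")
```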

One foot at a time

“The bipedal molecule is a million times faster because it only needs to move one foot at a time,” Bartels told physicsworld.com. One foot can just start tunnelling forward at any time and make a successful step.

“This is a fundamental departure from mechanics in the macroscopic world and greatly speeds up movement,” he added. “It is like driving on a bumpy road with the wheels of your car going through the bumps rather than over them. Quantum mechanics allows such behaviour for very light particles, like electrons and hydrogen atoms but could it also be relevant for big molecules like anthraquinone?”

Artificial molecular machines like these might find applications in microelectronics, for example in data storage, or in medicine for drug delivery. However, real-world devices still may be a way off – perhaps 10 years away, says Bartels.

The team now plans to make longer molecules – “think millipedes rather than horses” – that might be driven with light.

The work is reported in J. Am. Chem. Soc. DOI: 10.1021/ja1027343.
