
Graphene continues to amaze

Is there anything that graphene — sheets of carbon just one atom thick — can’t do? Since this wonder material was discovered in 2004, it has been shown to be an extremely good electrical conductor; a semiconductor that can be used to create transistors; and a very strong material that could be used to make ultrathin membranes. Now, researchers in the US have confirmed that graphene is also a very good conductor of heat.

The team, which had to invent a new way to measure thermal conductivity in order to study the material, is now investigating how graphene’s thermal properties could be used to cool ultrafast silicon chips (Nano Letters 10.1021/nl0731872).

Physicists had suspected that graphene can conduct heat very well because carbon nanotubes, which are essentially graphene rolled into tiny tubes, are themselves very good thermal conductors. However, graphene can be very difficult to work with and researchers had struggled to determine its thermal properties using traditional techniques that involve attaching heaters and other devices to the material.

Raman scattering

Alexander Balandin and colleagues at the University of California-Riverside have instead devised a new measurement technique that uses a laser both to heat the graphene and to measure its temperature. The team suspended sheets of graphene across micrometre-wide trenches cut into a silicon-oxide surface. The sheets were several micrometres long and were pinned down at both ends by layers of graphite, which acted as heat sinks.

The centre of each sheet was then exposed to a beam of laser light, which heated the graphene and changed the frequencies at which its carbon atoms vibrate. Some of the laser light changes frequency as it undergoes Raman scattering off the vibrating atoms, and the size of the frequency shift is proportional to the temperature of the illuminated region.

Frequency shift

By measuring the frequency shift — and hence the temperature of the graphene — as a function of laser power, the team was able to calculate the thermal conductivity of graphene, which was found to be a whopping 5300 W/(m K) at room temperature. This is the highest known value for any solid — 50% higher than that of carbon nanotubes and more than ten times that of metals like copper and aluminium.
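
As a rough illustration of the analysis, here is a minimal sketch in which the absorbed laser power flows from the centre of the suspended sheet out to the two graphite heat sinks. The one-dimensional model and all the numbers below are illustrative assumptions, not figures from the paper.

    # Minimal 1D sketch: absorbed laser power P enters at the centre of the
    # suspended sheet and splits equally towards the two heat sinks, so
    # dT = (P/2)*(L/2)/(kappa*A)  =>  kappa = P*L/(4*A*dT)
    L = 3e-6       # suspended length between the heat sinks (m), assumed
    w = 3e-6       # sheet width (m), assumed
    t = 0.35e-9    # thickness of single-layer graphene (m)
    A = w * t      # conduction cross-section (m^2)
    P = 1e-3       # absorbed laser power (W), assumed
    dT = 0.135     # temperature rise inferred from the Raman shift (K), assumed
    kappa = P * L / (4 * A * dT)
    print(f"thermal conductivity ~ {kappa:.0f} W/(m K)")  # ~5300 with these inputs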

Balandin told physicsworld.com that the team were surprised to find that graphene is a much better conductor of heat than carbon nanotubes — even though some theoretical work had suggested that this could be possible.

Graphene’s high thermal conductivity is probably a result of the relative ease with which atomic vibrations can move through graphene compared with other materials. Balandin and colleagues are now working on a theory that explains why this is so.

Balandin believes that graphene’s high thermal conductivity, flat shape and ability to be integrated with silicon mean that it could play an important role in removing heat from electronic devices. The team are also working on the design of graphene-cooled ultrafast transistors.

Feeling the force on a single atom

Researchers have long used scanning tunnelling microscopes (STMs) to move single atoms around on the surface of a material with atomic-scale precision — allowing them to make nanometre-scale structures such as “quantum corrals”, which confine electrons to tiny regions on a surface. However, it has never actually been possible to measure the force required to move an individual atom, something that could improve our understanding of the structural and mechanical properties of materials.

Now, however, an international team of physicists has used a modified STM to measure the force needed to move a single cobalt atom on both platinum and copper surfaces (Science 319 1066). The breakthrough could help researchers build new nanoscale devices such as high-density magnetic memories.

An STM involves placing a tiny metal tip very near to a surface of interest and applying a voltage between the surface and tip. The tip is scanned with great precision across the surface and an image is generated by measuring the current of electrons that tunnel between tip and surface. The tip also exerts a force on the surface atoms and can be used to move individual atoms around on a surface.

Flexible tip

The strength of the tiny forces required to move an atom could, in principle, be measured by fitting an STM with a flexible tip that vibrates much like the prong of a tuning fork. When tip and atom are brought very close together, the force between the two would change the frequency of vibration — which is how some atomic force microscopes (AFMs) work. The problem is that a very stiff and stable tip is needed to move an individual atom with sufficient spatial accuracy, while a relatively floppy tip is needed to measure the force accurately.
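
The link between frequency and force here is the standard frequency-modulation relation: in the small-amplitude limit, the shift of the tip’s resonance frequency f₀ tracks the gradient of the tip–sample force,

    \Delta f \approx -\frac{f_0}{2k}\,\frac{\partial F_z}{\partial z}

where k is the stiffness of the oscillating prong and F_z is the vertical component of the force. A stiff prong (large k) keeps the tip stable, but makes the frequency shift, and hence the force signal, correspondingly small.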

Now, Markus Ternes and colleagues at IBM’s Almaden Research Center in California, the Swiss Federal Institute of Technology in Lausanne and the University of Regensburg in Germany have found a way around this dilemma. To do this, the team modified an STM by mounting the tip on one prong of a quartz oscillator similar to those used to keep time in wristwatches. The prong is about 40 times stiffer than the silicon cantilevers used in AFMs.

Hopping atoms

The force needed to move an atom was measured by scanning the vibrating tip back and forth above a cobalt-occupied adsorption site and an adjacent empty site. On each successive pass, the tip was lowered by as little as 10 pm at a time until it hovered less than 100 pm from the cobalt atom. At first, the atom did not move, but as the tip got closer to the surface, the force exerted by the tip caused the atom to hop to the adjacent site. By looking at how the vibrational frequency of the tip changed during the hop, the team were able to determine the threshold force required to move the atom.

Using the modified STM, Ternes and his team found that it took about 210 pN (2.1 × 10⁻¹⁰ N) to move a cobalt atom 160 pm (1.6 × 10⁻¹⁰ m) between two adjacent adsorption sites on platinum. While this might not seem like much of a force, it is about 10¹⁴ times the force of gravity on a cobalt atom. A much lower force of about 17 pN was needed to move cobalt on copper.
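
That comparison with gravity is easy to verify with a back-of-envelope calculation (ours, not the paper’s):

    # Rough check that 210 pN is of order 1e14 times the weight of a cobalt atom
    m_cobalt = 59 * 1.66e-27     # mass of a cobalt atom in kg (mass number ~59)
    weight = m_cobalt * 9.81     # gravitational force on the atom, ~1e-24 N
    f_threshold = 210e-12        # measured lateral force on platinum (N)
    print(f_threshold / weight)  # ~2e14, i.e. of order 10^14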

Tiny oscillation

According to Ternes, the team’s success was down to their ability to limit the amplitude of the tip oscillation to about 25–30 pm, which is a fraction of the distance between adsorption sites. As well as allowing the team to move the atoms with great precision, this tiny amplitude allowed them to study the short-range chemical-bonding forces that affect how the cobalt moves from one site to another.

Ternes told physicsworld.com that the team plan to use the instrument to assemble atomic-scale magnetic structures on insulating surfaces — something that a standard STM cannot do because, unlike an AFM, it is unable to image an insulator. The team believe that such structures could form the basis of very high-density data storage devices.

Earth is doomed (in 5 billion years)

Life will have fried, oceans will have boiled away, but no one has ever been sure what will happen to Earth itself when the Sun finally swells into a red giant. Now, astrophysicists from Mexico and the UK are forecasting a dismal fate for our rocky planet: it will get caught up in the Sun’s outer layers, spiral inwards and vaporize.

Like all dwarf stars, the Sun converts hydrogen nuclei into helium nuclei by fusion to produce immense amounts of radiation and outward pressure. But in five billion years or so the core will run out of hydrogen fuel, lose pressure and collapse under its own gravity. As this inward crush boosts the temperature of the core, the remaining shell of hydrogen around it will heat up and trigger a new period of fusion, which in turn will cause the Sun’s outer envelope to expand to around 250 times its current radius and cool from white to red.

Once the Sun is in this red-giant phase, Mercury will certainly be engulfed, and going by the increase in the Sun’s radius alone it would appear that Venus, Earth and Mars will suffer the same fate. During expansion, however, the Sun is also expected to shed mass in a powerful solar wind. The resultant drop in gravity will let the orbits of the planets drift outwards, and some models suggest that Earth — and possibly Venus — might escape the fiery death. Indeed, astrophysicists have even spotted a distant solar system in which a planet with Earth’s orbital radius has survived its star’s red-giant phase.
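
A back-of-envelope sketch of that outward drift, assuming the mass loss is slow and isotropic so that the product of orbital radius and solar mass stays constant; the retained mass fraction below is an illustrative assumption.

    # Orbital expansion under slow, isotropic solar mass loss (tides ignored).
    # For mass loss slow compared with the orbital period, a*M is conserved,
    # so a_final = a_initial * (M_initial / M_final).
    a_now = 1.0        # Earth's current orbital radius (AU)
    mass_kept = 0.67   # fraction of the Sun's mass retained, assumed
    a_red_giant = a_now / mass_kept
    print(f"orbit drifts out to ~{a_red_giant:.1f} AU")  # ~1.5 AU, before tidal drag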

Klaus-Peter Schroeder of Guanajuato University and Robert Smith of Sussex University are not so optimistic. They have performed calculations of Earth’s fate that include not only the favourable effects of solar mass loss, but also the speed of the Sun’s rotation, which will diminish as the Sun gets bigger. The Sun currently completes one rotation in about a month, but at red-giant size it will rotate once every few thousand years, allowing the Earth’s gravity to draw out a large tidal bulge on the solar surface. Such a bulge will haul the Earth back into the Sun’s outer layers, while the drag will steadily reduce the rocky planet’s orbital angular momentum. Earth will spiral inwards until it eventually vaporizes (Mon. Not. R. Astron. Soc. to be published; preprint available at arXiv:0801.4031).

“We would say this is the definitive answer to the fate of the Earth” Robert Smith, Sussex University

A history of fates

This is not the first time that tidal bulges have been taken into account when predicting Earth’s fate. In 1996, Mario Livio at the Space Telescope Science Institute and colleagues also found that the effect would be strong enough to swallow up Earth. Then, in 2001, Kacper Rybicki of the Polish Academy of Sciences and Carlo Denis of the European Centre for Geodynamics and Seismology in Luxembourg suggested that the Earth would survive in spite of tidal bulges.

But Rybicki and Denis’s analyses were mostly qualitative, while the calculations performed by Livio’s group were based on an old formula for the Sun’s mass loss. Schroeder and Smith, on the other hand, have performed their calculations using a recent mass-loss equation devised by Schroeder along with Manfred Cuntz from the University of Texas at Arlington, which was calibrated using precise observations. “We’re confident that our mass-loss equation is the best that’s currently available,” Smith told physicsworld.com. Furthermore, Schroeder and Smith have consulted with Jean-Paul Zahn of the Paris Observatory, who is regarded as the authority on tidal physics, to make sure they have considered the effect of tidal bulges properly.

Still, predictions for the fate of the Earth have been bandied about for decades, so is this the final say in the matter? “We would say this is the definitive answer to the fate of the Earth,” says Smith. “But I dare say someone will come along in a few years and say that we’re wrong.”

Smith points out, however, that any further developments will likely come to the same conclusion because he and Schroeder have “probably underestimated” the total drag. The solar wind, which the pair did not include in their calculations, should also hinder the motion of the Earth in its orbit and encourage it to spiral inwards.

Panel picked to review UK physics

The UK’s research councils have named the eight physicists who will take part in a review of the health of the country’s physics. The review, which will be led by physicist Bill Wakeham from Southampton University, was announced late last year by the science minister Ian Pearson. It was set up after an £80m shortfall in the budget of the Science and Technology Facilities Council (STFC) forced it to pull out of the Gemini telescopes, withdraw from the International Linear Collider (ILC), and cut university grants by 25% in particle physics and astronomy.

The panel will consist of six UK-based physicists: Martin Barstow (University of Leicester), Donal Bradley (Imperial), Mike Brady (Oxford), Christine Davies (Glasgow), Carlos Frenk (Durham) and Richard Friend (Cambridge).

They will be joined by two non-UK physicists — Joergen Kjems from the Danish Technical University and Richard Peltier from the University of Toronto. The panel, which will examine the provision of physics-based facilities and how they should be funded, will take views from a “range of individuals and organisations” both inside and outside the physics research community.

“The review will explicitly not revisit the decision of the STFC” Bill Wakeham, vice-chancellor Southampton University, UK

Although the STFC originally said that the review would report this summer, this has been denied by Research Councils UK, which says that the review will report its findings in the autumn. That has been confirmed by Wakeham, who told physicsworld.com that it would be published in September. “However, the review will explicitly not revisit the decision of the STFC [over Gemini and the ILC],” he said. “It will be looking into the longer-term future of the subject.”

Physicists are already sceptical that the review will do anything to reverse the current situation. “It is likely the review will just say that this shouldn’t happen again,” says Mark Lancaster, a particle physicist at University College London.

However, there has been some good news for UK researchers in recent weeks. Astronomers will now continue to get access to both the Gemini telescopes in Hawaii and Chile until at least July this year and discussions have begun with the Gemini board over whether access could be extended beyond that date.

After consulting physicists, the STFC is also expected to announce final details early next month of what projects will be affected by the £80m shortfall — although reversing the decision to pull out of the ILC is not on the cards.

Looking forward to Chicago 2009


For those of you who went to this year’s AAAS meeting in Boston, now is a chance to sip coffee, recover from jet lag and go over all those indecipherable notes you took so hastily. For those of you who didn’t go, I hope that my blog has given you a taster of the symposia pertaining most to physics.

There’s been a remarkable range of topics covered. I heard the Rwandan president Paul Kagame share his vision for scientific education in Africa, the AAAS president David Baltimore tell of the importance of US research, and Princeton University’s Harold Shapiro discuss how money should be allocated within science budgets. I saw eye-popping pictures generated by present-day supercomputer simulations, graphs depicting the awkward public opinion surrounding nanotechnology, and pamphlets explaining how to spot a Weapon of Mass Destruction. I spoke to folk artists about nuclear physics, directors about the status of forthcoming facilities, and press officers about mismatching meetings.

On a final valedictory note, I also found it refreshing to hear other journalists talk about the difficulties of reporting science, and scientists acknowledge the importance of the media in helping their cause. The general feeling at the meeting was that the media will have a great — and to a certain extent isolated — role to play in conveying important issues such as funding and climate change.

This time next year, physicsworld.com will be blogging from Chicago for the 2009 AAAS meeting. But if you can’t wait until then for more physics gossip, be sure to check in on 10 March when physicsworld.com editor Hamish Johnston and Physics World news editor Michael Banks will be blogging from this year’s American Physical Society (APS) meeting in New Orleans.

US missile strikes stricken satellite

A missile fired from the American Navy cruiser USS Lake Erie has successfully disabled a malfunctioning spy satellite that was about to enter Earth’s atmosphere. Marine General James Cartwright said there was an 80–90% chance that the missile, fired from the Pacific Ocean west of Hawaii, had hit the satellite’s fuel tank.

Officials had cited the tank’s 450 kg of hydrazine as the key reason for disabling the satellite. Had the fuel tank reached Earth, the toxic gas could have harmed individuals close by. Cartwright said that debris from the stricken satellite appeared to be too small, however, to cause damage on Earth.

The firing represented an early and serendipitous rehearsal of American anti-missile defence. The SM-3 missile used to hit the satellite is under development as part of the American Navy’s sea-based missile defence system. It is intended to provide protection against medium- and long-range ballistic missiles.

Cartwright said that the missile collided with the 2250 kg satellite some 210 km above the Earth’s surface, at a combined speed of 35,000 kilometres per hour. “The technical degree of difficulty was significant here,” he said.

“It’s a brilliant public relations opportunity for the ballistic missile defence system.” Ivan Oelrich, Federation of American Scientists

However, the difficulty of hitting a satellite in low orbit hardly compares with that of striking a missile in flight. “This is easier than the ballistic missile intercept that the system was designed for,” explains Ivan Oelrich, a security specialist at the Federation of American Scientists. “The satellite is a substantially larger target — probably between the size of a minivan and a small school bus. It’s also on a very predictable trajectory. And it’s high enough to predict with high accuracy where it will be.” The US Department of Defense chose to fire from a ship because the ship could be aligned with the track of the satellite.

Missile showcase

Oelrich is among critics who see the event as less a technical test than a chance for the Bush Administration to showcase its hotly disputed missile defence system. “I don’t think the public safety argument holds up. The US produces 16 million kilograms of hydrazine per year, which we ship around in trucks,” Oelrich says. “But it’s a brilliant public relations opportunity for the ballistic missile defence system. We were told that it has saved us from a grave threat.”

Critics have also suggested that the US destroyed the satellite because it feared that its ultrasecret technology could be made public if large parts of the satellite survived the plunge through the Earth’s atmosphere. That, however, seems unlikely. Analysts suggest that such technology would be designed to burn up before reaching Earth.

More troubling is that the event appears to be a response to China’s use of a ground-based missile to destroy a satellite last year. “We should be working on a treaty to ban antisatellite weapons,” Oelrich says. “But the Bush Administration opposes it and says it couldn’t be verified.”

Scientists probe fireballs with X-rays

Scientists have been baffled for centuries by the strange drifting balls of light that appear occasionally during thunderstorms. Theories put forward so far suggest that this “ball lightning” is either a moving electrical discharge or some kind of self-contained object. Now, research from an Israeli group is making the latter seem more likely. The scientists have created artificial fireballs and then used the European Synchrotron Radiation Facility (ESRF) in Grenoble to analyse their composition.

Although a rare phenomenon, ball lightning has been glimpsed by several thousand people around the world, each of whom seems to recount a different set of properties. Diameters range from a centimetre to over a metre, colours are anything from red to white to blue, while motion encompasses both vertical descent and horizontal meandering. The fireballs have even been seen entering buildings via chimneys and windows. However, such sightings are always unexpected, so there is seldom an opportunity to observe ball lightning systematically.

A number of research groups are therefore trying to recreate ball lightning in the lab. Two years ago, electrical engineers Eli Jerby and Vladimir Dikhtyar of Tel Aviv University in Israel were able to make artificial fireballs by focusing microwaves onto substrates made from silicon and other solids placed inside a shoebox-sized cavity. They melted part of the silicon by delivering microwaves through a metal tip that, when pulled away, could drag some vaporized silicon with it. This created a column of fire that eventually detached to form a buoyant, quivering fireball coloured orange, red and yellow.

Now, Jerby and Brian Mitchell from the University of Rennes 1 in France — together with other researchers from the ESRF, Tel Aviv and Rennes — have tested the composition and properties of their fireballs by installing the cavity, which contains a substrate of borosilicate glass, in a beamline at the ESRF. After passing X-rays through the cavity and generating diffraction patterns, they discovered that the fireballs contain about 10⁹ particles per cm³, each of which has an average diameter of about 50 nm (Phys. Rev. Lett. 100 065001).

They believe that this observation supports a theory put forward by John Abrahamson, a chemical engineer at the University of Canterbury in New Zealand, proposing that ball lightning occurs when ordinary lightning vaporizes carbon and silicon oxides within soil, allowing the carbon to chemically reduce the silicon into its elemental form. The silicon atoms then cool, condense and group together into nanoparticles, which oxidize in the surrounding air and give off thermal radiation.

This latest research does not solve the mystery of ball lightning, however. Whereas many witnesses have reported glowing orbs that persist for several seconds — sightings that back up Abrahamson’s theory — Jerby and Dikhtyar’s fireballs glow for just 30–40 ms once the microwave source is turned off.

The Israeli group speculate that the mechanism responsible for the longevity of ball lightning in Abrahamson’s theory is masked in their experiments. Abrahamson points out that as the silicon nanoparticles oxidize, the rate at which further oxygen molecules can reach the silicon diminishes, thereby slowing the dissipation of the nanoparticles’ chemical energy. Jerby says that this oxidation process may occur while the silicon is being illuminated with microwaves, so that the nanoparticles’ chemical energy is almost spent by the time the microwave source is removed.

In addition to this problem of fireball lifetime, Abrahamson’s theory has still to explain exactly how ball lightning can pass through windows, walls and other objects. Jerby, however, is optimistic that these problems can be overcome and that eventually real ball lightning will be recreated in the laboratory. He also believes that his group’s work could have important practical applications, such as producing nanoparticles directly from solid materials.

Researchers create ‘self-healing’ rubber

Rubbery materials can be easily stretched, but it is not easy to mend them when they break, as anyone who has ever had a punctured car tyre will know. Now, however, researchers in France have created a unique new rubber-like material that can “self-heal” at room temperature. If the material is snapped in half, the two torn pieces can be made to mend themselves simply by bringing the broken surfaces back in contact with each other (Nature 451 977).

The new “supramolecular rubber” has been created by Ludwik Leibler and colleagues at the Ecole Supérieure de Physique et Chimie Industrielles (ESPCI/CNRS) in Paris, France. It consists of “fatty acids” — short chains of carbon atoms — linked together via hydrogen bonds to form a macroscopic 3D network. The material behaves just like an ordinary rubber in that it can stretch to several times its normal length when pulled.

But if the material is cut in half, the two broken pieces of the rubber can self-heal when brought together and simply held in contact for a few minutes. The fracture mends and the material can be stretched and pulled in all directions again. “It is important to stress that the material is not self-adhesive,” Leibler told physicsworld.com. “The surfaces of the material are never sticky to the touch and feel like a rubber band or a plastic bag. Self-mending is possible even 12 hours after the fracture occurred.”

Magic healing

Conventional rubbers usually consist of long polymer chains linked together by chemical bonds. However, the new material consists of small molecules that can link to two or more other molecules through hydrogen bonds, which are much weaker. If the material is fractured, any “open” hydrogen bonds on the surface of the broken material seek out other non-linked open hydrogen bonds, allowing the two broken halves to re-form their links.

“The potential applications are manifold,” say Justin Mynar and Takuzo Aida of the School of Engineering at the University of Tokyo, writing in the same issue of Nature. “Tears in clothes that effectively stitch themselves together, long-lasting coatings and paints for houses and cars, and to take one example on the medical front, self-repairing artificial bones and cartilage.”

Other potential applications include adhesives, cosmetics, ink-jet printing, electronics and building materials for the construction industry. The work was carried out in collaboration with the chemical company Arkema, which is now planning to commercialize some products and materials based on this supramolecular technology.

Vector inflation points the way

You’ve got to love physics humour. To mark Valentine’s day last week, Fermilab’s in-house magazine ran a spoof personal ad that stated: “mature paradigm with firm observational support seeks a fundamental theory in which to be embedded”. It was referring to inflation — a period of exponential expansion thought to have taken place 10⁻³⁵ s after the big bang, which, although able to account for the large-scale appearance of the universe, lacks a firm theoretical footing.

By chance, that same day three theorists posted a paper on the arXiv preprint server that could help remedy this situation. Viatcheslav Mukhanov and co-workers at Ludwig Maximilians University in Munich, Germany, have proposed a model in which inflation is driven by vector fields as opposed to scalar fields as it is in existing models (arXiv:0802.2068v1). Although their model does not fundamentally explain inflation, vector fields are already known to exist in nature whereas scalar fields are not.

Smooth and flat

Inflation, which was developed in the early 1980s, smoothes out anisotropies that were present immediately after the big bang by causing the universe to expand by a factor of at least 10³⁰ in a fraction of a nanosecond. Without inflation, it is difficult for cosmologists to explain why the universe looks roughly the same no matter which direction they look or why the geometry of the universe is essentially flat.
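
For scale, cosmologists usually quote that expansion in “e-folds”, the natural logarithm of the expansion factor; a quick check of the figure above:

    # An expansion factor of 1e30 corresponds to ln(1e30) e-folds
    import math
    print(math.log(1e30))  # ~69, consistent with the 60-70 e-folds usually quoted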

But the underlying theory is somewhat ad hoc. In most models, the rapid expansion of the early universe is driven by “negative pressure” produced when the potential energy of a scalar field drops from one state to a lower one. Quantum fluctuations in this field, which is associated with a fundamental spin-zero particle called the inflaton, would have been blown up to cosmic scales and produced density perturbations that later caused matter to clump together into galaxies.

“Ours is not a better model than scalar-field inflation, but it is also not worse.” Viatcheslav Mukhanov, Ludwig Maximilians University

This picture has recently gained strong support from measurements of the distribution of hot and cold spots in the cosmic microwave background. But physicists have never seen spin-zero particles and therefore have no real clue about the microscopic origins of inflation.

On the other hand, fundamental vector fields — which lead to spin-one particles — are known to exist in nature: the photon and the W and Z bosons, for example. The trouble is that vector fields tend to “pick” a preferred direction in space and therefore ruin inflation’s biggest selling point. Mukhanov and colleagues have now overcome this problem by invoking several vectors which, when averaged, lead to a nearly isotropic universe.

Random orientations

According to Larry Ford of Tufts University in the US, who in the late 1980s came up against the problem of anisotropy himself when attempting to build a vector-inflation model, the new model is analogous to the pressure in a gas. “A gas consists of many molecules, each with a specific velocity, but when the effects of the various molecules are averaged the result is a pressure that is isotropic to a very high degree of accuracy,” he explains.

“Ours is not a better model than scalar-field inflation, but it is also not worse,” says Mukhanov, who was one of the first to calculate the fluctuations in the inflaton field. “In addition, the model allows us to get a certain amount of anisotropy during inflationary expansion which could modify the produced perturbations and thus have observational imprints in the cosmic microwave background.”

Because the model requires many vector fields orientated in random directions to produce the required isotropy of the large-scale universe, its predictions depend heavily on the way the masses of these fields are distributed, which is not a desirable property. The upside is that the presence of fields with extremely small masses naturally accounts for the current phase of cosmic acceleration as well as inflation, although team member Alexey Golovnev is quick to point out that scalar fields can also do this job.

Optical lattice beats atomic-clock accuracy

Researchers in the US have built a new optical clock from strontium atoms that is accurate to about one part in 10¹⁶ — meaning it would neither gain nor lose a second in more than 200 million years. While the clock is not the world’s most accurate — that honour goes to an optical clock based on a single mercury ion, which is accurate to several parts in 10¹⁷ — the strontium device is the world’s most accurate clock to use neutral atoms.
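
The headline figure follows from simple arithmetic: at a fractional inaccuracy of one part in 10¹⁶, an error of a full second accumulates only after 10¹⁶ seconds.

    # How long before a clock with fractional inaccuracy 1e-16 is out by 1 s?
    seconds_per_year = 365.25 * 24 * 3600   # ~3.16e7 s
    years = 1e16 / seconds_per_year
    print(f"~{years/1e6:.0f} million years")  # ~317 million, comfortably over 200 million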

The clock, which was built by Jun Ye and colleagues at JILA and the University of Colorado in Boulder, is based on thousands of strontium atoms that are trapped in an “optical lattice” made from overlapping infrared laser beams (Sciencexpress). The atoms are bathed in light from a separate red laser at a frequency corresponding to an atomic transition in strontium, which causes this light to lock into the precise frequency of the transition. The oscillation of this light is just like the ticking of a clock — the first neutral-atom timekeeper to be more accurate than the standard caesium-fountain atomic clock.

3.5-km optical fibre

The team determined the clock’s accuracy by sending its time signal via a 3.5-km underground optical fibre from JILA to the National Institute of Standards and Technology (NIST) in Boulder, where the signal was compared to that from an optical clock based on neutral calcium atoms. This is the first time that researchers have been able to compare signals from two optical clocks separated by several kilometres in this way.

The time signals from the clocks were compared using two “frequency combs” located at either end of the fibre link. The two clocks produce light at different frequencies, and the combs are used to convert the signals to a lower common frequency. After being shifted to the lower frequency, the strontium signal was amplified using a laser and then transmitted to NIST.

Adding anti-noise

The two signals were then combined and the physicists looked for signs of interference — or “beating” — which occurs if the time signals have slightly different frequencies. Ye told physicsworld.com that a key challenge in transmitting the signal was to ensure that it was not disrupted by noise in the optical fibre. This was done by monitoring the noise in the fibre and introducing an “anti-noise” signal to cancel it out.

The team is now trying to improve the precision of the clock — a measure of how reliable its time signal is — by increasing the number of atoms used. The team also plan to use their technique to compare the strontium clock to a mercury-ion clock at NIST.

New standard

Most physicists believe that an optical clock will someday replace the caesium-fountain atomic clock as the time standard. That is because the stability of an atomic clock is proportional to its operating frequency, which means that clocks based on narrow transitions at optical frequencies will be much more stable than those based on much lower frequency microwave transitions. Clocks need to be both stable and accurate, with greater stability making it easier and faster to work out how accurate a clock really is.
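
A common textbook way to express that scaling (an estimate of ours, not the paper’s analysis) is the quantum-projection-noise limit on the fractional frequency instability,

    \sigma_y(\tau) \sim \frac{\Delta\nu}{\nu_0}\,\frac{1}{\sqrt{N\,\tau/T_c}}

where Δν is the linewidth of the clock transition, ν₀ its frequency, N the number of atoms interrogated, T_c the cycle time and τ the averaging time. Raising ν₀ from a microwave transition (about 9.2 GHz in caesium) to an optical one (about 4.3 × 10¹⁴ Hz in strontium) lowers the instability by orders of magnitude for a comparable linewidth.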

However, several different optical clocks are being developed around the world and no clear frontrunner has emerged. The best single-ion clocks are currently more accurate than their neutral-atom counterparts, but their signal comes from just one ion, making them noisier and perhaps less practical than neutral-atom clocks.

Patrick Gill of the UK’s National Physical Laboratory, which is also developing optical clocks, described JILA’s strontium clock as an “impressive step on the way [to an optical standard]”, but added that it “may not be a clear winner”. According to Gill, while strontium is a popular candidate for a next-generation time standard, there are about a dozen candidates for the job — both neutral atoms and ions.
