
Physicists unveil plans for ‘LEP3’ collider at CERN

A group of physicists from Switzerland, Japan, Russia, the US and the UK has proposed using the tunnel that currently houses the Large Hadron Collider (LHC) at the CERN particle-physics lab near Geneva for a dedicated machine to study the Higgs boson. The facility, dubbed LEP3, is named after CERN’s previous accelerator, the Large Electron–Positron Collider (LEP), which occupied the tunnel before being shut down in 2000. In a preliminary study submitted to the European Strategy Preparatory Group, LEP3’s backers say that the machine could be constructed within the next 10 years.

The plans for LEP3 come just weeks after physicists working at CERN reported that they had discovered a new particle that bears a striking resemblance to a Higgs boson, as described by the Standard Model of particle physics. The ATLAS experiment measured its mass at around 125 GeV and the CMS experiment at 126 GeV.

LEP3 would operate at 240 GeV and comprise two separate accelerator rings that would collide electrons with positrons, rather than protons with protons as at the LHC. In their study, the 20 authors call the LEP3 concept “highly interesting” and say that it deserves more detailed study. “Now is the right moment to get this on the table,” says theorist John Ellis of King’s College London in the UK, who is an author of the preliminary study and hopes that it will trigger debate among physicists as to how to study the new boson in detail.
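As a rough back-of-the-envelope check (my own arithmetic, not part of the proposal): the dominant Higgs-production process at an electron–positron collider of this energy is expected to be “Higgsstrahlung”, e⁺e⁻ → ZH, whose threshold lies at roughly √s ≈ m_Z + m_H ≈ 91 GeV + 125 GeV ≈ 216 GeV, with the production cross-section peaking a few tens of GeV above threshold – which is why a collision energy of around 240 GeV is a natural operating point for a dedicated Higgs factory.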

Tunnel vision

LEP3 is designed to be installed in the LHC tunnel and serve the LHC’s two general-purpose detectors – ATLAS and CMS. If LEP3 is to be built, it will have to fight off two rival proposals for a future collider to study the Higgs – the International Linear Collider (ILC) and the Compact Linear Collider (CLIC). But Ellis says that one advantage of LEP3 is that the tunnel to house it already exists and the collider would use existing infrastructure, such as cryogenics equipment, making LEP3 more cost-effective. LEP3 would also use conventional methods to accelerate particles, rather than the superconducting accelerating cavities that will be employed by the ILC.

Which collider is built to succeed the LHC will depend on what the LHC discovers in the next couple of years after it has run at its full design energy of 14 TeV. If it turns out that the LHC finds only the Higgs, then Ellis says there would be a strong case for LEP3. But if more particles are discovered by the LHC – such as supersymmetric particles – it would make sense to consider the other two proposals. “LEP3 could be a more secure option than the ILC if only a Higgs is discovered,” Ellis told physicsworld.com. “But, of course, it would be foolish to choose anything now, given that the LHC has not hit full energy yet.”

CERN plans to run the LHC into the 2030s after it has undergone a major upgrade in energy and luminosity in the coming decade. However, Ellis thinks that it may even be possible for the LHC and LEP3 to cohabit for a short time. “It would not be ideal, but it could be something to think about,” says Ellis. “If the LHC does not discover anything beyond the Higgs, then would you keep running it for years?”

“Little scope”

Yet some physicists disagree that LEP3 represents the best way to study the Higgs, arguing that a choice would have to be made between building LEP3 and running the high-luminosity upgrade to the LHC in the 2020s. “They both have an excellent physics case, but somehow LEP3 presents less chance of a huge breakthrough,” says one leading CERN researcher who prefers not to be named. “[The LHC upgrade] has precision measurements as well as discovery reach to offer.”

That view is shared by linear-collider director Lyn Evans, who told physicsworld.com that he thinks it is unlikely that the proposal for LEP3 will get very far. “The first job is to fully exploit the LHC and all its upgrades,” says Evans, who led the construction of the LHC. “This is at least a 20-year programme of work, so I think that it is very unlikely that the LHC will be ripped out and replaced by a very modest machine with little scope apart from studying the Higgs.”

Graphene logic for the real world

Researchers in Italy and the US have created the first integrated graphene logic gates that work in air and at room temperature. The work represents an important milestone in the development of graphene-based logic, says the team.

The devices are also the first graphene logic gates to operate with matched-voltage input and output digital signals, explains team leader Roman Sordan of the Politecnico di Milano at Como. Such an operation is the main prerequisite for practical use of this type of gate. “Moreover, our gates are integrated on a type of graphene that can easily be grown over large areas, thus paving the way for mass production of such carbon-based electronic devices,” he says.

To continue making more-powerful computers in the future, electronic devices must be able to perform simple logic tasks at ever-faster speeds. Conventional silicon chips are limited by how readily charge carriers move through the material, a property known as carrier mobility. Graphene – a sheet of carbon atoms just one atom thick – is often touted as being the silicon of the future because it could overcome this problem thanks to its unique electronic properties, which include very high electron and hole mobilities.

First graphene inverter

Sordan and colleagues made the first ever graphene integrated circuit back in 2009, when they fabricated a complementary inverter – the main building block of modern digital electronics. Although fully functioning, this device was not suited for real-world applications because it was made using exfoliated graphene in a process that cannot be scaled up to industrial levels. What is more, the device did not operate with matched-voltage input and output signals. “Without such signal matching, logic gates cannot be ‘cascaded’ – that is, one logic gate is incapable of triggering its neighbour – and so complex logic functions cannot be realized,” explains Sordan.

Since this early work, other research groups have been working on producing signal matching in graphene inverters. However, even the best devices only appear to work at very low temperatures and are based on exfoliated graphene samples.

CVD graphene to the rescue

Sordan and colleagues have now made inverters from graphene grown on wafers by chemical vapour deposition – a process that is amenable to mass production. The devices are capable of digital-signal matching and also operate at room temperature and in air. But that is not all. “We have also demonstrated the highest voltage gain of 5.3 reported so far in CVD graphene under ambient conditions, which is instrumental in matching digital signals,” says Sordan. “In 2009 we only had a gain of 0.04 – and that was in exfoliated graphene devices.”

The researchers achieved their feat in a self-aligned device design similar to a graphene amplifier that they had made previously. “We have not only demonstrated signal matching in our new experiments, but have cascaded graphene logic gates into more complex circuits too,” adds Sordan. “Such results have never been seen at any temperature until now – cryogenic or otherwise.”
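To see why a voltage gain above unity is the key to cascading, here is a minimal toy model (my own illustration, with an assumed supply voltage and switching threshold, not the authors’ device physics): each inverter is treated as a linear amplifier of gain –A about its switching point, clipped to the supply rails, and a slightly degraded logic level is passed down a chain of such gates. With A = 5.3 the signal is pushed back to the rails within a stage or two, whereas with the 2009 figure of A = 0.04 it collapses towards the midpoint and downstream gates can no longer tell a 0 from a 1.

```python
# Toy model of cascaded inverters (illustrative only; values are assumptions).
V_DD = 1.0          # assumed supply voltage
V_MID = V_DD / 2    # assumed switching threshold

def inverter(v_in, gain):
    """Linear inverter of gain -A about V_MID, clipped to the supply rails."""
    v_out = V_MID - gain * (v_in - V_MID)
    return min(max(v_out, 0.0), V_DD)

def cascade(v_in, gain, stages=6):
    """Pass a logic level through a chain of identical inverters."""
    levels = [v_in]
    for _ in range(stages):
        levels.append(inverter(levels[-1], gain))
    return levels

# A '1' degraded to 0.6 V: gain 5.3 regenerates it, gain 0.04 destroys it.
print("gain 5.3 :", [round(v, 2) for v in cascade(0.6, 5.3)])
print("gain 0.04:", [round(v, 2) for v in cascade(0.6, 0.04)])
```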

And if that was not enough, the team says that it has also succeeded in identifying the parameters needed for cascading.

Doubts put to rest?

“There have been some doubts in the scientific community in the past about using graphene logic gates in real-life applications,” Sordan says. “We now show that graphene logic gates can operate in everyday, ambient conditions and that these gates might also be used in a plethora of applications.” Indeed, the gates already produced have larger voltage swings than emitter-coupled logic (ECL) gates – the fastest logic family that exists today. ECL gates are used for digital signal processing at extremely high frequencies of above 100 GHz – a range that is currently inaccessible to conventional state-of-the-art CMOS technology.

One drawback of the technology that the team is working to overcome is that the graphene gates cannot yet be used for low-power applications because the power dissipation remains too high.

The team, which includes researchers from Eric Pop’s group at the University of Illinois at Urbana-Champaign in the US, describes its work in Nano Letters.

Pier Oddone to step down as Fermilab's director

By Tushna Commissariat


Fermilab’s director, Pier Oddone, announced yesterday that he is retiring after heading one of the world’s biggest particle-physics laboratories for eight years. Oddone will continue in his role as director until July 2013, which ought to give a committee appointed by the Fermi Research Alliance (FRA) board of directors, which manages the lab, plenty of time to find a suitable successor. Oddone joined Fermilab as its fifth director in 2005, after a long stint at the Lawrence Berkeley National Laboratory, where he also served as lab director for a time.

His time at Fermilab has been a busy and fruitful one, with many successes for the lab’s Tevatron collider, from contributions towards finding the Higgs boson to neutrino experiments, as well as research at the cosmic frontier.

“During Pier’s eight years as director, Fermilab has made remarkable contributions to the world’s understanding of particle physics,” says Robert Zimmer, chairman of the FRA board. “Pier’s leadership has ensured that Fermilab remains the centrepiece of particle physics research in the US, and that the laboratory’s facilities and resources are focused on ground-breaking discoveries.”

“Working with Fermilab’s employees and users from across the country and around the world is a wonderful experience. It has been an honour to partner with you over the past seven years to achieve significant milestones in the performance of our accelerators and detectors and in our contributions to the Large Hadron Collider,” says Oddone in an online message to Fermilab’s staff through their newsletter.

In September last year, the Tevatron accelerator was shut down. To find out more about life at Fermilab and the rather nomadic careers of particle physicists who worked there, tune in to the podcast put together by my colleague Margaret Harris, following visits to Fermilab and CERN. You can listen to the podcast here, or download it via this link.

Can the future affect the past?

What you do today could affect what happened yesterday – that is the bizarre conclusion of a thought experiment in quantum physics described in a preprint article by Yakir Aharonov of Tel-Aviv University in Israel and colleagues.

It sounds impossible, indeed as though it is violating one of science’s most cherished principles – causality – but the researchers say that the rules of the quantum world conspire to preserve causality by “hiding” the influence of future choices until those choices have actually been made.

At the heart of the idea is the quantum phenomenon of “nonlocality”, in which two or more particles exist in interrelated or “entangled” states that remain undetermined until a measurement is made on one of them. Once the measurement takes place, the state of the other particle is instantly fixed too, no matter how far away it is. Albert Einstein first pointed out this instantaneous “action at a distance” in 1935, when he argued that it meant quantum theory must be incomplete. Modern experiments have confirmed that this instantaneous action is, in fact, real, and it now holds the key to practical quantum technologies such as quantum computing and cryptography.

Aharonov and his co-workers describe an experiment for a large group of entangled particles. They argue that, under certain conditions, the experimenter’s choice of a measurement of the states of the particles can be shown to affect the states that the particles were in at an earlier time, when a very loose measurement was made. In effect, the earlier “weak” measurement anticipates the choice made in the later “strong” measurement.

4D rather than 3D

The work builds on a way of thinking about entanglement called “two-state vector formalism” (TSVF), which was proposed by Aharonov three decades ago. TSVF considers the correlations between particles in 4D space–time rather than 3D space. “In three dimensions it looks like some miraculous influence between two distant particles,” says Aharonov’s colleague Avshalom Elitzur of the Weizmann Institute of Science in Rehovot, Israel. “In space–time as a whole, it is a continuous interaction extending between past and future events.”

Aharonov and team have now discovered a remarkable implication of TSVF that bears on the question of what the state of a particle is between two measurements – a quantum version of Einstein’s famous conundrum of how we can be sure the Moon is there without looking at it. How can you find out about the particles without measuring them? TSVF shows that it is possible to get at the intermediate information – by making sufficiently “weak” measurements on a bunch of entangled particles prepared in the same way and calculating a statistical average.

Gentle measurements

The theory of weak measurement – first proposed and developed by Aharonov and his group in 1988 – dictates that it is possible to “gently” or “weakly” measure a quantum system and to gain some information about one property (say, position) without appreciably disturbing the complementary property (momentum) and therefore the future evolution of the system. Though the amount of information obtained in each measurement is tiny, the average of many such measurements gives an accurate estimate of the property without distorting its final value.

Each weak measurement can tell you something about the probabilities of different states (spin value up or down, say) – albeit with a lot of error – without actually collapsing the particles into definite states, as would happen with a strong measurement. “A weak measurement both changes the measured state and informs you about the resulting localized state,” says Elitzur. “But it does both jobs very loosely, and the change it inflicts on the system is weaker than the information it gives you.”

As a result, Elitzur explains, “every single weak measurement in itself tells you nearly nothing. The measurements provide reliable outcomes only after you sum them all up. Then the errors cancel out and you can extract some information about the ensemble as a whole.”
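To make the statistics concrete, here is a minimal numerical sketch (a toy model of my own, not the authors’ protocol): each simulated weak measurement returns the true expectation value of the observable plus noise far larger than the signal, so any single outcome is almost meaningless, but the average over N outcomes converges on the true value, with the error shrinking as 1/√N.

```python
import numpy as np

# Toy weak-measurement statistics (illustrative assumptions throughout).
rng = np.random.default_rng(seed=1)

true_expectation = 0.6     # assumed expectation value of the measured observable
measurement_noise = 10.0   # weak coupling: noise far larger than the signal

for n in [1, 100, 10_000, 1_000_000]:
    outcomes = true_expectation + measurement_noise * rng.standard_normal(n)
    estimate = outcomes.mean()
    expected_error = measurement_noise / np.sqrt(n)
    print(f"N = {n:>9}: estimate = {estimate:+.3f} (statistical error ~ {expected_error:.3f})")
```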

In the researchers’ thought experiment, the results of these weak measurements agree with those of later strong measurements, in which the experimenter chooses freely which spin orientation to measure – even though the particles’ states are still undetermined after the weak measurements. What this means, explains Elitzur, is that within TSVF “a particle between two measurements possesses the two states indicated by both of them, past and future”.

Nature is fussy

The catch is that only by adding the subsequent information from the strong measurements can one reveal what the weak measurements were “really” saying. The information was already there – but encrypted and only exposed in retrospect. So causality is preserved, even if not exactly as we normally know it. Why there is this censorship is not clear, except from an almost metaphysical perspective. “Nature is known to be fussy about never appearing inconsistent,” says Elitzur. “So it is not going to appreciate overt backward causality – people killing their grandfathers and so on.”

Elitzur says that some specialists in quantum optics have expressed interest in conducting the experiment in the laboratory, which he thinks should be no more difficult than previous studies of entanglement.

Charles Bennett of IBM’s T J Watson Research Center in Yorktown Heights, New York, who is a specialist in quantum-information theory, is not convinced. He sees TSVF as only one way of looking at the results, and believes that the findings can be interpreted without any apparent “backward causation”, so that the authors are erecting a straw man. “To make their straw man seem stronger, they use language that obscures the crucial difference between communication and correlation,” he says. He adds that it is like an experiment in quantum cryptography in which the sender sends the receiver the decryption key before sending (or even deciding on) the message, and then claims that the key is somehow an “anticipation” of the message.

However, Aharonov and colleagues suspect that their findings might even have implications for free will. “Our group remains somewhat divided on these philosophical questions,” says Elitzur. Aharonov’s view, he says, “is somewhat Talmudic: everything you’re going to do is already known to God, but you still have the choice.”

A preprint of the work is available on the arXiv server.

Ultracold atoms simulate electrical conduction

Physicists in Switzerland are the first to use ultracold atoms to simulate one of the most technologically important properties of solid matter: electrical conduction. The experiment involves watching lithium atoms as they pass through a tiny channel created by laser light. The team has shown that atoms that travel straight through a channel without experiencing any disorder display ohmic conduction, just like atoms that bounce their way through a disordered channel.

Ensembles of ultracold atoms have been used to simulate a wide range of condensed-matter physics including aspects of magnetism and superconductivity. Atomic gases can provide important insights into the quantum nature of matter because, unlike the electrons in a solid, the interactions between ultracold atoms can be controlled precisely using laser light and magnetic fields.

In principle, electrical conduction could be simulated by allowing ultracold atoms to move through a channel from one reservoir to another. By measuring the change in density of the gas as it passes through the conducting region, physicists could study the conduction process in ways that are not possible with real conductors. This would be of particular interest to those designing extremely small electronic circuits where quantum effects can play an important role in conduction.

Watching the channel

Creating such a system has proved difficult, and so has imaging the conduction region – but both of these challenges have now been overcome by Tilman Esslinger and colleagues at ETH Zürich. The team used a microscope objective to obtain high-resolution images of a conduction channel that is about 18 µm wide and about 30 µm long. The channel is created by focusing a laser into two lobes – with the channel existing in the dark region between the lobes.

The channel separates two reservoirs that are filled with a gas of about 40,000 lithium-6 atoms that has been cooled to 250 nK. A magnetic field gradient is applied to the system during the cooling process, which results in a higher density of atoms in the reservoir on the right than on the left.

The field gradient is then switched off and atoms begin to conduct through the channel from right to left. An important property of the gas is that the average distance an atom travels before it collides with another atom is more than 40 times the length of the channel. This means that most atoms will fly straight through the channel like bullets – and this ballistic conduction is something that is believed to occur in tiny conductors such as carbon nanotubes.

Ohm’s law for atoms

Esslinger and team first focused on this ballistic case and found that conduction occurred in a manner analogous to ohmic electrical conduction – that is, the rate at which atoms move from one reservoir to the other (analogous to electrical current) is proportional to the difference in the numbers of atoms in the two reservoirs (analogous to electrical voltage) multiplied by a constant that is related to the conductance of the channel. This analogy is strengthened by the fact that, like the electron, lithium-6 is a fermion. This means that at low temperatures the energy levels of the gas resemble those of an ensemble of electrons.
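In symbols (a sketch in my own notation, not the paper’s): writing ΔN for the atom-number imbalance between the reservoirs and I for the atom current through the channel, the ohmic behaviour described above amounts to I ∝ ΔN. The imbalance therefore relaxes exponentially, ΔN(t) = ΔN(0) e^(–t/τ), just as the charge on a capacitor decays when it is discharged through a resistor, with the decay time τ playing the role of the RC time constant and encoding the conductance of the channel.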

Ohmic conduction is expected when atoms jostle their way through a disordered channel scattering in a diffusive manner. Therefore it might – at first glance – seem strange that ohmic behaviour applies to ballistic atoms that don’t collide as they move through the channel. The key to understanding this, according to Esslinger, is thinking about what happens to the atoms at the boundary between the right reservoir and the channel. At this point, an atom can either fly straight through the channel or be reflected back in a quantum-mechanical process first described in 1957 by the German-American physicist Rolf Landauer.

So even though the atoms don’t scatter as they travel through the channel, their scattering at the entrance results in an ohmic conductance. The behaviour was verified by using the microscope to measure the density of atoms throughout the channel. This revealed an abrupt change in density at the interface with the right reservoir and a relatively constant density throughout the channel, confirming that most of the scattering occurs at the interface rather than in the channel.
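For reference, the Landauer picture invoked here can be summarized (again in my own notation, as an illustrative aside) by writing the particle conductance of the channel as a sum over its transverse modes, G = (1/h) Σₙ Tₙ, where Tₙ is the transmission probability of mode n and h is Planck’s constant. Even for perfectly transmitting modes (Tₙ = 1) the conductance stays finite, because only a finite number of modes fit through the constriction – it is this “contact” effect at the reservoir–channel interface, rather than anything happening inside the channel, that sets the ballistic conductance. For electrons the same expression, multiplied by e², gives the familiar conductance quantum.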

Linear density drop

The team then repeated the experiment with the parameters of the channel changed so that diffusive scattering occurs. This is done by introducing laser speckle to the channel. Again the researchers measured ohmic behaviour, but with a different conductance constant. Microscope images reveal that the atomic density drops linearly along the channel – which suggests that the conduction is diffusive.

Now that they have performed the studies in an ultracold atomic gas, Esslinger and colleagues are keen to study ballistic conduction when the atoms interact with each other. This can be done by applying carefully selected magnetic fields to the atoms to create so-called Feshbach resonances.

The research is reported in Science.

If you were to give $27m to physics, what would be most beneficial to the subject?

By James Dacey


Nine physicists have found themselves significantly richer this week, after picking up $3m each in prize money. The nine men, listed in this blog by my colleague Michael Banks, are the inaugural winners of the Fundamental Physics Prize, a new award that was only unveiled to the world on Tuesday.

The prize is funded by the Russian investor Yuri Milner, who completed a degree in physics at Moscow State University before eventually becoming an entrepreneur and venture capitalist. Milner has made his money by investing in start-up companies, apparently finding particular success through his investments in Internet firms such as Facebook, Twitter and Zynga.

Milner’s latest project is to launch the Fundamental Physics Prize Foundation, which according to its website is “a not-for-profit corporation dedicated to advancing our knowledge of the universe at the deepest level by awarding annual prizes for scientific breakthroughs, as well as communicating the excitement of fundamental physics to the public”.

One of the recipients, Andrei Linde from Stanford University, told physicsworld.com that he hopes the prize will “increase [the] prestige and morale of all people in [the] scientific community”. Another winner, Ashoke Sen from the Harish-Chandra Research Institute in India, focused on the impact the new prizes might have on the scientific community of tomorrow. Speaking to the Indian Express, he said “I see it [the award] more as a sort of entitlement…encouragement to younger people to take interest in fundamental science.”

If you had the funds, how would you splash the cash on physics? Please let us know your opinion by taking part in this week’s Facebook poll.

If you were to give $27m to physics, what would be most beneficial to the subject?

Prizes for high-achieving scientists
Funding a large number of PhDs
Investing in research institutions
Funding competitions that have clear targets
Supporting high school education

Have your say by visiting our Facebook page, and please feel free to explain your response – or suggest another way of spending the cash – by posting a comment below the poll.

In last week’s poll we asked whether you find that regular exercise helps you to focus when studying. The outcome was conclusive, with 91% of respondents saying yes. The question was inspired by the sad news of the death of Sally Ride. Ride made history in 1983 by becoming the first American woman in space, combining her physics training with her passion for physical activity.

Thank you to everyone who took part and we look forward to hearing from you again in this week’s poll.

Nanowires give vertical transistors a boost

Researchers in Japan have made an important advance in developing a new type of silicon-based transistor by successfully creating vertical transistors from semiconducting nanowires on a silicon substrate. The wires, made from indium gallium arsenide (InGaAs), are surrounded by 3D – rather than planar-shaped – gates, with the finished devices having extremely good electronic properties.

Over time, conventional microelectronic circuits, based on metal–oxide–semiconductor field-effect transistors (MOSFETs), have become ever smaller, and this is one of the main reasons for their success. However, many problems, such as off-state current leakage and the so-called short-channel effect, become more apparent as device size decreases.

To overcome these complications, the gate structure of silicon-based transistors has already gone from being 2D (planar) to 3D with the development of “fin” field-effect transistors (FETs) in the last few years. Researchers are currently looking at planar and fin architectures using compound III–V semiconductors, such as InGaAs, as alternative channel materials to silicon in complementary metal–oxide–semiconductor (CMOS) devices because of their high electron mobility and excellent compatibility with existing gate-dielectric materials. A new “surrounding gate” architecture – whereby the gate is wrapped around a nanowire channel – also shows great promise. However, such structures are difficult to study because it is not easy to integrate freestanding semiconducting nanostructures, such as nanowires, onto silicon substrates.

Nanowire channels

Now, Katsuhiro Tomioka and colleagues at Hokkaido University in Sapporo have developed a new technique to grow vertical InGaAs nanowires. They have shown that it is possible to fabricate surrounding-gate transistors using these wires and core-multishell nanowires – which are made from InGaAs/InP/InAlAs/InGaAs – as channels. The channels have a six-sided structure, which has the benefit of greatly increasing on-state current and device transconductance.

The researchers measured the on–off current ratio in their devices to be about 10⁸ – a value that is superior to that observed in devices made from similar materials with equivalent dimensions. At about 7850 cm²/V·s, the estimated field-effect mobility of the transistors is much higher than the typical electron mobility of a silicon metal–oxide–semiconductor FET. These properties mean that the new devices could be useful as building blocks for high-speed wireless networks and other sophisticated technologies, says Tomioka.
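For a rough sense of scale (my own comparison, not taken from the paper): using the commonly quoted room-temperature electron mobility of bulk silicon, about 1400 cm²/V·s, as a generous benchmark – mobilities in real silicon MOSFET channels are considerably lower – the nanowire value of 7850 cm²/V·s corresponds to an improvement of roughly 7850/1400 ≈ 5.6, so electrons in the InGaAs channels respond five to six times more readily to a given electric field than they would even in bulk silicon.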

Vertical transistors

“Nature recently published several review papers about new gate architectures, alternative channels and alternative switching mechanisms for future CMOS technologies,” Tomioka explains. “At a recent major conference for transistor researchers, Intel suggested that integrating group III–V material onto silicon could be the future of low-power, high-speed CMOS. The company also emphasized the importance of vertically oriented 3D transistors.”

He adds that the results are “the first to realize these predictions”. The team now plans to fabricate p-type FETs for logic operations using the new technique.

The work is described in Nature.

What is the Square Kilometre Array?

In less than 100 seconds, Mark Birkinshaw discusses the world’s largest telescope by area.

Nine physicists bag $27m prize

By Michael Banks

Nine physicists just got one hell of a lot richer after bagging the inaugural Fundamental Physics Prize together with a cool $3m each.

If you haven’t heard of the prize before, don’t worry – I hadn’t either until last Tuesday, when it was announced that Nima Arkani-Hamed, Juan Maldacena, Nathan Seiberg and Edward Witten, all from the Institute for Advanced Study in Princeton, had won the prize.

They shared it with Alan Guth from the Massachusetts Institute of Technology, Alexei Kitaev from the California Institute of Technology, Maxim Kontsevich from the Institute of Advanced Scientific Studies in Paris, Andrei Linde from Stanford University and Ashoke Sen from the Harish-Chandra Research Institute in India. They all bagged $3m each, taking the total prize fund to a whopping $27m.

The prize has been awarded by the Russian investor Yuri Milner, who has a degree in physics from Moscow State University but who dropped out of a PhD in theoretical physics at the Lebedev Physical Institute. After a stint working at the World Bank in Washington, DC, he turned to investing in start-up companies, apparently making his millions by investing in Internet firms such as Facebook, Twitter and Zynga.

Milner has now set up the Fundamental Physics Prize Foundation, a not-for-profit organization that, according to its website, is “dedicated to advancing our knowledge of the universe at the deepest level”.

The foundation has established two prizes: the Fundamental Physics Prize, which “recognizes transformative advances in the field” and which was won by the nine physicists above; and the New Horizons in Physics Prize, which will be awarded to “promising junior researchers” and carries a cash reward of $100,000 for each recipient.

The Fundamental Physics Prize is even bigger than the annual science-and-religion gong from the Templeton Foundation, which gives a single winner $1.7m, and the Nobel Prize for Physics, which this year will be worth $1.2m (and possibly shared by three people) after the prize fund was cut by 20% from last year’s total.

Speaking to physicsworld.com, Linde says he heard that he had won the prize only a few days before the announcement. He says he was surprised by the amount of cash on offer, but adds that “physicists always complained that they get less money than the football coaches of the teams of their universities”. Linde hopes that the prize will “increase [the] prestige and morale of all people in [the] scientific community”.

This year’s winners were chosen by Milner himself, but next year’s recipients will be chosen by a selection committee of previous winners.

So if you want to get your hands on next year’s prize, you will have to be nominated online by someone else, but there are no age restrictions and previous winners can win again.

Giant carbon-capturing funnels discovered in Southern Ocean

A team of scientists from the UK and Australia has shed new light on the mysterious mechanism by which the Southern Ocean sequesters carbon from the atmosphere. Winds, vast whirlpools and ocean currents interact to produce localized funnels up to 1000 km across, which plunge dissolved carbon into the deep ocean and lock it away for centuries. Critically, these processes themselves – and the Southern Ocean’s ability to affect global warming caused by human activities – could be sensitive to climate variability in as-yet-unknown ways.

Oceans represent an important global carbon sink, absorbing 25% of annual man-made CO2 emissions and helping to slow the rate of climate change. The Southern Ocean in particular is known to be a significant oceanic sink, and accounts for 40% of all carbon entering the deep oceans. And yet, until now, no-one could quite work out how the carbon gets there from the surface waters.

“We thought wind was the major player,” says Jean-Baptiste Sallée of the British Antarctic Survey, lead author of the new study. “The ocean is like an onion – in layers – and there is very little connection between the surface and deep layers,” he explains. When strong winds displace a large slab of surface water and cause it to accumulate in a specific region, the localized bloat in the surface layer gets injected downwards into the ocean’s interior. But this kind of wind action alone should have a fairly uniform effect over vast swathes of ocean – which is not what the scientists measured.

Subduction hotspots

Scrutinizing 10 years of temperature, salinity and pressure data from a fleet of 80 small robotic probes dotted around the remote Southern Ocean, the researchers discovered that surface waters are drawn down – or subducted – at a number of specific locations. This occurs due to the interplay between winds, dominant currents and circular currents known as “eddies”. “You end up with a very particular regional structure for the injection of carbon,” says Sallée, describing 1000 km-wide funnels that export carbon to the depths.

The team pinpointed five such zones in the Southern Ocean, including one off the southern tip of Chile and another to the south-west of New Zealand. Elsewhere, currents return carbon to the surface in a process known as “reventilation”, but overall, the Southern Ocean is a net carbon sink.

Carbon bottleneck

The mechanisms governing atmosphere-to-ocean carbon transfer – the mechanical mixing action of wind and waves, and biological uptake by micro-organisms in the sunlit top layer of water – are already well understood. According to study co-author Richard Matear of Australia’s Commonwealth Scientific and Industrial Research Organization, the step that determines the rate of oceanic carbon uptake is the physical transport of this dissolved carbon from the surface waters into the ocean interior. “Our study identifies these pathways for the first time,” he says.

Understanding these subduction pathways fully is key to predicting how climate change might alter the Southern Ocean’s carbon-sequestering capabilities. Both global warming and the Antarctic ozone hole increase the temperature gradient between the equator and the pole, which intensifies the southern hemisphere winds. Climate models predict that stronger winds could stir up deep waters, especially in violent seas such as the Southern Ocean, and result in a net release of carbon back into the atmosphere.

“What we don’t know yet is the impact of climate change on eddy formation,” says Sallée. Eddies arise from oceanic instabilities caused by extreme gusts of wind, intense surface heating or cooling, or strong currents meeting uneven bottom topography, but tend to escape the granularity of even the most detailed climate models. “We can speculate that if wind increases, there will also be more eddies to counterbalance its effects,” considers Sallée. “But it’s a question we don’t know how to answer yet. And it’s a big incentive for climate models to refine their grid.”

Improving climate models

Ocean-carbon-cycle expert Corinne Le Quéré, director of the Tyndall Centre for Climate Change Research, UK, who was not involved in the study, echoes Sallée’s call for improved understanding of wind–eddy interplay. “Southern Ocean winds have increased in the past 15 years in response to the depletion of stratospheric ozone,” she explains, adding “There’s a lot of discussion right now about how [wind-induced changes] are then counterbalanced by changes in eddies.”

Because of this, today’s climate models diverge when it comes to predicting the future carbon-sequestration response of the Southern Ocean. “I think this [paper] is really the first time that we have such small-scale resolution in the exchange of carbon in the ocean from observation directly,” says Le Quéré. “The natural next step will be to take climate models and see how well they’re performing spatially and [temporally]…this study can really help constrain which are the good models.”

The research is published in Nature Geoscience.
