
Paradoxical pigeons are the latest quantum conundrum

First there was Schrödinger’s cat; now an international team of physicists has come up with a new animal-related paradox involving “quantum pigeons”.

For nearly a century, students have struggled to understand the many counter-intuitive implications of quantum physics. Perhaps the most famous paradox is Schrödinger’s cat, in which a cat that is simultaneously dead and alive illustrates how a particle can exist in two quantum states at once.

Now, Jeff Tollaksen of Chapman University in California and colleagues in Israel, Italy and the UK have proposed an equally bizarre scenario dubbed the “quantum-pigeonhole effect”. The paradox begins with the observation that when you put three pigeons in two pigeonholes, there will always be at least two pigeons in the same hole. But according to the team’s quantum analysis, it is possible for none of the pigeons to share a hole.

“It’s one of those things that seem to be impossible,” says Tollaksen. But it is a direct consequence of quantum mechanics and, he adds, “It really has immense implications.”

Nondeterministic measurements

Classical physics is deterministic. This means that measuring the initial state of a system will, in principle, tell you everything you need to determine the final state. But in 1964 Yakir Aharonov of Chapman University and Tel Aviv University helped discover that in quantum mechanics, you can choose initial and final states that are entirely independent, Tollaksen says.

Now Aharonov has teamed up with Tollaksen and colleagues to use this and other concepts of quantum mechanics to postulate the quantum-pigeonhole effect. They reckon that the effect will arise when an observer makes a sequence of measurements while trying to fit three particles in two boxes. First, you make an initial, “pre-selection” measurement of the locations of the particles. Next, you can perform an intermediate measurement to see whether two particles share a box. Finally, you make a final, “post-selection” measurement of the locations. You can make the pre-selection and post-selection measurements such that they are completely independent. In the intermediate step, you can make what’s called a weak measurement to look at all three particles simultaneously. And when you do, it turns out that no two particles share a box.
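To see how the numbers work out, here is a minimal numerical sketch in Python – an illustration, not the team’s own code. It assumes the pre- and post-selected states used in the team’s preprint, with each particle prepared in (|L⟩ + |R⟩)/√2 and post-selected in (|L⟩ + i|R⟩)/√2, where |L⟩ and |R⟩ label the two boxes, and computes the weak value of the projector that asks whether particles 1 and 2 share a box:

```python
import numpy as np

# Box states for one particle: |L> and |R>.
L = np.array([1, 0], dtype=complex)
R = np.array([0, 1], dtype=complex)

# Pre-selected state (|L> + |R>)/sqrt(2) and post-selected state
# (|L> + i|R>)/sqrt(2) for each particle.
pre1 = (L + R) / np.sqrt(2)
post1 = (L + 1j * R) / np.sqrt(2)

# Three-particle pre- and post-selections are tensor products.
pre = np.kron(np.kron(pre1, pre1), pre1)
post = np.kron(np.kron(post1, post1), post1)

# Projector onto "particles 1 and 2 occupy the same box",
# acting trivially (identity) on particle 3.
LL, RR = np.kron(L, L), np.kron(R, R)
same12 = np.outer(LL, LL.conj()) + np.outer(RR, RR.conj())
P_same = np.kron(same12, np.eye(2))

# Weak value: <post|P|pre> / <post|pre>.
weak_value = (post.conj() @ P_same @ pre) / (post.conj() @ pre)
print(np.round(weak_value, 12))  # 0: weakly measured, no pair shares a box
```

The weak value vanishes, and by symmetry the same holds for the other two pairs – the formal sense in which no two of the three pigeons share a hole.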

Spooky and profound

The implications of these results, Tollaksen says, complement the well-known Einstein–Podolsky–Rosen (EPR) paradox. In this scenario, two particles that start in the same place can become intimately correlated, a relationship called entanglement. Measuring the state of the first particle seems to influence the state of the second one, even if they are subsequently separated by distances so great that it would be impossible to explain the influence using classical physics. This unsettling conclusion led Einstein to call entanglement “spooky action at a distance”.

“EPR is one of the most profound discoveries in science,” Tollaksen says. “But that’s only half the story.” The quantum-pigeonhole principle creates a somewhat opposite situation, he explains. Three particles can begin separated with no connections or correlations at all. You bring them together and force them to interact by squeezing them in two boxes. During this intermediate stage, they are more strongly correlated than classically possible. But in the final stage, they are not correlated at all.

The implications of the EPR paradox are important, shaping our understanding of information and the fundamental physics of matter. Although it is too early to predict every implication, Tollaksen believes that the quantum-pigeonhole principle could prove to be just as influential – if not more so. “This is at least as equally profound, if not more profound,” he says. It implies a surprising new concept of correlation.

Electronic pigeons

To verify their conclusions, Tollaksen and colleagues propose an experiment in which three electrons travel through an interferometer. This is essentially a beam splitter that creates two separate paths for the electrons, which then meet again.

Because there are only two possible paths, you would expect at least two electrons to share a path. If so, the two will be close together and interact: their identical electric charges will repel each other, slightly deflecting their trajectories. Physicists should be able to detect these deflections once the paths converge and the three electrons reunite. But, Tollaksen says, because the team’s calculations show that no two of the three electrons will actually follow the same path, no deflections will be observed.

Physicists have not done these experiments yet, but Tollaksen is confident in their results. “I’m sure it will be confirmed experimentally very soon,” he says.

The new results seem “fascinating,” says Leonard Susskind of Stanford University. “I would guess that the new effect is a serious step in understanding quantum correlations.”

The research is described on the arXiv preprint server.

'Outspoken' scientist reveals his Hollywood life


This blog is a shameless plug for the latest Physics World podcast, in which I talk to Sean Carroll – the California Institute of Technology cosmologist who also serves as a science adviser to Hollywood.

I chatted with Carroll when he was in the UK speaking at the recent Cheltenham Science Festival and, in the podcast, you can find out about his favourite science-fiction films and why he thinks it’s important to get the science in such films right. Carroll also reveals who he thinks he’s most like in TV’s The Big Bang Theory.


Sean Carroll’s guide to making better science movies

Carroll, who’s a cosmologist at the California Institute of Technology, has worked on films such as Thor, Avengers Assemble and TRON: Legacy. In this podcast, he talks to Physics World about what makes a good science-fiction film and why getting accurate science doesn’t mean ruining a story line. Carroll also discusses his role on TV’s The Big Bang Theory and reveals which character on the show he thinks he resembles most.

CERN accelerators come alive for LHC restart

CERN’s 27-km Large Hadron Collider (LHC) is gradually restarting after a 16-month shutdown for a major maintenance and upgrade programme. CERN scientists hope that the upgrade – costing SwFr 150m (€124m) – will boost the collider’s collision energy to 13 TeV, close to its design energy of 14 TeV.

The LHC is at one end of a chain of proton accelerators that support a large number of diverse experiments. Some of these experiments have also undergone significant upgrades. Others are new and have goals ranging from finding physics beyond the Standard Model to developing electronic components that are resistant to radiation damage.

The LHC has been out of action since February 2013, when it was switched off after a successful three-year run at collision energies of 7 and then 8 TeV. Although the collider was not running at its design energy, this was enough for scientists to announce, in July 2012, the detection of the elusive Higgs boson, which had first been theorized in 1964 and led to François Englert and Peter Higgs sharing the 2013 Nobel Prize for Physics.

Methodical restart

Having upgraded the LHC, which involved consolidating 10,000 superconducting-magnet interconnections, CERN is now methodically restarting the accelerator chain and conducting tests before the planned resumption of full operations next year. The first stage in a proton’s journey to the LHC is the Proton Synchrotron (PS), which was fired up in mid-June. Several experiments that use protons from the PS are already taking data, including AIDA, a test bed for new particle-detector technologies. The long-running ISOLDE radioactive ion-beam facility at the PS is expected to resume operations by the end of July. Meanwhile, two new irradiation facilities – IRRAD and CHARM – are nearing completion and should be ready in September.

Protons from the PS are also fired into a block of metal to create high-energy antiprotons, which are then slowed by the Antiproton Decelerator (AD). The AD is now being powered up and should be fully operational by 19 August. It supplies antiprotons to five experiments. Stefan Ulmer, who works on two of them – BASE and ASACUSA – told physicsworld.com that the commissioning process at the AD is proceeding as planned. BASE is a new experiment that aims to measure the magnetic moment of the antiproton and should start gathering data in early September.

Extremely precise measurements

Early last month, CERN also began to power up the Super Proton Synchrotron (SPS), which accelerates protons from the PS and feeds them to the LHC. The physics programme at the SPS will begin again in October and will include the new NA62 experiment. NA62 is looking for new physics beyond the Standard Model of particle physics by trying to make an extremely precise measurement of the probability that a positively charged kaon will decay to a positively charged pion plus a neutrino/antineutrino pair.

The beam should return to the LHC in early 2015, and in the spring the physics programme will restart at the collider’s four main experiments.

“The machine is coming out of a long sleep after undergoing an important surgical operation,” says Frédérick Bordry, CERN’s director for accelerators and technology. CERN’s main objective for next year is to run the LHC at 13 TeV – a level that it is hoped will enable deeper studies of the Higgs boson and the hunt for supersymmetric particles.


Yeah but no but yeah but no but…


Has the Voyager spacecraft left the solar system and entered interstellar space? I don’t know about you, but I’m getting a teensy weensy bit bored by this question, which has been going on for years now.

Last September, we blogged about a paper in Science claiming that, yep, it had definitely left the solar system a year earlier – on 25 August 2012, in fact.

Previous to that, though, there had been other reports that no it hadn’t (June 2013), it really, definitely is getting near the edge, but hang on actually not yet (March 2013), we’re not quite sure (June 2011), of course it’s definitely heading for interstellar space (November 2009), it’s already right near the edge (or possibly not) (November 2003).


Electricity, eels and evolution

William Turkel’s Spark from the Deep is a fascinating book that explores a little-known aspect of how we came to understand and control electricity: the role played by electrogenic animals such as electric eels and rays. Such animals, he writes, “inspired [us] to colonize an electric world”, and in doing so, they profoundly transformed both our world and ourselves.

The book explores diverse areas of science and history, going well beyond mere descriptions of what happened to provide explanations of how and why biological and cultural evolutionary processes brought us there. A good example concerns Turkel’s discussion of vivisection, which makes useful reading for anyone with occasion to defend animal research today. To observers in the ancient and early modern world, strongly electric fish posed quite a mystery because of their ability to inflict pain at a distance. The mechanisms behind this ability could only be found by “opening the box”, but the act of doing so was (and still is) morally contentious.

Turkel argues that vivisection played an important role in the development of science because “treating humans and other animals as subjects for experiment or disassembly lowered the conceptual barriers between ourselves and our animal kin”. In addition, he writes, vivisection “lowered the barrier between animate and inanimate. If an electric fish could be used as an apparatus, it might also be possible to build an artificial device that could generate a shock like an electric fish”. This is exactly what Alessandro Volta did when he presented his first battery to the Royal Society in 1800, calling it his “organe électrique artificiel”. What better example of the value of pure research than the fact that our ubiquitous battery was invented to mimic a part of a fish?

Turkel also points out that many modern medical devices “would not exist if it had not been for the variety of grisly experiments” undertaken in the early days of electricity research. In 1774, for example, a child who arrived “dead” at a hospital was resuscitated by electric shock. But for Turkel, these episodes are more than just interesting anecdotes from the history of medicine and technology; they also contain information about what characterizes us as a species. “We humans,” he writes, “are unique in our willingness to treat just about anything as apparatus, including ourselves, one another, human body parts, other animals, animal body parts, inanimate objects, and hybrids of some or all of the above.”

The reductionist practice of disassembling animals into functional components naturally led to questions about how they had been assembled in the first place. For Charles Darwin, electric fish were a “special difficulty” that he tackled in chapter 6 of On the Origin of Species. In it, he wrote: “It is impossible to conceive by what steps these wondrous [electric] organs have been produced…I have to make, in my mind, the violent assumption that some ancient fish was slightly electrical.”

Darwin’s “violent assumption” was, in fact, a wonderful example of the dynamic evolution of scientific theories. Nearly 100 years later, scientists confirmed his prediction, eventually discovering hundreds of species of weakly electric fish and multiple indisputable evolutionary pathways by which such electricity developed. Darwin actually predicted this multiplicity in a general sense, writing that “I am inclined to believe that in nearly the same way as two men have sometimes independently hit on the very same invention, so natural selection…has sometimes modified in nearly the same manner two parts in two organic beings, which owe but little of their structure in common to inheritance from the same ancestor.” What a shame that so many adults today are ignorant of these incredibly powerful (and aesthetically and intellectually beautiful) theories that our ancestors worked so hard to discover and articulate.

Another gem in Turkel’s book is his explanation of Darwin’s reluctance to publish his On the Origin of Species until 1859 – 15 years after a previous work, Vestiges of the Natural History of Creation, “brought evolutionary debate to the mainstream”. In Vestiges, the anonymous author (later revealed to be Robert Chambers, a Scottish journalist and geologist) argued that everything in nature is governed by physical laws, and electricity played a key role in this grand unification scheme. The debate that followed its publication was acrimonious. Turkel argues that Vestiges can be interpreted as proposing “a vision of nature appropriate to the industrial age and the middle classes”. Consider how dramatically machines and pollution were changing people’s relationships with each other and the world. Were these changes an inexorable consequence of natural law? One might see parallels today regarding climate change.

While electromagnetic phenomena were radically transforming industrial society, physiologists began exploring how to measure feeble bioelectricity in more typical animals. In the 1820s, the most sensitive current-measuring instrument available to scientists was a freshly pithed frog leg. Within a year of Alexander Graham Bell’s 1876 invention of the telephone, Emil du Bois-Reymond and his students were listening to bioelectric signals from muscles and nerves. Their instruments opened up completely new kinds of perception – a prerequisite to manipulating and controlling those newly discovered domains. In more recent times, electric organs have been the source for purified ion-channel proteins and DNA, and weakly electric fish remain some of the best model systems for a more holistic “neuroethological” approach to understanding brain function. Unfortunately, funding for such “exotic” research has virtually dried up, in spite of its history of important discoveries.

Turkel demonstrates throughout his book how evolution is an incredibly powerful key, one that can unlock and explain disparate questions about how and why we came to be who and where we are in this world. His concluding chapter contains a concise summary of the chemical evolution that was a prerequisite for biological evolution, and of the cosmological evolution that preceded even that. Evolution does for history what calculus does for mathematics. Our pursuit of pure science, starting with electric fish, led us on an unpredictable path to some of the most important and transformative discoveries in human history. But although the path was unpredictable, the fact that humanity would embark on such a journey is deeply encoded in our genes.

  • 2013 Johns Hopkins University Press $34.95hb 304pp

Web life: Lunar Reconnaissance Orbiter Camera

So what is the site about?

Launched in June 2009, NASA’s Lunar Reconnaissance Orbiter has spent the past five-and-a-bit years mapping the surface of our Moon, with the initial goal of identifying safe landing sites for future manned missions. Its camera, the LROC, is one of seven instruments on board the spacecraft; together, these instruments transmit about 155 GB of data back to their Earth-bound controllers every day. This steady stream of data has, naturally, created some challenges for the LROC’s science team. As they stated on the camera’s website, “When you have over a million individual images of the Moon, what do you do with them?” Part of the team’s response has been to develop this website, which contains several impressive visualizations and tools to help scientists (and curious onlookers) explore some of the fascinating and beautiful images in the LROC’s archive.

What can I do on the site?

Amateur lunar enthusiasts will have fun with the site’s Gigapan tool, which makes it possible for casual visitors to explore the LROC Northern Polar Mosaic. This composite image contains 680 gigapixels of valid image data and covers a patch of the Moon that, at 2.54 million km², is slightly smaller than the combined areas of France, Spain, Germany and Scandinavia – all at a resolution of 2 m per pixel. The result is, according to the site, “likely one of the world’s largest image mosaics in existence, or at least publicly available on the Web”, and one can easily pass a pleasant half-hour simply marvelling at the profusion of craters and other features in it. But the LROC site isn’t just about pretty pictures. There are also links to a range of professional tools, such as the Lunaserv lunar-mapping service, that help both team members and external scientists extract data from the camera’s huge, information-rich archive.
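As a quick sanity check of those figures (back-of-the-envelope arithmetic only, not taken from the site):

```python
# Area covered by the Northern Polar Mosaic, at the quoted resolution.
area_km2 = 2.54e6   # quoted coverage, km^2
res_m = 2.0         # quoted resolution, m per pixel

pixels = area_km2 * 1e6 / res_m**2   # km^2 -> m^2, then 4 m^2 per pixel
print(f"~{pixels / 1e9:.0f} gigapixels")  # ~635 Gpx, consistent with the
                                          # quoted 680 Gpx once overlapping
                                          # frames and margins are included
```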

Who is it aimed at?

In addition to the pages for armchair explorers and members of the professional lunar-science community, the site also features areas that cater to teachers and students. The “Learn” section, for example, contains short answers to a handful of commonly asked questions about lunar science, including “Does the Moon have volcanoes?” (Answer: yes, several, but they’re not active anymore) and “What is the largest impact feature on the Moon?” (Answer: the South Pole-Aitken Basin, which is 2500 km in diameter). The “Teach” section is more detailed, with a range of lesson plans, fact sheets and posters tailored towards students of all ages. One of the most complex projects, for example, asks older secondary-school students to design a chamber for growing terrestrial plants on the lunar surface, but there is also a junior version called “Moon Munchies” that should get the creative juices flowing for the youngest scientists.

Anything else?

If you only have a few minutes to spare, check out the “Images” tab, which showcases some of the LROC archive’s “most exciting” shots of the lunar surface. At the time of writing, the top image in the list showed the eroding walls of the Maskelyne B crater – a reminder that the Moon has not always been static in geologic (lunalogic?) terms. Another image shows a track made by the Russian rover Lunokhod 2 as it trundled along the lunar surface in early 1973. At around 42 km, Lunokhod’s journey is still the longest made by any rover on the surface of another celestial body. Thanks to the LROC, we can see where its travels ended: in another image, the 1.6 m-wide defunct rover shows up as an irregularly shaped black blob on the grey lunar surface.

Plasmonic chip diagnoses diabetes

A plasmonic chip that can diagnose type-1 diabetes (T1D) has been unveiled by researchers at Stanford University in the US. The chip is capable of detecting diabetes-related biomarkers such as insulin-specific autoantibodies and could be used in hospitals and doctors’ surgeries as a quick and simple way to detect early-stage T1D.

Diabetes could affect nearly 370 million people worldwide by 2030, according to the World Health Organization. More worrying still, diabetes is now the second most common chronic disease in children. For reasons that are still unclear, the rate of T1D (also known as autoimmune diabetes) in children is increasing by about 3% every year, with a projected increase of a staggering 70% between 2004 and 2020.

Although T1D was once thought of as exclusively a childhood disease, around a quarter of individuals now develop it as adults. The rate of type-2 diabetes (T2D, also called metabolic or diet-induced diabetes), normally seen in overweight adults, has also escalated alarmingly in children since the early 1990s, in part because of the global obesity epidemic. Until quite recently, it was fairly simple to distinguish between T1D and T2D because the diseases occurred in different groups of people. This is becoming more and more difficult, however, as the groups begin to overlap. Compounding the problem, existing diagnostic tests are slow and expensive, whereas diabetes is best detected as early as possible to ensure the most effective treatment.

Higher concentration of autoantibodies

T1D is different from T2D in that patients with the disorder have a much higher concentration of autoantibodies. These are produced by the body and work against one or more pancreatic islet antigens such as insulin, glutamic acid decarboxylase and/or tyrosine phosphatase. Detecting these autoantibodies, and especially those against insulin (which are the first to appear), is therefore a good way to detect T1D. Again, standard tests are not very efficient and even the most widely used technique, radioimmunoassay (RIA) with targeted antigens, is far from ideal because it is slow and relies on toxic radioisotopes.

In an attempt to overcome these problems, the Stanford researchers have developed an autoantibody test that is more reliable, simpler and faster than RIA and similar tests. It comprises a microarray of islet antigens arranged on a plasmonic gold (pGOLD) chip, and can diagnose T1D by detecting the interaction of autoantibodies in a small blood sample with insulin, GAD65 and IA-2, as well as potentially new biomarkers of the disease. It works with just 2 µL of whole human blood (from a finger-prick sample, for example) and results can be obtained the same day.

Enhancing the fluorescence emission

The team, led by Hongjie Dai, made its pGOLD chip by uniformly coating glass slides with gold nanoparticles that have a surface plasmon resonance in the near-infrared part of the electromagnetic spectrum. Plasmons are collective oscillations of the conduction electrons on the surfaces of the nanoparticles. They allow the nanoparticles to act like tiny antennas, absorbing light at certain resonant frequencies and transferring it efficiently to nearby molecules.

The result can be a large boost in the fluorescence of the molecule, and the researchers have shown that the pGOLD chip is capable of enhancing the fluorescence emission of near-infrared tags of biological molecules by around 100 times. Together with Brian Feldman‘s group, the researchers robotically printed the islet antigens in triplicate spots onto the plasmonic gold slide to create a chip containing a microarray of antigens.

“We tested our device by applying 2 µL of human serum or blood (diluted by 10 or 100 times) to it,” explains Dai. “If the sample contains autoantibodies that match one or more of the islet antigens on the chip, those antibodies bind to the specific antigens, which are then tagged by a secondary antibody with a near-infrared dye to make the islet spots brightly fluoresce.”

Antibody detected at much lower concentrations

The samples came from Feldman’s patients who had new-onset diabetes. They were tested against non-diabetic controls at Stanford University Medical Center.

The antigen spots fluoresce 100 times more brightly thanks to the plasmonic gold substrate, which allows the antibody to be detected at much lower concentrations (down to just 1 femtomolar) than if ordinary gold were used in the microarray platform.

“We believe that our technology will be able to address the current clinical need for improved diabetes diagnostics,” Dai says. “The pGOLD platform is also being commercialized by a new start-up company, Nirmidas Biotech, based in San Francisco, aimed at better detecting proteins for a range of research and diagnostic applications. It might even be able to detect biomarkers for other diseases such as heart disease with ultrahigh sensitivity.”

The researchers describe their plasmonic chip in Nature Medicine.

New medical probe combines sound and electromagnetic induction

The Lorentz force combined with acoustic shear waves could help doctors detect dangerous diseases, say researchers in France. The team has shown that the electromagnetic force can create oscillations in living tissue, producing shear waves that can be detected to reveal the tissue’s elasticity. The technique has shown promise in the laboratory and could now be developed for clinical use.

An experienced doctor can determine a lot about the human body by simply pressing on it with their fingers, a process called palpation. Many serious medical conditions such as breast cancer can be diagnosed this way because they cause tissue to be firmer than normal. Some internal organ diseases such as liver fibrosis also cause the tissue to stiffen, but, in general, these organs are inaccessible to manual palpation. While the texture of internal tissue can be probed by medical imaging techniques such as ultrasound, these techniques measure a different quantity from palpation.

Shear propagation

When tissue oscillates, it supports both pressure waves (back-and-forth motion) and shear waves (side-to-side movement). Traditional ultrasound techniques operate in the megahertz range and at these frequencies shear waves propagate just a few microns in tissue. As a result, most ultrasound techniques rely on using pressure waves to determine the compression modulus of the tissue.

However, tissue is mainly water – an incompressible fluid – so its firmness to the touch depends on how easily it moves aside to allow a doctor’s fingers to sink in. This is defined by the shear modulus, which can be calculated from the speed of the shear waves in the tissue. Measuring the shear modulus can therefore give doctors a map of the inside of the human body as if they could “touch organs and evaluate their stiffness”, says team member Stefan Catheline of the University of Lyon.
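In symbols, for tissue of density ρ the shear modulus is μ = ρc_s², where c_s is the shear-wave speed. A minimal sketch of the arithmetic, using illustrative numbers rather than figures from the paper:

```python
# Shear modulus from shear-wave speed: mu = rho * c_s**2.
rho = 1000.0   # tissue density, kg/m^3 (tissue is mostly water)
c_s = 1.5      # measured shear-wave speed, m/s (assumed, typical of soft tissue)

mu = rho * c_s**2
print(f"shear modulus ~ {mu / 1e3:.1f} kPa")  # ~2.2 kPa; stiffer (e.g. fibrotic)
                                              # tissue carries faster shear waves
                                              # and so reads as a higher modulus
```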

Frequency drop

In the past few years, researchers have developed ways of measuring the shear modulus by using shear waves with a much lower frequency, which propagate further in soft tissue. These waves are created inside the body by firing focused ultrasound through the skin, but this has its drawbacks. The brain, for example, is protected against shock and vibration by both the skull and the thin layer of cerebrospinal fluid lining it, which makes inducing shear waves difficult.

Now Catheline and colleagues have adapted an idea called magneto-acoustical electrical tomography to create the shear waves. This involves passing an alternating electric current through tissue in an applied magnetic field; the resulting electromagnetic Lorentz force induces shear-wave oscillations in the tissue. While other researchers had used a high-frequency alternating current, the team used frequencies of just 10–1000 Hz. Using a synthetic tissue substitute called a phantom, and then a sample of pig liver, the researchers tested their idea, showing that they could induce low-frequency waves with an electric current and detect them using ultrasound transducers. Their results for the pig liver agreed with accepted values for the shear elasticity of healthy liver tissue.

High electric fields

Before the research can be used in medicine, there are some difficulties to address. First, the researchers needed high electric fields to generate a large enough Lorentz force. They estimate that the electrical current passing through the tissue was 100 times higher than accepted safety limits, albeit only momentarily. However, modern magnetic resonance imaging (MRI) scanners can generate magnetic fields many times higher than the 100 mT available from the permanent magnets in the team’s laboratory: using these, one could generate the same Lorentz force with a lower electric field. Second, the cerebrospinal fluid that prevents ultrasound from getting into the brain would also stop it getting out, so one would need another way to detect cerebral shear waves. Here too, MRI might provide the answer, as it has been used in the clinic to detect tissue oscillations.

Kathy Nightingale, an elastography expert at Duke University in North Carolina, says that “so far, what’s exciting about this research is that it’s the first demonstration that I’m aware of of the generation of shear waves using this Lorentz force approach”. Liver elastography, her own specialism, faces clear challenges in patients whose livers lie further below the skin, such as obese patients, she explains. “If this were to be successful in that population, that could be significant,” she says, but stresses that we will have to “wait and see”.

The research will be published in Physical Review Letters.

How to avoid earthquakes when storing carbon dioxide underground

Strategies for reducing the risks of storing carbon dioxide deep underground can be developed by studying the seismic activity associated with waste-water disposal in the oil and gas industry. That is the conclusion of geophysicist James Verdon, of the University of Bristol in the UK, who looked at case studies of 11 waste-water injection sites in the US. He found that the small earthquakes caused by water disposal occur mostly in rock below where the water is stored and not above it: something that is promising for those wanting to capture carbon dioxide and store it underground.

As society struggles to reduce global emissions of greenhouse gases, a number of scientists believe that carbon capture and storage (CCS) will be needed to halt climate change. CCS involves injecting carbon dioxide into rock formations deep underground. However, there are many concerns over the efficacy of CCS. One worry is that the process could trigger earthquakes that crack the trapping rock, allowing the gas to leak out again.

Calculating risk

Because most CCS projects are experimental, there are few data available to calculate the risk of CCS-induced earthquakes. But there are similarities between CCS and waste-water injection, which is used to dispose of waste fluids from oil fields. This inspired Verdon to analyse the spatial distribution of the seismicity measured in the 11 waste-water case studies. He also considered the influence of the local geology and the way in which the fluid was being injected. Verdon found that 99% of seismic events occurred within 20 km of the injection well, and that the majority of events took place below the target reservoir.

“We’re still not completely clear as to why most events occur below the injection level,” says Verdon. “However, it is a good thing because events above the injection level would raise concerns that carbon dioxide was leaking out of the target formations and into overlying layers, or to the surface. Events occurring well below the reservoir pose no such concerns.”

So what makes some wells more prone to seismicity than others? Verdon showed that seismicity occurs only when there is a nearby fault that is close to critical stress. “This [fluid injection] process is not creating earthquakes from scratch,” he says. “There has to be a fault that is fairly near to triggering, and the injection activity pushes it over the edge.”

Hard rocks prone to failure

Verdon also found that faster fluid injection is more likely to lead to higher pore pressures, which increase the risk of seismicity being triggered. Finally, he noted that injection into harder rocks runs the risk of triggering larger seismic events than injection into softer sediments. “Harder rocks can support higher stresses, which leads to larger events when they fail,” he says.

Verdon calculates that the worst-case scenario would be for CCS injection to trigger an earthquake of around magnitude five. “This is assuming that all the volume change put into the ground is released as a single large event,” he says. “In reality, most of the time, most injection into the ground produces little or no seismicity.”
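One standard way to produce that kind of worst-case figure – a sketch using a McGarr-style moment bound and assumed numbers, not values taken from Verdon’s paper – is to convert the injected volume change into a seismic moment and then a moment magnitude:

```python
import math

# Upper bound on seismic moment if all injected volume change is
# released in one event: M0 <= G * dV (McGarr-style estimate).
G = 3.0e10    # shear modulus of crustal rock, Pa (~30 GPa, assumed)
dV = 1.0e6    # net injected volume, m^3 (assumed CCS-scale figure)

M0 = G * dV                               # seismic moment, N*m
Mw = (2.0 / 3.0) * math.log10(M0) - 6.07  # standard moment-magnitude formula
print(f"worst-case Mw ~ {Mw:.1f}")        # ~4.9, i.e. around magnitude five
```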

Based on his findings, Verdon has two main recommendations for reducing the risk of induced seismicity at CCS sites. The first is to carry out thorough site characterization and geological appraisal prior to injection, to map any faults that might be re-activated and calculate the kind of stresses necessary to trigger these faults to move. And the second recommendation is to monitor the area during the injection process.

Mitigation strategy

“If injection begins to trigger a fault it might be manifest in smaller events that can be detected by sensitive monitoring apparatus,” he says. “This information could be used to change the injection programme, perhaps by reducing the injection rate or drilling a new well into a different part of the reservoir.”

Verdon thinks that it is important that the issue of induced seismicity is taken seriously and communicated honestly to the public, but he also thinks that this has to be balanced against the risks associated with climate change.

“There will always be a risk that CCS triggers seismicity and we should do the very best we can to appraise and monitor sites to prevent this happening,” he says. “But the risks of not pushing ahead with CCS far outweigh the risks of doing it.”

The research is described in Environmental Research Letters (ERL).

