Gravitational waves were predicted by Albert Einstein in 1916 as a consequence of the field equations of his general theory of relativity. These ripples in space–time cause space to stretch in one direction perpendicular to the line of travel while simultaneously compressing it in the other. In this video from our 100 Second Science series, Nergis Mavalvala of the Massachusetts Institute of Technology describes how interferometry is used to search for the effect of gravitational waves here on Earth.
Almost a century after gravitational waves were predicted, the hunt to detect them directly is hotting up. In a feature article in the September issue of Physics World, science writer David Appell explains how a major upgrade to the Laser Interferometer Gravitational-Wave Observatory (LIGO) may soon bring this hunt to a successful close.
Members of the Institute of Physics (IOP) can get immediate access to the September issue of Physics World on desktop via MyIOP.org or on any iOS or Android smartphone or tablet via the Physics World app, available from the App Store and Google Play. If you’re not yet in the IOP, you can join as an IOPimember for just £15, €20 or $25 a year to get full digital access to Physics World.
By levitating a tiny, nano-sized diamond using light, physicists in the US and Finland have created a controllable quantum system that has optical, mechanical and spin degrees of freedom. Based on a single “nitrogen vacancy” (NV) defect in the diamond, the system could be used in devices that measure extremely weak forces – or even to create “Schrödinger’s cat states”.
NV defects occur in diamond when two adjacent carbon atoms are replaced by a nitrogen atom and an empty lattice site. One type of NV (NV–) is of great interest to physicists building quantum devices because its spin state (–1, 0 or +1) can be determined very easily using light. Furthermore, NVs are well isolated from their surroundings, which means that their spin states – unlike those of most other solid-state systems – keep their quantum nature for relatively long times.
Multiple vacancies
This is not the first time that nanodiamonds have been levitated – back in 2013 Levi Neukirch, Nick Vamivakas and colleagues at the University of Rochester did so using an optical trap. In their experiment, which was done in air, laser light at another wavelength was used to determine the spin states of NVs in the diamond, which was tens of nanometres across. This was an important first step towards creating a “hybrid quantum system”, but the air meant that the nanodiamond could not be cooled to its lowest-energy mechanical state, while the presence of multiple NVs meant that it could not serve as a single, well-defined spin system either.
Now, however, the Rochester team has joined forces with Eva von Haartman at Åbo Akademi University to address both problems by levitating single-NV nanodiamonds in a vacuum and showing that the mechanical motion of the diamond can be tracked using the spin of the NV. The team used irregularly shaped diamond nanoparticles about 40 nm across that were coated with silicon oxide to make them more spherical – which makes optical trapping easier. The particles were then trapped using light from a near-infrared laser focused onto a narrow region in a vacuum chamber at just 1 kPa – about 1% of normal atmospheric pressure.
Red light, green light
The team trapped a nanoparticle with just one NV and read out its spin state by shining green laser light on it. Some of this light is absorbed by the NV electron before being re-emitted as red light through photoluminescence. This process depends on the spin state of the NV, which makes the technique ideal for measuring NV spin.
When a nanoparticle is held in the optical trap at atmospheric pressure, its motion is random because of collisions with air molecules. However, as the air is pumped out of the chamber, the nanoparticle oscillates with simple harmonic motion at a frequency of about 250 kHz. By monitoring the intensity of the photoluminescent light, the team showed that the nanoparticle was indeed oscillating back and forth across the focal region of the green laser – with the photoluminescence at a maximum when the particle was at the centre of the focal region.
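The back-and-forth modulation of the photoluminescence can be illustrated with a toy calculation (not the team’s actual analysis). Only the 250 kHz oscillation frequency is taken from the experiment; the oscillation amplitude and focal-spot size below are assumed values for illustration:

```python
import numpy as np

f_trap = 250e3   # oscillation frequency from the article, Hz
A = 100e-9       # assumed oscillation amplitude, m
w0 = 300e-9      # assumed focal-spot radius (beam waist), m

t = np.linspace(0, 2 / f_trap, 1000)       # two oscillation periods
x = A * np.sin(2 * np.pi * f_trap * t)     # simple harmonic motion
pl = np.exp(-2 * x**2 / w0**2)             # photoluminescence ~ local laser intensity

# Because the signal depends on x squared, it is modulated at twice the
# trap frequency and peaks whenever the particle crosses the centre of
# the focal region (x = 0), as observed in the experiment.
print(pl.max(), pl.min())
```

Because the photoluminescence tracks the square of the displacement, the recorded signal brightens twice per mechanical cycle – once for each pass through the focus.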
The team was also able to control the spin state of the NV using electron spin resonance (ESR), which involves firing a microwave signal at the nanoparticle and causing a transition between spin states. The transition shows up as a peak in the absorption of the microwaves at the transition energy, which the team duly observed. Finally, the team studied the effect of an applied magnetic field on the NV spin. In the absence of an applied magnetic field, the +1 and –1 spin states have the same energy. However, when a magnetic field is applied, one state gains energy while the other loses it. This splits the absorption peak into two peaks, which the team also saw.
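The size of this splitting follows from the standard NV ground-state parameters (zero-field splitting of about 2.87 GHz and a gyromagnetic ratio of about 28 GHz/T). The sketch below assumes an illustrative 1 mT field aligned with the NV axis:

```python
D = 2.870e9      # NV zero-field splitting, Hz
gamma = 28.0e9   # electron gyromagnetic ratio, Hz per tesla
B = 1e-3         # assumed applied field, tesla (1 mT)

# With B = 0 the ms = +1 and -1 states are degenerate: one ESR peak at D.
# With B > 0 one state gains energy and the other loses it, so the single
# peak splits into two, separated by 2 * gamma * B.
f_plus = D + gamma * B    # ms = 0 -> +1 transition frequency
f_minus = D - gamma * B   # ms = 0 -> -1 transition frequency

print(f_minus / 1e9, f_plus / 1e9)   # the two peaks, in GHz
print((f_plus - f_minus) / 1e6)      # splitting in MHz: 56 MHz at 1 mT
```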
Opposing forces
Having learned how to monitor the motion and spin state of the nanoparticle, the team now wants to find ways of manipulating the quantum properties of the system. This could involve, for example, applying a magnetic field such that an NV in the +1 state would feel a force in one direction, while an NV in the –1 state would feel a force in the opposite direction. Under the right experimental conditions, these opposing forces would put the entire nanoparticle into a quantum superposition of two mechanical states – analogous to the famous Schrödinger’s cat, which is in a superposition of being both dead and alive.
Another possible application for the system is an accelerometer that could detect tiny external forces by their effect on how the nanodiamond oscillates. But before the team can try such experiments, they must tackle the problem of the trapped nanoparticles surviving barely a minute or two in a vacuum before being completely degraded. The team believes that this happens because the particles are heated by the laser light but are unable to get rid of the heat through contact with air. The researchers had thought that the silicon-oxide coating would improve the robustness of the nanoparticles, but this was not the case and more work on this is needed.
The process of losing consciousness under a general anaesthetic could involve a phase transition in the brain. That is the conclusion of scientists in the US, who have developed a new mathematical model of how the brain’s neurons interact with each other. The model shows how a small reduction in information transfer between neurons can bring about a sudden loss of consciousness, and reproduces many of the changes in the brain’s electrical activity observed during anaesthesia.
Anaesthetics are used routinely during medical procedures, so it might come as a surprise that scientists do not fully understand how they cause a patient to lose consciousness. Monitoring a person’s level of consciousness while they are being given an anaesthetic generally relies on measuring that person’s brain waves. Those waves are generated by the many electrical impulses fired between neurons, and create a measurable voltage on the scalp that can be recorded via an electroencephalogram (EEG). The correlation between waves recorded on opposite sides of the head indicates the level of consciousness, and this information is used by anaesthetists to vary the dose of anaesthetic given to the patient.
Universal phenomenon
However, this approach is very much an empirical one, and to understand how anaesthetics bring about unconsciousness, scientists have developed computer models to try to capture the underlying changes in neuronal activity. In 1997 Jamie Sleigh of the University of Auckland and Duncan Galletly of the University of Otago used a fairly simple 2D model to show how loss of awareness correlates with reduced efficiency of neural synapses. “Anaesthesia is universal for all animals with nervous systems,” says Sleigh, “which suggests that it is a very generic universal phenomenon that can be modelled at quite an abstract level.”
The latest work improves on the earlier research by modelling how information is transferred between different layers of neurons. Developed by the physicist Yan Xu and colleagues at the University of Pittsburgh, the model simulates an electrical signal arriving at a node (either a single neuron or a group of neurons) within the thalamus region of the brain (responsible for sensory input), and then tracks the resulting signals induced in successive layers of the cerebral cortex, which is key to conscious awareness. Nodes are connected to one another across and within the layers, with the probability of information flow between them determined using percolation theory. This theory describes, among other things, how hot water flows through coffee grounds.
Ease of communication
To test their model, Xu and co-workers compared its output with EEG waveforms from patients who had undergone general anaesthesia. Regulating the amount of anaesthetic in the model meant varying the value of a single parameter – p – between 0 and 1, which changed the weighting, or the ease of communication, between connected nodes. Doing so, the researchers found they could indeed reproduce several of the main changes to the waveforms of anaesthetized individuals, including a shift to lower frequencies and higher amplitudes, as well as more synchronization between waves in different areas of the cortex and more power deposited by waves in the front of the brain.
The team also demonstrated the phase transition in sensory perception brought about by a tiny change in p. It did so by modelling how a digital image of a famous picture of Albert Einstein is transmitted through the network. The researchers showed that at p = 0.32 the light intensity recorded at the output nodes leads to a barely discernible head and crop of hair, whereas at p = 0.38 the unmistakable image of Einstein is clear.
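As a rough illustration of how such a sharp transition can arise (a simplified caricature, not Xu’s actual model), consider nodes that each listen to k = 3 nodes in the previous layer, with each connection transmitting independently with probability p. The active fraction q then evolves layer by layer as q → 1 − (1 − pq)^k, which has a percolation threshold at p = 1/k ≈ 0.33 – between the article’s two values of p:

```python
def transmitted_fraction(p, k=3, layers=200):
    """Fraction of active nodes after passing through many layers,
    in a toy mean-field percolation model (an illustrative assumption)."""
    q = 1.0                           # all input nodes initially active
    for _ in range(layers):
        q = 1.0 - (1.0 - p * q) ** k  # node fires if any of k inputs arrives
    return q

low = transmitted_fraction(0.32)   # below threshold: the signal dies out
high = transmitted_fraction(0.38)  # above threshold: the signal survives
print(low, high)
```

Below the threshold the signal decays to nothing, however many nodes fire at the input; just above it, a finite fraction of the activity propagates all the way through – a sharp change driven by a small shift in one parameter.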
“Master parameter”
“What is remarkable is that by changing just one master parameter, you can reproduce the most essential features of the transition from a conscious to an unconscious state,” says Xu, a professor of anaesthesiology who trained as a physicist. He proposes that experimentalists put his group’s model to the test by establishing whether – as the model predicts – unconsciousness affects which neurons are involved in the process of relearning. “We ignore a lot of biological detail but those details probably aren’t critical for consciousness to occur,” he adds. “We try to search for those universal rules that govern the emergence of consciousness.”
Peter McClintock, a physicist at Lancaster University in the UK, praises the “interesting and surprising” results, saying that although the research will be of practical benefit only if a way can be found to measure p, “improved understanding must bring us closer to better measures of depth of anaesthesia”. But he does not believe it will change our fundamental understanding of consciousness much. “I don’t think we are getting very much closer to solving the mind–body problem,” he says, “although we orbit around it ever closer as new knowledge and ideas accumulate.”
Sleigh agrees. The modelling by Xu and colleagues, he says, “clearly reflects anaesthesia–aesthesia transitions, but I am not sure that it solves the ‘hard problem’ of (human) consciousness”.
Fancy a wee dram while you are orbiting the Earth? With the growing interest in space tourism, travellers could soon be enjoying a sip or two of whisky in space. To make such tipples as enjoyable as possible, the Scotch whisky maker Ballantine’s has developed a special “space glass” that works in the free-fall conditions of Earth orbit. The firm is also developing a special blend of whisky to be enjoyed in space.
Created by Ballantine’s master whisky blender Sandy Hyslop and James Parr from the Open Space Agency, the new glass was filled with Scotch and tested in free-fall at the ZARM drop tower in Bremen, Germany. You can find out more about how one’s palate changes in space and the challenges facing the glass designers in the above video. And if you want to know if the glass passed the free-fall test, there is a second video called “Space Glass Project: the microgravity test”.
A new mechanism that keeps a protein complex mechanically stable when stretched has been discovered by researchers in Germany, the US and Israel. The mechanism channels forces along paths perpendicular to the “pulling” axis in the structure and its discovery could lead to a better understanding of how proteins and other large biological molecules resist external forces. The work may ultimately see the development of artificial mechano-active systems that might be used as scaffolds for tissue engineering or as components in engineered nanomaterials or protein-inspired machines.
Mechanical forces are fundamentally important in biological systems. Cells sense and respond to mechanical cues in their environment and react, for example, by modulating gene-expression patterns. Forces also play an important role in how cells join together to create larger structures such as tissues and organs. At the molecular level, such behaviour is governed by mechanically active proteins, which can sense and react to external forces by changing their shape and modulating their function in a number of ways.
Extremely stable
The researchers, co-led by Hermann Gaub of Ludwig-Maximilians University in Munich, used experimental and numerical methods to study mechanical properties of a multi-domain cellulosome protein complex. This complex is of great interest to researchers because it is known to be extremely stable when subject to mechanical forces.
Constantin Schoeler and colleagues in Munich used an atomic force microscope (AFM) to pull on a protein complex while monitoring how forces travel through the structure. Meanwhile, Klaus Schulten and Rafael Bernardi of the University of Illinois performed steered molecular dynamics (SMD) simulations using state-of-the-art supercomputers to calculate how forces propagate through the cellulosome. After comparing their findings, the teams were able to identify the most probable paths that applied forces take through the molecules.
“Our results show that this mechanically stable complex uses an architecture that exploits simple geometrical and physical concepts from Newtonian mechanics to resist external forces,” say the researchers. “The analytical framework we describe provides a basis for developing a deeper understanding of how various mechano-active proteins function.”
Non-parallel routes
“As far as we know, this is a new concept in the biophysics field,” explains co-team-leader Michael Nash of the Center for Nanoscience at Ludwig-Maximilians University in Munich. “Our results imply specific force-propagation routes non-parallel to the pulling axis make the protein complex more mechanically robust.”
“Based on this work and other work from our group, we have developed a ‘toolbox’ of molecular modules based on cellulosomes that we could now use in a variety of biophysics experiments,” adds Nash. “We are now further investigating the fundamental properties of these remarkable molecules and looking into how we can exploit cellulosome proteins in diverse fields, from biomedicine to bioenergy.”
A better understanding of how proteins respond to external forces could provide important information to scientists who are trying to develop scaffolds for tissue engineering. These are artificial structures that provide a template for living cells to create tissues or organs. Such scaffolds must have very specific mechanical properties to create the desired tissue or organ.
Most of the stars in the universe were born within spiral galaxies like the Milky Way but now find themselves inside “dead” elliptical galaxies, according to new analysis of data from the Hubble and Herschel space telescopes. Astronomers led by Steve Eales of Cardiff University, UK, have shown that 83% of stars in the universe were born in spiral galaxies, but today only about 49% of stars exist in spirals. According to the team, this means that many spiral galaxies have somehow transformed themselves into elliptical ones.
When two spiral galaxies collide, astronomers believe that they merge into a single elliptical galaxy – a giant, amorphous spheroid of stars. The merger uses up all of the spare star-forming gas in the colliding galaxies, leaving the resulting elliptical with no gas from which to form new stars. Until now, however, there was no quantitative evidence of a widespread transformation of spirals into ellipticals.
Galactic energy
The latest research is based on a survey of 10,000 galaxies in the nearby universe selected from the Herschel Astrophysical Terahertz Large Area Survey and the Galaxy and Mass Assembly Survey. It also includes galaxies in the early universe, as seen in Hubble’s Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey project. These data allowed Eales’s team to measure how much energy is coming from stars in disc galaxies and elliptical galaxies, respectively, at different times in the history of the universe.
“By measuring the total amount of energy from a particular patch of sky, we could work out how many stars have formed to generate that energy,” Eales told physicsworld.com. “Then we looked at the fraction of the energy associated with disc galaxies and the fraction associated with elliptical galaxies.”
The further afield one looks, the smaller and more indistinct galaxies become, until it becomes difficult to determine what type of galaxy they are. To address this, Eales’s team applied the Sérsic profile, which describes how brightness varies across a galaxy. Spiral galaxies have markedly different brightness profiles from ellipticals: the light of a disc falls off roughly exponentially with radius, whereas an elliptical’s light is far more concentrated towards its centre.
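For readers curious about the profile itself, here is a minimal sketch using the standard Sérsic form I(r) = I_e exp(−b_n[(r/r_e)^(1/n) − 1]), with the conventional indices n = 1 for a disc and n = 4 for an elliptical, and the common b_n ≈ 2n − 1/3 approximation:

```python
import numpy as np

def sersic(r, n, r_e=1.0):
    """Sersic surface-brightness profile, normalized to 1 at r = r_e."""
    b_n = 2.0 * n - 1.0 / 3.0   # common approximation to the b_n constant
    return np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

r = np.linspace(0.01, 5.0, 500)         # radius in units of r_e
disc = sersic(r, n=1)        # spiral/disc: shallow, exponential fall-off
elliptical = sersic(r, n=4)  # elliptical: steep core, extended wings

# Ellipticals are far more centrally concentrated than discs, which is
# what lets a profile fit separate the two populations.
print(elliptical[0] / disc[0])
```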
Stellar milestone
If Eales’s team is correct, then the universe has now reached a milestone, with more than half of its stars existing in elliptical galaxies and star formation continuing to dwindle across the universe. Indeed, ellipticals comprise cooler, redder, longer-lived stars with little star formation and are often described as being “red and dead”.
Team-member Dave Clements at Imperial College, London, adds: “The star-formation rate is certainly dropping, which means that the amount of energy being produced by stars is also declining.”
However, not everyone agrees with the team’s conclusions. Richard Bower, an astrophysicist at Durham University, UK, is concerned that our understanding of the total energy output of star-forming galaxies in the very distant universe is too uncertain to be able to come up with a firm figure of 83% without numerous assumptions being made.
“I’m not convinced that you can determine those things as accurately as you would need to in order to come to a strong conclusion,” he says.
Not dead yet
Furthermore, Bower’s own work with Durham’s Evolution and Assembly of Galaxies and their Environments project, which models the formation of galaxies across large volumes of the universe, suggests that galaxy evolution need not stop at “red and dead”.
“We see galaxies forming stars, stopping, turning red, but then something will happen to rejuvenate them and they’ll go back to forming stars, possibly at a lower rate,” says Bower. Indeed, in 2012 astronomers used Hubble to identify clouds of cool and potentially star-forming gas around 16 “dead” elliptical galaxies.
Eales agrees that sometimes galaxy evolution can run backwards. “There are a lot of processes that could cause galaxies to move in the opposite direction, but the dominant process still seems to be this disc-to-elliptical transformation,” he says.
Researchers at CERN are renowned for their musical side-projects. Notable examples include the album released by scientists at the ATLAS detector in 2010, and the “Large Hadron Rap”, which currently has almost 8 million hits on YouTube. And of course don’t forget the pop-star-turned-physicist Brian Cox, who had the UK chart-topping hit “Things Can Only Get Better” in the 1990s with his band D:Ream.
Following in this musical tradition, a duo of Mexican researchers has invented a “Cosmic Piano” inspired by the technologies used at the ALICE particle detector at the Large Hadron Collider (LHC). The instrument’s inventors Arturo Fernández Téllez and Guillermo Tejeda Muñoz hold positions at CERN and the University of Puebla in Mexico. They hope the device can demonstrate both the science and the art of the work being carried out at particle-physics facilities.
I’ve always wondered what triggered the great ideas. When I was a child, I often imagined that somewhere out there, there was a superior being whispering secrets to the chosen few – to those humans lucky enough to gain some special new insight into the world. Indeed, this view seemed to fit with stories such as Newton’s apple or Archimedes’ “Eureka!” moment, in which a serendipitous event precipitates some sudden epiphany. So famous are these tales that they’ve become myths, used as a shorthand for those moments when our understanding of the world advances to a new level.
When we tell and retell the fall of that fateful apple, or of Archimedes’ bathtub revelation, it sometimes feels as if we are saying that this is how all scientific discoveries take place. But is it really? The truth is that science is built upon many untold smaller discoveries. But what do these smaller discoveries look like, and what leads each individual scientist to their own revelatory moment?
The nuclear question
These questions re-surfaced for me a few years ago during a conversation with artist Ana Sousa Carvalho. We were talking about scientific curiosity and its consequences. At a certain point in our discussion, she asked me if knowing everything we know now – and if given the chance – I would have helped to build the nuclear bomb. This was, of course, a hypothetical question, and my initial response was, “Yes, just out of curiosity.” My gut feeling was that I’d want to show myself that I could crack the problem.
On reflection, however, I became annoyed with myself for having had such a reaction and so I decided to research the history of the atomic bomb to find out more about the decisions the physicists involved faced at each step. While going over a few books, my attention drifted to the technical details, and I began to think that similar ideas could be used in gravitational physics. Well, this incursion eventually gave rise to new work and a scientific paper on instabilities in black-hole systems. This was also the first time in my role as a scientist that I found myself thinking in a serious manner about the way in which ideas come to life.
At the same time, Ana and I had been talking about the similarities between creativity in art and science. After some back and forth, we hit upon the idea of creating a repository of essays written by scientists describing the genesis of their ideas. It was our hope that these texts would eventually grow into an archive that would, with time, take the shape of an informal history of contemporary science, captured from a personal perspective. We felt that the project could be of interest not only to the scientific community, but to students and the general public as well. We called the project The Birth of an Idea and set about asking the science community to share stories with us.
Our approach was very simple: we wrote to colleagues and asked them for a short essay on the genesis of their ideas. It was important for us to emphasize that we weren’t interested solely in tales about breakthrough discoveries. We were also after the trivial and the ordinary, the small victories and the long struggles. As a physicist, I am well aware that this is the territory where most of us toil. It took a while for the project to get off the ground, but we put our focus on the physics community and so far the response has been amazing – we now have a collection of 64 essays in total, from physicists and astronomers in a diverse range of research fields and from countries the world over. We usually send out five requests a month and right now we have a success rate of roughly 30%. Unquestionably, most of the work falls to the contributing authors, who have been extraordinarily generous with their time and experiences.
Just have a bath
Reading the essays has been a rewarding experience – going over these first-hand accounts is like being granted a behind-the-scenes view of physics. From Roberto Emparan, we learn how the association between black-hole entropy and entanglement entropy was born while he put his son to sleep. Pauline Gagnon reveals in her account “Only women could think of it” the surprising way in which she and a female colleague managed to fix a detector at CERN. In some other great examples, Michele Vallisneri recounts how the work of Richard Feynman helped him to optimize gravitational-wave detection, and Andreas Warburton describes the discovery of the top quark. Two examples are given in full below.
Above all, these essays give the reader the human perspective. Each, in its own way, offers a glimpse into an incredible world that usually remains unseen. For instance, Eric Poisson’s contribution does a beautiful job of rendering the restlessness that comes with the struggle to understand a problem: “That night I walk the streets of downtown Milwaukee in a state of agitation. There is something wrong somewhere, and I have to get to the bottom of it.”
We hear from colleagues that they have their best ideas walking home or in bed or in the shower, and we can’t help but nod in agreement while reading Masaru Shibata when he tells us, with the concision of a haiku: “In my experience, ideas often come to me when I am taking a bath. Thus, I recommend taking a bath every day.” Perhaps there was something to Archimedes’ approach after all.
There are moments of elation, as in this experience described by Shahar Hod: “It was incredible to know that, at that moment, I was the only person in the Milky Way who knew the simple truth about the quantization of the black-hole horizon area: k = 3!” But most often, and perhaps by virtue of what Paolo Pani identifies in his essay as the sadistic nature of science, we watch these scientists as they confront the despair of witnessing their ideas die many deaths before they are finally allowed to get their hands on the reward.
Collective creativity
I had many personal conversations with colleagues who did not contribute because, they claim, none of their ideas are original – all stem from ideas already in existence. These people make up a considerable fraction of the community and say that their work relies on conversations with colleagues, meetings and workshops. In fact, even some of the colleagues who did send in their contributions assert that the majority of their work gets done in groups, by talking to other colleagues. I find this fascinating, because it is a testament to the existence of a kind of collective creativity – a spontaneous brainstorming that happens when scientists meet.
This concept contradicts the widespread notion that science advances only in big leaps made by an individual, through new big ideas that materialize out of the blue before instantly crystallizing into their final form. As Nicolas Yunes puts it in his essay: “The image of the lonely genius with her or his ‘Aha!’ moment is an illusion. The birth of an idea is much more of a community activity than we sometimes care to acknowledge.”
Courtesy: Adolphe Millot, Nouveau Larousse Illustré (1897–1904)
An idea is a neat little thing. Ideas can easily show up, uninvited, and disappear without warning. They are of such a fleeting nature that sometimes they seem to lead a life of their own.
For me, more interesting even than an idea, is the glimpse of an idea. Some ideas are big and clumsy, and what they lack in subtlety they compensate for with persistence. They stare us in the face until we look back. But others are far more elusive; we only feel they should be there. We look at something and think “well, that’s funny” or “this was not supposed to happen”, and frown or make a funny face. For a brief moment, the chaos of the world seems not to enter our ears as loudly as before, and we hear little gears grind in our head. When that happens we are at one end of a thread that, once followed, will take us to the feet of a new idea.
But too often we are too busy to care. Or simply forget, somehow, that an idea is something worth searching for, and only get the big, loud and persistent ones. My advice on the subject is: practise finding new ideas. Next time you feel the funny feeling and hear your gears grinding, focus. Do not let the moment fly away, unnoticed and unused. The thread is before you.
Do not wait for the ideas to come, go after them, and with a club. You will miss the first few, but that doesn’t matter. Experience will render your senses more acute, and practice will sharpen your hunting. Learn to examine your ideas against the background of evidence you have gathered, so that you can tell good from bad, useful from useless. As you do this, you will find not only a way of collecting good ideas, but the person you are will be changed by the quest.
And when you are struggling, remember: the truth will set you free, but first it will piss you off.
Pedro Figueira is a researcher at the Centro de Astrofísica da Universidade do Porto, Portugal
A journey by Djordje Minic
(Courtesy: Shutterstock/Redshinestudio)
I would like to recall the foggy and emotional beginnings of an ongoing journey regarding the foundations of quantum gravity and string theory.
The idea that quantum gravity, in the guise of a novel formulation of string theory, should represent a new framework for physics that goes beyond (and also sheds light on) the current framework based on quantum theory (as well as its puzzling relation to the classical world), presented itself to me in a rather vague form in the fall of 1997 as I was moving from Chicago to State College.
I still remember the initial, almost tactile, sensation of excitement and elation, as well as the feeling of dread, of profound fear at being completely wrong and deluded. These initial contradictory emotions have since then become almost an obsession as well as a concrete research programme.
The most important aspect of the flowering of this initially misty notion is that many friends and collaborators have provided at least partial sanity checks on the original intuition. Perhaps the most exciting and concrete realization of the idea that quantum gravity/string theory is a new framework for physics has come in my recent work with two dear friends: Laurent Freidel and Rob Leigh.
It is still not clear where this journey will take us, but it has so far been a wonderful example of Antonio Machado’s “Traveller, there is no path/The path is made by walking.”
Djordje Minic is a professor of physics at Virginia Tech, US
The first “loophole-free” measurement of the violation of Bell’s inequality by a quantum system has been claimed by physicists in the Netherlands, Spain and the UK. Their experiment involves entangling spins in diamonds separated by 1.28 km and then measuring correlations between the spins. The large separation between the diamonds and the relative ease with which the spins can be measured ensure that the experiment is performed properly, and its result confirms the existence of the seemingly bizarre concept of quantum-mechanical entanglement.
The idea of entanglement first arose back in 1935, when Albert Einstein, Boris Podolsky and Nathan Rosen pointed out that two quantum particles such as electrons can be in a state in which a measurement on one particle instantaneously affects the other – no matter how far apart they may be. This apparent paradox upset the trio because, in the world of classical physics, it would require information to travel faster than the speed of light. This relationship between particles was later dubbed entanglement and subsequent work showed that entanglement can be determined by looking at correlations between measurements made on the two particles, such as the direction in which the two electrons are spinning. Entangled particles have much stronger correlations than are allowed in classical physics – a property that can be exploited in quantum computers and other quantum technologies.
Upper limit
In 1964 the Northern Irish physicist John Bell famously calculated an upper limit on how strong these correlations could be if they were caused by classical physics alone – what has become known as Bell’s inequality. Correlations stronger than this limit, Bell reasoned, could occur only if the particles were entangled. Experiments using photons, ions and other entangled particles have confirmed that Bell’s inequality is indeed violated. However, these experiments are plagued by one or more loopholes that allow unforeseen effects of classical physics to cause the violation.
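In the widely used CHSH form of Bell's inequality, Bell's classical bound and its quantum violation can be checked with a few lines of arithmetic. The sketch below assumes a maximally entangled singlet-like state, for which quantum mechanics predicts a correlation of −cos(a − b) between spin measurements along angles a and b; this is a textbook illustration, not the specific statistic reported by any one experiment:

```python
import math

# CHSH variant of Bell's inequality:
#   S = E(a, b) - E(a, b') + E(a', b) + E(a', b')
# Local classical (hidden-variable) theories require |S| <= 2; quantum
# mechanics allows up to 2*sqrt(2) ~ 2.83 for a maximally entangled pair.

def E(a, b):
    """Quantum correlation for spin measurements along angles a and b
    (radians) on a maximally entangled singlet-like state."""
    return -math.cos(a - b)

# Standard angle choices that maximise the quantum violation
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(f"S = {S:.3f}")  # 2.828..., comfortably above the classical bound of 2
```

Any measured value of S above 2 therefore cannot be explained by classical correlations alone, provided the loopholes discussed below are closed.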
In this latest work, Ronald Hanson and colleagues at the Delft University of Technology, along with researchers at the Institute of Photonic Sciences in Barcelona and the diamond-maker Element Six in Oxford, have eliminated what they consider to be the two most significant loopholes that can arise in Bell-violation experiments. Crucially, they have done so simultaneously in one experiment, which had not been done before.
Channels unknown
One is the “locality” loophole, whereby information about the measurements is exchanged between detectors via unknown classical communication channels – thereby increasing the apparent correlation between the particles. Because this communication is classical, it cannot be transmitted faster than the speed of light and therefore this loophole can be closed by increasing the separation distance between the particle detectors and/or reducing the time it takes to make the measurement so that communication is impossible.
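The timing constraint this imposes can be illustrated with the article's own numbers (a rough back-of-the-envelope calculation; the actual experiment's event timing is analysed in much more detail):

```python
# To close the locality loophole, each side's measurement (basis choice
# plus spin readout) must finish before a light-speed signal could arrive
# from the other detector. With the 1.28 km separation used here:

c = 299_792_458        # speed of light in vacuum, m/s
separation = 1280.0    # detector separation, m

light_travel_time = separation / c
print(f"light travel time: {light_travel_time * 1e6:.2f} microseconds")
# Any classical signal needs at least ~4.3 us to cross between the labs,
# so the measurements must complete within that window.
```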
The second is the “detection” loophole, whereby an experimentalist is fooled into thinking a large correlation exists because an unknown aspect of the experiment causes it to favour the detection of particles with large correlations over those with small correlations.
Best of both particles
The locality loophole is easily eliminated by using photons as quantum particles, because photons are able to travel many kilometres without being scattered or absorbed. However, it is very difficult to detect each and every photon in such an experiment, which leaves it open to the detection loophole. Conversely, experiments involving electrons suffer from locality problems because they cannot be done over large distances. However, electron experiments can beat the detection loophole because electrons can be more reliably detected. What Hanson and colleagues have done is to use both photons and electrons in their experiment.
Their set-up consists of two diamonds separated by 1.28 km. Each diamond has a single nitrogen vacancy (NV) centre, which is essentially an electron spin. The measurement process begins with each NV centre emitting a photon that is entangled with its parent NV electron. Both photons travel to a third location that is hundreds of metres away from both diamonds. There, the photons are detected and when this measurement occurs, the NV electrons become entangled in a process called “entanglement swapping”. The next step is to quickly measure the spin states of the two electrons, which is done using a very efficient fluorescence technique.
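The entanglement-swapping step can be sketched in an idealized toy model. The code below assumes each NV spin starts in the Bell state |Φ+⟩ = (|00⟩ + |11⟩)/√2 with its emitted photon, and models the joint photon measurement as a projection onto |Φ+⟩; the real experiment uses an optical Bell-state measurement with beam splitters and single-photon detectors, which this sketch does not attempt to reproduce:

```python
import numpy as np

# Toy model of entanglement swapping. Qubit ordering: [NV1, photon1, NV2, photon2].
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)   # |Phi+> as a 4-vector

# Joint state of both NV-photon pairs as a rank-4 tensor (axes: N1, p1, N2, p2)
psi = np.kron(phi_plus, phi_plus).reshape(2, 2, 2, 2)

# Project the two PHOTONS (axes 1 and 3) onto <Phi+|
phi_mat = phi_plus.reshape(2, 2)
nv_state = np.einsum('ab,iajb->ij', phi_mat.conj(), psi).reshape(4)

p_success = np.vdot(nv_state, nv_state).real     # probability of this outcome
nv_state = nv_state / np.sqrt(p_success)         # renormalise

# The two NV spins, which never interacted directly, are now in a Bell state.
print("success probability:", p_success)                              # 0.25
print("fidelity with |Phi+>:", abs(np.vdot(phi_plus, nv_state))**2)   # 1.0
```

In this ideal case the swap succeeds with probability 1/4 and leaves the distant spins maximally entangled, which is the resource the Bell test then measures.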
The team ran 245 trials of the Bell test over a total measurement time of 220 h and found a very strong violation of Bell’s inequality. Furthermore, the team calculates that the large separation between the two diamonds and the rapid readout time of the spins closes the locality loophole, while the high efficiency of the spin readout technique closes the detection loophole.
No more freedom of choice
While the team points out that no Bell experiment can be free of every conceivable loophole, the researchers say that their experiment places the strongest restrictions to date on classical theories of quantum entanglement. They also say that their experiment could be modified to close more exotic loopholes such as “freedom of choice”, whereby unbeknown to the experimentalist the design of the experiment is somehow limited in a way that boosts the measured correlations.
The experiment is described in a preprint on arXiv. Update: The research was published in the journal Nature on 21 October 2015.
A new technique that accelerates positrons much more efficiently than conventional particle accelerators has been unveiled at the SLAC National Accelerator Laboratory in the US. The technology has the potential to make future positron accelerators more powerful yet more compact, and could also be used to boost the maximum collision energy of existing electron/positron colliders.
Some particle physicists believe that the next big facility after the Large Hadron Collider (LHC) should be a high-energy lepton collider that smashes electrons and positrons (antielectrons) together. Such a machine would produce cleaner, easier-to-interpret collisions than a hadron collider, and would create a far greater proportion of new particles per collision. To reach a high enough energy, such an electron/positron collider would have to run in a very long, straight line. This is because conventional accelerator technology based on radio-frequency cavities has a maximum accelerating gradient of about 100 MeV/m. The proposed 0.5 TeV International Linear Collider (ILC), for example, is expected to be about 30 km long, including two 11 km accelerator sections. For this reason, accelerator physicists are trying to develop new ways to accelerate electrons and positrons so the particles reach higher energies over shorter distances.
One such method is “plasma wakefield acceleration”, which was first demonstrated in 2007 and involves firing bunches of electrons into a plasma. An initial “drive” bunch repels the free electrons in the plasma, creating a charge-density wave. A second, trailing bunch of electrons “surfs” this wave and gains energy very rapidly. In 2014 Sebastien Corde and colleagues at the SLAC National Accelerator Laboratory in California, together with international collaborators, used this method to accelerate electrons at a gradient of 4.4 GeV/m. Unfortunately, the technique cannot be applied directly to the acceleration of positrons because there is no practical way of creating an “anti-plasma” containing free positrons.
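The figures quoted above give a rough sense of why these gradients matter. The following illustrative arithmetic ignores focusing, staging, fill factor and other real-machine overheads, and simply divides beam energy by gradient:

```python
# Back-of-the-envelope comparison of accelerating-structure lengths
# (illustrative only; real machines carry substantial overheads).

conventional_gradient = 0.1    # GeV/m (~100 MeV/m for RF cavities)
wakefield_gradient    = 4.4    # GeV/m (2014 SLAC electron result)

beam_energy = 250.0            # GeV per beam for a 0.5 TeV collider

conventional_length = beam_energy / conventional_gradient
wakefield_length    = beam_energy / wakefield_gradient

print(f"conventional: {conventional_length:.0f} m of active structure per beam")
print(f"plasma wakefield: {wakefield_length:.0f} m per beam")
print(f"reduction factor: {conventional_length / wakefield_length:.0f}x")
```

Even on this crude estimate, the kilometre-scale linacs of a conventional design shrink to tens of metres of plasma, which is what makes the approach attractive for compact colliders.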
One bunch, not two
In this latest research, the same team has modified its technique to allow positrons to be accelerated. The process begins with a single bunch of positrons from a conventional accelerator that is injected into a lithium plasma. Under the right conditions, the positron bunch will interact with the plasma, causing the front portion of the bunch to behave like the drive bunch in an electron accelerator. The front portion of the bunch will slow as it feeds energy into the plasma electrons. Meanwhile, the back end of the same bunch plays the role of the trailing bunch, drawing energy back out of the plasma and being accelerated.
“The overall energy of the bunch is obviously not going to be increased, because energy must be conserved,” explains Corde. “We are just transferring energy from the front to the tail. What’s important for particle colliders is that each particle has a very large energy.” The team has dubbed the technique “self-loaded plasma wakefield acceleration”. The team accelerated the positrons at the back of the bunch at a rate of about 5 GeV per metre.
Multi-stage acceleration
Today, the researchers believe the plasma-wakefield technique could double the energy of particles in a conventional accelerator, allowing particles in the ILC to reach 1 TeV before collision. Further optimization may boost this multiplication factor, perhaps as high as five. Ultimately, it may be possible to construct a multi-stage plasma wakefield accelerator, in which the same bunches of particles could be accelerated multiple times. However, as only a fraction of the positrons are accelerated in each stage, simply separating out the accelerated positrons over and over again would rapidly produce a very small bunch. The researchers therefore aim to separate the accelerated trailing bunch from one stage and manually load it into the back of a fresh bunch of positrons. “We have a good idea that it could work but it’s also a technically challenging experiment,” says Corde. “We will work on that.”
“The electron was relatively easy to accelerate, but the positron was actually a big deal,” says beam physicist Philippe Piot of Northern Illinois University in the US, who says the research is “the first experimental proof” that plasma wakefield acceleration can accelerate positrons. He says that more research is needed into the scattering of both electrons and positrons during this type of acceleration before wakefield acceleration can be applied in any accelerator. “There are still a lot of issues, but given the progress that has been made over the last decade in this type of acceleration, I personally would be optimistic,” he says.
The acceleration technique is described in Nature.