A new type of super-resolution microscopy developed by researchers in Germany, Argentina and Sweden combines the merits of two Nobel-prize-winning techniques, attaining nanometre-scale resolution more quickly and with fewer emitted photons than previously possible.
The resolution limit of traditional optical microscopy is set by the Rayleigh criterion: if two features are separated in space by less than half a wavelength, diffraction will blur the light too much for the features to be distinguished. Super-resolution microscopy techniques surpass this limit by selectively exciting individual fluorescent groups (fluorophores) on molecules while their neighbours remain dark.
Single-molecule microscopy techniques, for which Eric Betzig and William Moerner each shared a third of the 2014 Nobel Prize for Chemistry, switch fluorophores on and off randomly and use the arrival positions of the emitted photons at a camera to reconstruct the location of each fluorophore. However, the photons are diffracted as they escape, leaving some uncertainty about where a single photon originated. Detecting many photons from the same fluorophore can reduce this uncertainty, but to get to single-molecule (nanometre) resolution requires tens or even hundreds of thousands of photons, and most fluorophores degrade (or bleach) before they emit this much light.
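As a rough, standard estimate (not a figure quoted in the work itself), the precision with which a single emitter can be localized improves only as the square root of the number of detected photons:

\[ \sigma_{\mathrm{loc}} \;\approx\; \frac{\sigma_{\mathrm{PSF}}}{\sqrt{N}} \]

With a diffraction-limited spot of width σPSF ≈ 100 nm, even this idealized scaling calls for N ≈ 10,000 photons to reach 1 nm – and background light and camera pixelation push the real requirement higher still.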
Doughnuts of light
Stimulated emission depletion (STED) microscopy, which won Stefan Hell the remaining third of the Nobel prize, uses two beams of light. The first illuminates the sample with a focused beam, turning fluorophores on. The second beam is focused to a doughnut shape in the sample and suppresses the fluorescence everywhere in the focal region – except at the doughnut’s central hole. By scanning the beams jointly over the sample, the spatial distribution of all the fluorophores can be determined. Unfortunately, to suppress fluorescence perfectly except in one single-molecule-sized spot would require a doughnut-shaped beam powerful enough to destroy the sample. Both techniques, therefore, struggle to obtain molecular-scale resolution.
Microscope images of a square array of fluorophores taken using MINFLUX (left) and a conventional super-resolution microscope (right). (Courtesy: Klaus Gwosch / Max Planck Institute for Biophysical Chemistry)
Now, Hell and colleagues at the Max Planck Institute for Biophysical Chemistry in Göttingen and several other institutes have developed a new technique called maximally informative luminescence excitation probing (MINFLUX). They switch individual molecules on as in single-molecule microscopy, but for determining the molecules’ position, they scan a single, doughnut-shaped beam across the sample, as in STED microscopy.
When their photodetector records a signal, the researchers know that a fluorophore is nearby. As the beam is doughnut-shaped, with zero intensity at its centre and increasing intensity further away, the intensity of the fluorescence can reveal the intensity of the light incident on the fluorophore and, therefore, how far the fluorophore is from the focus of the beam.
Beam structure
By aiming the beam at four points near the fluorophore and recording the fluorescence intensity each time, the researchers can easily work out how far the fluorophore is from each point and deduce its location with nanometre-scale precision. “In single-molecule microscopy, the co-ordinate system is given by the pixels of the camera,” explains Hell. “Here, it’s the structure of the beam that defines position.”
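The localization step can be illustrated with a toy calculation. The sketch below is ours, not the researchers’ code: it assumes the doughnut intensity grows quadratically with distance from its central zero, that the beam positions and the molecule’s brightness are known, and it uses a simple least-squares fit in place of the maximum-likelihood estimation used in practice.

```python
# Toy MINFLUX-style localization: estimate a fluorophore's position from the
# fluorescence counts recorded with a doughnut beam parked at four known spots.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

true_pos = np.array([3.0, -2.0])     # fluorophore position in nm (unknown in a real experiment)
beam_positions = np.array([[0, 0], [50, 0], [0, 50], [50, 50]], dtype=float)  # doughnut-zero positions, nm
brightness = 2.0                     # counts per nm^2 of squared distance (assumed known here)

def expected_counts(pos):
    # Near its zero the doughnut intensity ~ |r - r_beam|^2, so counts encode distance
    d2 = np.sum((beam_positions - pos) ** 2, axis=1)
    return brightness * d2

counts = rng.poisson(expected_counts(true_pos))   # shot-noise-limited measurement

# Least-squares estimate of the position from the four count values
fit = least_squares(lambda p: expected_counts(p) - counts, x0=[25.0, 25.0])
print("estimated position (nm):", fit.x)          # lands within a few nm of true_pos
```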
It’s a very clever idea and it’s remarkably simple
Adam Cohen, Harvard University
The researchers used MINFLUX to map the positions of fluorophores tagged onto specially shaped sequences of DNA called DNA origamis, and thus to calculate their shapes. In the first experiment, using fluorophores separated by 11 nm, the researchers needed only 50 s of imaging time to locate all those that emitted at least 500 photons in total with an average spatial uncertainty of 2.1 nm. Then, using fluorophores only 6 nm apart, they identified those emitting more than 1000 photons to within around 1.2 nm in 120 s.
Twice as good
These precisions are both twice as good as the best precision theoretically achievable using single-molecule microscopy with the same number of photons. The researchers also tracked single molecules inside living bacteria – something only possible because of the faster speed of MINFLUX.
“MINFLUX is a truly remarkable breakthrough with a highly innovative method design,” says Xiaowei Zhuang of Harvard University in the US – one of the inventors of single-molecule super-resolution microscopy. “Its amazing ability to localize molecules at an ultra-high precision with such a low photon number will greatly benefit our understanding of molecular interactions inside cells.”
Adam Cohen, also at Harvard, agrees: “It’s a very clever idea and it’s remarkably simple,” he says. “People have been trying to track molecules for a variety of different applications for the last 15 or so years, and this relatively simple concept will improve that quite a bit, at least under some circumstances.” He sees a potential issue with the use of the technique in more complex cellular structures, however: “If you start to get stray photons coming not from your molecule but from other sources, those stray photons will throw off the precision of the tracking,” he explains. “As you get to having more and more fluorophores in your sample, it becomes increasingly difficult to ensure that all but one of them will be off.”
There are some strange uses for a spent tea bag after it’s made your cuppa, but work published in Scientific Reports really takes the biscuit. A group of researchers in Korea have managed to demonstrate an enhanced carbon anode structure from waste tea leaves that could be a path to cheap, high-capacity lithium ion batteries.
Lithium ion batteries power everything from mobile phones to electric cars. They normally consist of a lithium cathode and a graphite anode in an organic solvent. The graphite anode is cheap, stable and has a suitable electrochemical potential, but it poses a fundamental limit to battery capacity in terms of how many lithium ions it can absorb. If lithium ion batteries are to meet the growing demands for an energy storage solution that can power efficient vehicles and allow renewable energy to displace fossil fuels, then a better anode material must be found.
Synthesis process of the two types of carbon synthesised from tea leaves. Credit: Scientific Reports
Some of this work towards better anode materials focusses on developing a porous carbon anode that can maintain the chemical advantages of graphite while benefiting from the higher surface area of the porous nanostructure to improve on its capacity limitation. These methods are held back from mass production by the need for high-quality carbon and complex manufacturing processes. However, a recent study led by Dong-Wan Kim at Korea University demonstrates how an appropriate porous carbon structure can be made cheaply through a simple series of steps applied to waste tea leaves.
The added acid advantage
Kim and colleagues at Korea University and the Korea Institute of Science and Technology compared two methods. In both, the tea is washed, dried, crushed and carbonized, but one also includes an acid treatment step with hydrochloric acid. They found that the acid treatment step improved the material in a number of ways.
The acidification process seemed to remove unwanted impurities. Traces of several metals amounting to 1% by weight in the untreated sample were not found at all in the acidified sample. What’s more, the structure of the pores was different. The unacidified sample had low porosity with relatively large pore size (average 5.69 nm). The acidified carbon had a hierarchically porous structure with a mixture of larger and smaller pores, higher porosity and an average pore size of 2.48 nm. This had the effect of increasing the surface area by almost a factor of 100.
Scanning electron microscopy (SEM) images of the carbon structures. Credit: Scientific Reports
To compare the electrical performance of the two prospective anode materials, Changhoon Choi – the Korea University researcher who performed and analysed the experiments – used them to make lithium ion coin cells. The acid-treated material showed capacities of 479 mAh g⁻¹, higher than the upper limit of 372 mAh g⁻¹ for graphite anodes and the capacity of 270 mAh g⁻¹ for the sample with no acid treatment. After 200 charging and discharging cycles both types of cell showed stable efficiency of charge transfer above 98.5%, indicating that the reversibility of this electrode is very good and highlighting what good use can be made of an abundant resource – waste tea leaves.
Working in space: A still image from the NASA spacewalk video. (Courtesy: NASA)
By Hamish Johnston
NASA is live streaming a video of a spacewalk on its Facebook page, and you just might be able to catch it live from the International Space Station – or watch it again. The video shows astronauts Shane Kimbrough and Peggy Whitson upgrading the space station’s power system – and it looks like hard work to me.
A new metamaterial in which the Hall-effect response of electrons to a magnetic field can be controlled and even inverted has been created by Christian Kern, Muamer Kadic and Martin Wegener at the Karlsruhe Institute of Technology, Germany. The Hall effect involves the deflection of electrons as they travel through a conductor in an applied magnetic field. The deflection creates a voltage across the width of the conductor. Very sensitive measurements of magnetic-field strength can be made by measuring this voltage, which can be expressed in terms of the Hall coefficient. This is normally an intrinsic property of a material that depends on whether the charge carriers are electrons (resulting in a negative Hall coefficient) or holes (positive coefficient). The new metamaterial has a chainmail-like structure of interlocking micron-sized rings of hollow semiconductor. By changing how densely packed the structure is, Kern and colleagues are able to fine-tune the Hall coefficient and even reverse its sign. The research is described in Physical Review Letters and could lead to the development of specialized magnetic-field detectors.
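For reference, the textbook relations behind this (not specific to the new metamaterial) connect the measured voltage to the Hall coefficient:

\[ V_{\mathrm{H}} = \frac{I B}{n q t}, \qquad R_{\mathrm{H}} = \frac{1}{n q} \]

where I is the current, B the magnetic flux density, t the conductor’s thickness, n the carrier density and q the carrier charge – which is why the sign of the Hall coefficient normally simply follows the sign of the charge carriers.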
XENON100 excludes DAMA dark-matter detection at 5.7σ
The upper photomultiplier tube (PMT) array of XENON100 contains 98 PMTs. (Courtesy: CC BY-SA 4.0/Jpienaar130)
For nearly 20 years, successive DAMA experiments deep underground at the Gran Sasso National Laboratory in Italy have reported a strong annual oscillation in the signal from their dark-matter detectors. The statistical significance of the oscillation is now a whopping 9.3σ – well beyond the 5σ that signifies a discovery. Some physicists believe that this is the first direct detection of dark matter. However, others disagree because apart from the CoGeNT detector in the US, which has seen a similar but smaller effect, no other dark-matter searches across the globe have detected an oscillation. Now, physicists working on the XENON100 dark-matter experiment at Gran Sasso say that they have also failed to see the annual oscillation after a run of four years. Writing in a preprint on arXiv, the team claims that its data exclude, at a statistical significance of 5.7σ, dark-matter interactions as the cause of the DAMA oscillation. So now physicists are faced with the astonishing situation that one experiment (XENON100) has “discovered” that the discovery of another experiment has not occurred.
NASA selects asteroid missions
Artist’s impression of the Lucy mission to Jupiter’s Trojan asteroids. (Courtesy: NASA)
NASA has chosen its next two missions as part of the space agency’s Discovery Program. Psyche and Lucy were selected from five proposals and will each cost approximately $450m to develop. Psyche will help scientists understand how planets and other bodies separated into layers – including cores, mantles and crusts – by studying the asteroid 16 Psyche, which is around 200 km in diameter and consists of almost pure nickel–iron metal. Psyche is targeted to launch in October 2023, arriving at the asteroid in 2030, following a Mars fly-by in 2025. Lucy, meanwhile, will visit Jupiter’s Trojan asteroids and study the origins of giant planets by looking at the fragments left over from their formation. The probe is expected to launch in 2021 and arrive at its target by 2027. Created in 1992, NASA’s Discovery Program funds missions to explore the solar system with focused scientific goals. In the past it has funded 12 probes, including the MESSENGER mission to Mercury, the Kepler exoplanet hunter, and the InSight craft that is scheduled to launch for Mars in 2018.
You can find all our daily Flash Physics posts in the website’s news section, as well as on Twitter and Facebook using #FlashPhysics.
Late last year, I visited the fast-growing physics powerhouse of Beijing, China. Along the way I took snapshots of the people, events and labs I visited – a selection of which I’ve put together here to share my highlights.
During a break from the serious business of science journalism, I visited Beihai Park, a 1000-year-old former imperial park close to the Forbidden City in central Beijing. While looking up at this ornate pavilion ceiling, I couldn’t help being reminded of the ATLAS detector at CERN.
A new method of fabricating nanoscale optical crystals capable of converting infrared to visible light has been developed by researchers in Australia, China and Italy. The new technique allows the crystals to be placed onto glass and could lead to improvements in holographic imaging – and even the development of improved night-vision goggles.
Second-harmonic generation, or frequency doubling, is an optical process whereby two photons with the same frequency are combined within a nonlinear material to form a single photon with twice the frequency (and half the wavelength) of the original photons. The process is commonly used by the laser industry, in which green 532 nm laser light is produced from a 1064 nm infrared source. Recent developments in nanotechnology have opened up the potential for efficient frequency doubling using nanoscale crystals – potentially enabling a variety of novel applications.
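In energy terms the process is simple bookkeeping – two photons of frequency ω merge into one photon of frequency 2ω, so the wavelength halves:

\[ \hbar\omega + \hbar\omega \;\rightarrow\; \hbar(2\omega), \qquad \lambda \;\rightarrow\; \frac{\lambda}{2} \]

which is exactly how 1064 nm infrared light becomes 532 nm green light.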
Materials with second-order nonlinear susceptibilities – such as gallium arsenide (GaAs) and aluminium gallium arsenide (AlGaAs) – are of particular interest for these applications because this lowest-order nonlinearity makes frequency conversion particularly efficient.
Substrate mismatch
To be able to exploit second-harmonic generation in a practical device, these nanostructures must be fabricated on a substrate with a relatively low refractive index (such as glass), so that light may pass through the optical device. This is challenging, however, because the growth of thin films of GaAs-based crystals – and of III–V semiconductors in general – requires a crystalline substrate.
“This is why growing a layer of AlGaAs on top of a low-refractive-index substrate, like glass, leads to unmatched lattice parameters, which causes crystalline defects,” explains Dragomir Neshev, a physicist at the Australian National University (ANU). These defects, he adds, result in unwanted changes in the electronic, mechanical, optical and thermal properties of the films.
The nanocrystals are so small they could be fitted as an ultrathin film to normal eye glasses to enable night vision
Dragomir Neshev, ANU
Previous attempts to overcome this issue have led to poor results. One approach, for example, relies on placing a buffer layer under the AlGaAs films, which is then oxidized. However, these buffer layers tend to have higher refractive indices than regular glass substrates. Alternatively, AlGaAs films can be transferred to a glass surface prior to the fabrication of the nanostructures. In this case the result is poor-quality nanocrystals.
Best of both
The new study was done by Neshev and colleagues at ANU, Nankai University and the University of Brescia, who combined the advantages of the two different approaches to develop a new fabrication method. First, high-quality disc-shaped nanocrystals about 500 nm in diameter are fabricated using electron-beam lithography on a GaAs wafer, with a layer of AlAs acting as a buffer between the two. The buffer is then dissolved, and the discs are coated in a transparent layer of benzocyclobutene. This can then be attached to the glass substrate, and the GaAs wafer peeled off with minimal damage to the nanostructures.
The development could have various applications. “The nanocrystals are so small they could be fitted as an ultrathin film to normal eye glasses to enable night vision,” says Neshev, explaining that, by combining frequency doubling with other nonlinear interactions, the film might be used to convert invisible, infrared light to the visible spectrum.
If they could be made, such modified glasses would be an improvement on conventional night-vision binoculars, which tend to be large and cumbersome. To this end, the team is working to scale up the size of the nanocrystal films to cover the area of typical spectacle lenses, and expects to have a prototype device completed within the next five years.
Security holograms
Alongside frequency doubling, the team was also able to tune the nanodiscs to control the direction and polarization of the emitted light, which makes the film more efficient. “Next, maybe we can even engineer the light and make complex shapes such as nonlinear holograms for security markers,” says Neshev, adding: “Engineering of the exact polarization of the emission is also important for other applications such as microscopy, which allows light to be focused to a smaller volume.”
“Vector beams with spatially arranged polarization distributions have attracted great interest for their applications in a variety of technical areas,” says Qiwen Zhan, an engineer at the University of Dayton in Ohio, who was not involved in this study. The novel fabrication technique, he adds, “opens a new avenue for generating vector fields at different frequencies through nonlinear optical processes”.
With their initial study complete, Neshev and colleagues are now looking to refine their nanoantennas, both to increase the efficiency of the wavelength-conversion process and to extend the effects to other nonlinear interactions such as down-conversion.
The research is described in the journal Nano Letters.
In 1990 the distinguished theoretical physicist John Wheeler coined the phrase “it from bit” to encapsulate a radical new view of the universe that he had been developing over the preceding 20 years:
“It from bit symbolizes the idea that every item of the physical world has at bottom…an immaterial source and explanation; that which we call reality arises in the last analysis from the posing of yes-or-no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin.”
In other words, what Wheeler proposed is that at the most fundamental level, all of physics has a description that can be articulated in terms of information. Despite Wheeler’s stature – his scientific career ran from early work with Niels Bohr on nuclear fission in the 1930s to quantum electrodynamics, general relativity and the foundations of quantum mechanics – this radical idea received little support at the time. However, in hindsight, we can now see that it was truly visionary.
Fast-forward a quarter of a century and a modernized version of Wheeler’s idea is now taking shape. Quantum-information science, which aims to develop new ultrafast computers based on the principles of quantum theory, is forming an exciting confluence with high-energy theory, which studies the elementary subatomic particles and the fundamental forces in nature.
One twist is that in the time since Wheeler’s original work, our understanding of information in quantum mechanics has advanced tremendously. While Wheeler emphasized bits, it appears that intrinsically quantum-mechanical forms of information – now known as “qubits” – are more fundamental. In recent years a growing number of theorists have been exploring whether these curious quanta of information may hold the answer to combining quantum theory and general relativity into a quantum theory of gravity.
Language problem
Despite this exciting convergence, high-energy physics and quantum information theory remain distinct disciplines and communities. Both are mature fields that have grown and developed to study their own problems. A primary challenge to interdisciplinary co-operation, though, is that as knowledge builds, so too does the language that encodes that knowledge. Specialization creates communities with their own dialects and tools. So when a physicist steps from their area of expertise and into another, they can easily get lost in a thicket of unfamiliar terms and descriptions, even before grappling with the new physical principles and phenomena.
In August 2015, with the support of the New York City-based Simons Foundation, the “It from Qubit” collaboration was formed to build new bridges promoting communication and collaboration between the two research communities. We chose the name both as a homage to Wheeler and, by replacing “bit” with “qubit”, to emphasize the crucial role of entirely new ideas and techniques that would surely have surprised and delighted him.
Consisting of 17 senior researchers from the US, Canada, the UK, Japan, Israel and Argentina, plus a growing team of postdoctoral fellows, the collaboration is trying to answer an ambitious list of questions. Is space–time held together by quantum entanglement? Does quantum gravity allow information processing even more powerful than quantum computers? Is there a connection between computational complexity and the principle of least action? The list goes on, but the goal of the collaboration is not just to make progress on these specific problems. Perhaps even more importantly, it aims to motivate, attract and train a broader community of scientists to work at the interface of quantum information and high-energy physics.
Scanning new horizons
A first indication of the new information-based perspective that is a key focus of the It from Qubit collaboration is found by rewinding all the way back to 1972. At that time, Wheeler’s graduate student Jacob Bekenstein used various thought experiments to argue that in a black hole, the area A of the event horizon – the surface of “no return” dividing the interior and the exterior – should be proportional to its entropy, S, a quantity whose usual interpretation is purely statistical. This suggestion hinted at some unseen microscopic structure for black holes and hence ran contrary to the common wisdom of the day that black holes were simply elegant classical geometries solving the equations of Albert Einstein’s general relativity. Bekenstein’s idea originally met with strong opposition, but it was vindicated just a few years later when Stephen Hawking showed that the laws of quantum mechanics require black holes to radiate much like black bodies at a finite temperature, and indeed possess entropy just like ordinary thermal systems. The Bekenstein–Hawking formula S = A/4G (where G is the gravitational constant) is now widely regarded as one of the most remarkable discoveries in fundamental physics. With hindsight, we can also see that this elegant formula was the first hint of a connection between information and the structure of space–time, as it encodes information about the statistical-mechanical microstates comprising the black hole in the geometry of the space–time itself.
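For reference, restoring the constants that the compact natural-units expression hides gives the standard form

\[ S = \frac{k_{\mathrm{B}} c^{3} A}{4 G \hbar} \]

which makes explicit that the entropy counts the horizon area in units of the (tiny) Planck area.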
Common tongue: At the It from Qubit summer school at the Perimeter Institute for Theoretical Physics, participants strove to learn the basics of each other’s fields of expertise. (Courtesy: JB Park)
One big puzzle, though, was an apparent mismatch. In standard thermal systems, entropy is proportional to volume but a black hole’s entropy is only proportional to its area. This is one feature that made the Bekenstein–Hawking entropy so unusual. Moreover, when gravity is taken into account, this strange proportionality of entropy with area infects ordinary matter as well. That is, trying to put too much matter and, hence, entropy into a given volume leads to gravitational collapse and the creation of a black hole, whose entropy is, once again, only proportional to its area. As a result, the most entropy, and hence information, that can ever be packed into a given region of space is proportional to the region’s area, not its volume.
In 1984 Rafael Sorkin made headway on this puzzle when studying quantum correlations in quantum field theory. He found that entropy provided a measure of the correlations between degrees of freedom in different regions and that the largest contribution was, in fact, proportional to the area of the boundary separating the two regions, a result very reminiscent of the Bekenstein–Hawking entropy. With modern developments, we can recognize Sorkin’s calculation as evaluating a quantity known in the quantum-information community as “entanglement entropy”. This concept has become central to the discussion of the quantum physics of black holes.
Holographic harbinger
The Bekenstein–Hawking formula motivated other pioneers, such as Gerard ‘t Hooft and Leonard Susskind, to begin advocating for a “holographic” formulation of quantum gravity. As with regular optical holograms, this is the idea that the information in a 3D volume can be encoded on a 2D surface.
In 1997 Juan Maldacena produced a realization of this holographic principle when he found a relationship between two physical theories: quantum gravity in a peculiar kind of space–time called anti-de Sitter space (AdS); and a special kind of quantum field theory called conformal field theory (CFT), in one fewer spatial dimensions. In fact, Maldacena’s “AdS/CFT correspondence” postulates that these two theories provide two different descriptions of the same physical phenomena. AdS is a peculiar space–time geometry where it is possible to stand in the middle and shine a light at the “boundary”. Even though the boundary is infinitely far away, the light beam is reflected and returns in finite time. In the AdS/CFT correspondence, the CFT can be thought of as being defined on the boundary, while the quantum-gravity theory lives on the inside, which is usually called the “bulk”. As bizarre as the correspondence may seem at first sight, it is an idea that has survived the scrutiny of thousands of theoretical physicists over almost 20 years. In recent years, this holographic duality has become the central arena for investigations into the new convergence of high-energy physics and quantum information.
Intriguingly, a generalized version of the Bekenstein– Hawking formula reappeared in a 2006 collaboration of condensed-matter theorist Shinsei Ryu and string theorist Tadashi Takayanagi. They proposed that calculating the entanglement entropy in the boundary CFT could be translated into the gravitational question of evaluating A/4G on certain special surfaces in the bulk AdS space–time. For a given region in the boundary theory, their prescription was essentially to imagine letting gravity pull the region down into the bulk geometry while keeping its edges pinned to the boundary region. The resulting surface should minimize its area in the same way a soap bubble does when pinned within a wire frame. Inserting the area of the resulting bubble into the formula A/4G then yields the entanglement entropy of the region in the boundary CFT. At the time, this provocative idea had appeared like a rabbit pulled from a magician’s hat. Over time, however, Ryu and Takayanagi’s geometric prescription passed increasingly stringent tests of its quantum-information-theoretic properties. Now we can even derive their formula from a careful translation of calculations in the boundary theory to the bulk, and a variety of new insights have emerged from carefully studying this remarkable result.
In particular, the Ryu–Takayanagi formula motivated Mark van Raamsdonk, Brian Swingle and others to start developing the idea that entanglement is key to the emergence of space–time itself. In quantum mechanics, entanglement between different particles joins them into a whole that is fundamentally more than the sum of its parts. Van Raamsdonk and Swingle speculated that the enormous amount of entanglement present in the boundary CFT was effectively stitching together the microscopic degrees of freedom of the bulk quantum-gravity theory to produce the AdS space–time geometry, something very different indeed from the sum of the boundary parts. Those initially vague speculations have quickly given way to more precise statements. Recently, van Raamsdonk and his collaborators have even managed to show that the field equations of general relativity in the bulk emerge from the structure of entanglement in the boundary theory.
While entanglement entropy remains at the forefront of the studies of quantum information and quantum gravity, a growing list of other concepts including Rényi entropy, relative entropy, quantum error correction and circuit complexity are each finding a place in this discussion.
Meeting of minds
In July last year, the Perimeter Institute for Theoretical Physics (PI) in Canada hosted the It from Qubit collaboration’s opening gambit in trying to bridge some of the divides between fields. As one of our first tasks was to improve our own fluency in both languages, we decided to organize this meeting as a combination of both a workshop and a summer school.
Interest in the summer school far exceeded our expectations, to put it mildly. After doubling the originally planned enrolment and filling PI to the gills, we still had to turn away more than 200 applicants. In the end, 180 researchers from around the world converged on PI. Many of those students and postdocs who couldn’t attend physically due to space limitations did so virtually through live webcasts of the lectures and seminars.
Participants represented a wide array of fields, including quantum gravity, particle theory, condensed-matter physics, foundations of quantum theory, quantum information and computer science. They also represented a broad range of experience, from graduate students to senior professors. But everyone had a common denominator: they were there to learn.
To accommodate the wide range in backgrounds of the students and experts alike, we knew from the start that this meeting needed to be rather different from a standard conference or summer school. After lengthy discussions, we came up with a programme that offered a broad range of activities, from introductory lectures to cutting-edge research seminars. There were often two or three events running in parallel, so it was up to each “student” to decide how to participate at their own level and make the most of the meeting. In problem-solving sessions, junior graduate students could find themselves working alongside world-leading researchers, both diving into unfamiliar waters. Animated conversations involving both senior researchers and students ran the gamut from hashing out basic concepts from the morning’s lecture over lunch, to chalkboard discussions of people’s latest research ideas in front of PI’s reflecting pool.
There was a remarkable enthusiasm and energy animating the entire two weeks of the meeting. For us as organizers, the two weeks passed in something of a blur, but certain moments stand out: getting to know every nook and cranny of PI (and its temporary occupants) in a cheerfully nerdy scavenger hunt, watching our colleagues’ beautiful introductory lectures, which finally put to rest some of our most egregious confusions about each other’s fields, and some exhilarating seminars, such as the one by postdoc Daniel Harlow on AdS/CFT as a quantum error correcting code. We’ve lost track of the number of attendees who enthusiastically informed us that they expect their attendance to result in new projects and research.
One of the goals of the It from Qubit collaboration is to train a new generation of researchers to be fluent in both quantum-information science and fundamental physics, because that’s where the real progress will come from. They represent the future of the field, unquestionably. At the meeting, Ted Jacobson, a quantum-gravity expert from the University of Maryland, US, observed that much of the progress in the field is already being driven by younger researchers. “Of the 10 people who are producing the most interesting new ideas right now, sparking the field, probably eight of them are young people,” he said. “It’s fantastic, and it gives me great hope for the field…They have the benefit of all the hard work that came before and the revelations of string theory and AdS/CFT duality. And now they’re charging forward with it and really making sense of it.”
Found in translation
Is space–time built from entanglement? Are black holes nature’s most powerful computers? We don’t know yet, but, regardless, it is genuinely exciting to see ideas that were originally formulated for completely different reasons having resonance and utility; a huge number of new ideas get found in translation. It’s given us new perspectives on our own fields. Things that we thought were routine and uninteresting are revealed to be much more profound, while things that we thought were crucially important recede a little into the background. For example, nothing is more routine to an information theorist than the fact that adding noise smudges distinguishability. But in translation, that routine fact becomes an energy constraint on the space–times that can emerge from quantum mechanics. The routine becomes profound. The process of translation is still in its infancy but the growing community of bilingual researchers is rapidly accelerating the pace of progress.
Video recordings of the lectures at the It from Qubit summer school, along with problem sets and solutions, can be found online
Quantum gravity: Einstein meets Schrödinger
(Courtesy: iStock/sakkmesterke)
Quantum theory provides us with a description of physical phenomena on the scale of molecules, atoms and smaller in terms of wavefunctions and probabilities. In contrast, Einstein’s general theory of relativity gives an elegant geometric description of gravity, which dominates the physics of our universe at very long scales. Together, quantum theory and gravity provide us with two remarkably successful and yet exceptionally dissimilar descriptions of the universe.
When one tries to merge quantum theory with gravity, a puzzling new fundamental length scale appears: lP² = ħG/c³, as observed by Max Planck in his seminal 1899 paper (Sitzungsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin 5 440), where ħ is Planck’s constant divided by 2π, G is the gravitational constant and c is the speed of light in a vacuum. This “Planck length” takes an incredibly small value: approximately 10⁻³⁵ m – some 25 orders of magnitude smaller than the size of an atom, and some 17 orders of magnitude smaller than the smallest distances probed in today’s best experiments. Hence, the study of quantum gravity is usually seen as an exotic regime within the domain of high-energy physics, which aims to understand the fundamental building blocks of our physical universe at the smallest scales. Here the Planck length becomes a central challenge to formulating a consistent theory of quantum gravity as space–time itself experiences dramatic fluctuations at these distances.
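Plugging the measured constants into this definition gives a feel for just how small the scale is:

\[ \ell_{\mathrm{P}} = \sqrt{\frac{\hbar G}{c^{3}}} \approx \sqrt{\frac{(1.05\times10^{-34}\,\mathrm{J\,s})(6.67\times10^{-11}\,\mathrm{m^{3}\,kg^{-1}\,s^{-2}})}{(3.0\times10^{8}\,\mathrm{m\,s^{-1}})^{3}}} \approx 1.6\times10^{-35}\,\mathrm{m} \]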
The challenge of combining quantum theory and gravity into a single consistent framework has eluded theoretical physicists for over 80 years now. However, in recent years, high-energy theorists have been making exciting progress by borrowing techniques and concepts originally developed in the study of quantum information. The latter is primarily interested in harnessing the weirdness of quantum theory to develop new systems for ultrafast computation and ultra-secure communications. However, it has become clear that quantum information theory is also a powerful new lens through which to examine the conundrums of quantum gravity.
How entanglement is key to the quantum world
(Courtesy: iStock/traffic_analyzer)
No matter how many internal degrees of freedom are in a quantum system, it is described by just one global wavefunction or state. Hence, it is generally impossible to assign a precise state to the individual parts of a quantum system. Even when the global state is certain, the individual parts will be correlated and uncertain, a phenomenon known as quantum entanglement. This feature was first highlighted in a famous thought experiment by Albert Einstein, Boris Podolsky and Nathan Rosen (1935 Phys. Rev. 47 777). In the form in which the thought experiment is usually discussed today, it involves measuring the polarization of a pair of photons in what we now call an entangled state, and finding that the results are correlated no matter how far apart the measurements are made.
While the aim of their paper was to expose a flaw in quantum mechanics, we now understand that entanglement is one of the key features distinguishing quantum physics from classical physics. Entanglement has been found to be a powerful resource for cryptography and a necessary prerequisite for ultrafast quantum computations. Moreover, it is an essential characteristic underlying many new exotic phases and phenomena in condensed-matter physics. And, as we hope our article conveys, entanglement has taken a role of increasing importance in quantum field theory and quantum gravity in recent years.
Because of the connection between entanglement and uncertainty, the amount of entanglement can be measured in terms of a kind of entropy. In statistical thermodynamics, one is generally uncertain about the global state of the system, which might be represented by a microcanonical or canonical ensemble. The associated entropy is then S = –k_B Σ_i p_i log p_i, where p_i is the probability that the system will be found in the ith microstate and k_B is Boltzmann’s constant.
In John von Neumann’s extension of this formula to quantum theory, the probabilities p_i are replaced by the eigenvalues of the corresponding density matrix. There is no reason to restrict this quantum formula to the standard ensembles of statistical mechanics, however. In an entangled quantum system, the density matrix of a subsystem can have a non-zero entropy even when the whole has no entropy at all. That subsystem entropy is known as entanglement entropy. In a many-body system that can be partitioned in many different ways, the associated collection of entanglement entropies gives a detailed map of the correlations between subsystems. Rafael Sorkin was the first to examine partitioning the degrees of freedom in a quantum field theory by considering different spatial regions and, as described in the main text, this led him to suggest that entanglement entropy may be the origin of the Bekenstein–Hawking formula.
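A minimal numerical sketch (ours, not from the article) makes the point concrete: for a two-qubit Bell state the global state is pure, yet tracing out one qubit leaves the other with a full bit of entanglement entropy.

```python
# Entanglement entropy of a Bell state: von Neumann entropy of the reduced
# density matrix of one qubit, exactly as described in the text.
import numpy as np

# |psi> = (|00> + |11>)/sqrt(2) in the basis {|00>, |01>, |10>, |11>}
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)

rho = np.outer(psi, psi.conj())                            # density matrix of the whole (pure) state
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)    # partial trace over the second qubit

p = np.linalg.eigvalsh(rho_A)
p = p[p > 1e-12]
S = -np.sum(p * np.log2(p))                                # entanglement entropy in bits
print(S)   # 1.0 -- one full bit, even though the global state has zero entropy
```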
An unexpected property of skyrmions has been discovered by physicists in Germany and the US. Skyrmions are particle-like regions within a field where all of the field vectors point either towards or away from a single point. They were originally proposed in the 1950s by British physicist Tony Skyrme to explain aspects of particle physics. Researchers have since discovered that some collective excitations of electron spins in solids behave much like skyrmions. These are “topologically stable”, which means that skyrmions are difficult to destroy. This makes them very attractive for use in “racetrack memories” in which data could be encoded in skyrmions that are then moved along a track using electrical currents. Now, Kai Litzius and colleagues at the Johannes Gutenberg University in Mainz and the Massachusetts Institute of Technology have shown that skyrmions can be reliably moved along a racetrack made of layered thin films. Using time-resolved X-ray microscopy, they were also able to measure the “skyrmion Hall angle” between the direction of the electrical current and the motion of the skyrmions. They were surprised to find that the angle depended upon the velocity of the skyrmions – something that had not been expected. Writing in Nature Physics, the physicists speculate that the unexpected property could be caused by the deformation of skyrmions.
Antineutrino detector could monitor nuclear fuel
An antineutrino detector could be used to monitor what type of fuel is being used in a nuclear reactor, according to Patrick Jaffke and Patrick Huber at Virginia Tech in the US. They say that this could be used to confirm that nuclear-weapons material is being destroyed in a reactor. Nuclear fission produces copious numbers of antineutrinos and several reactors worldwide are used as sources for neutrino-physics experiments. It turns out that different nuclides – such as isotopes of uranium and plutonium – produce antineutrinos at different energies, so the composition of nuclear fuel could be determined by measuring the energy spectrum of emitted antineutrinos. However, making such measurements has been impractical because antineutrinos interact very rarely with matter and therefore a huge and expensive detector would be required. Now, Jaffke and Huber have calculated that a relatively small neutrino detector weighing in at just five tonnes could monitor a reactor if it is placed 25 m from the core and takes measurements over several 90 day periods. In a preprint on arXiv, the pair argue that such a detector could differentiate between four major reactor fuels at a statistical certainty of 95%. They also say that the system can differentiate between the burning of mixed-oxide (MOX) fuel with high levels of plutonium and normal MOX fuel. This could be used to verify that plutonium – which could be used in nuclear weapons – is being destroyed.
Dark-matter pioneer Vera Rubin dies at 88
The US astronomer Vera Rubin, who presented convincing evidence of the existence of dark matter, has died at the age of 88. Born in Philadelphia, Pennsylvania, Rubin completed a degree in astronomy at Vassar College in New York state before studying for a masters degree in physics at Cornell University. After graduating in 1951, she then headed to Georgetown University in Washington DC to carry out a PhD in astronomy, which she completed in 1954. After working at the university until 1962, Rubin moved to the Carnegie Institution of Washington, where she remained for the rest of her career. Working with the astronomer Kent Ford in the 1970s, Rubin studied the rotation curves of galaxies and uncovered a discrepancy between the predicted and observed orbital speeds of stars and gas far from galactic centres, concluding that a huge amount of unseen mass – now known as dark matter – must be holding galaxies together.
Sidney Drell: 1926–2016
Sidney Drell, a former deputy director of the Stanford Linear Accelerator Center (SLAC) in the US, has died at the age of 90. Born in Atlantic City, New Jersey, Drell graduated with a degree in physics from Princeton University in 1946 before receiving a PhD from the University of Illinois, Urbana-Champaign, in 1949. After a stint teaching physics at Stanford University in 1950, he headed to the Massachusetts Institute of Technology before returning to Stanford in 1956. From 1969 until he retired in 1998, Drell served as deputy director of SLAC, which is now called the SLAC National Accelerator Laboratory. Apart from making major contributions to quantum electrodynamics and high-energy particle physics, Drell was also an expert in nuclear-arms control and cofounded the Center for International Security and Arms Control (now the Center for International Security and Cooperation). Drell was also an original member of JASON, a group of academic scientists that advises the government on national security and defence issues.
Fermilab pioneer Ned Goldwasser dies at 97
Ned Goldwasser, who was the first deputy director of Fermilab, has died at the age of 97. Beginning in 1967, Goldwasser oversaw the construction of the National Accelerator Laboratory (renamed Fermilab in 1974) near Chicago, scheduled its experimental programme, and managed its Program Advisory Committee. Working with director Robert Wilson, he also created and implemented Fermilab’s pioneering employment equality programme. Goldwasser stepped down as deputy director in 1978 and returned to his academic career at the University of Illinois. He also worked on the cancelled Superconducting Super Collider project as well as the LIGO gravitational-wave collaboration. Goldwasser attended Harvard University before serving in the US Navy. He received his PhD in physics from the University of California, Berkeley and joined the University of Illinois in 1951.
You can find all our daily Flash Physics posts in the website’s news section, as well as on Twitter and Facebook using #FlashPhysics.
Happy New Year from all the team at Physics World!
To get things off to a cracking start, check out the January issue of Physics World magazine, which has a wonderful feature by Patrick Hayden and Robert Myers about how the study of “qubits” – quantum bits of information – could be key to uniting quantum theory and general relativity. The issue is now live in the Physics World app for mobile and desktop, and you can also read the article on physicsworld.com from tomorrow.
Elsewhere in the new issue, you can discover how physicists have waded into the debate over whether magnetic fields can control neurons and enjoy a great feature on why some birds don’t kick out intruder cuckoo eggs.
You can also find out just why so many physicists are worried about Donald Trump’s imminent inauguration as US president.
“Badass”. That was the word Harvard University neuroscientist Steve Ramirez used in a Tweet to describe research published online by fellow neuroscientist Ali Güler and colleagues in the journal Nature Neuroscience last March. Güler’s group, based at the University of Virginia in the US, reported having altered the behaviour of mice and other animals by using a magnetic field to remotely activate certain neurons in their brains. For Ramirez, the research was an exciting step forward in the emerging field of “magnetogenetics”, which aims to use genetic engineering to render specific regions of the brain sensitive to magnetism – in this case by joining proteins containing iron with others that control the flow of electric current through nerve-cell membranes.
By allowing neurons deep in the brain to be switched on and off quickly and accurately as well as non-invasively, Ramirez says that magnetogenetics could potentially be a boon for our basic understanding of behaviour and might also lead to new ways of treating anxiety and other psychological disorders. Indeed, biologist Kenneth Lohmann of the University of North Carolina in the US says that if the findings of Güler and co-workers are confirmed then magnetogenetics would constitute a “revolutionary new tool in neuroscience”.
The word “if” here is important. In a paper posted on the arXiv preprint server in April last year and then published in a slightly revised form in the journal eLife last August, physicist-turned-neuroscientist Markus Meister of the California Institute of Technology laid out a series of what he describes as “back-of-the-envelope” calculations to check the physical basis for the claims made in the research. He did likewise for an earlier magnetogenetics paper published by another group in the US as well as for research by a group of scientists in China positing a solution to the decades-old problem of how animals use the Earth’s magnetic field to navigate – papers that were also published in Nature journals.
In all three cases, Meister finds that the relevant magnetic interactions are between five and 10 orders of magnitude too weak to account for the claimed effects. As such, he concludes, the “claims are in conflict with basic laws of physics”. He adds: “If the reported phenomena do in fact occur, they must have causes entirely different from the ones proposed by the authors.”
Lighting the way: Magnetogenetics follows in the footsteps of the established technique of optogenetics, in which neurons can be controlled by inserting a light source (left) into the brain (right). (Courtesy: Professor John Rogers, University of Illinois/Science Photo Library)
The three groups under attack have defended themselves by arguing that experiment must trump theory, particularly in the messy world of biology. But Meister insists that it is vital to offer a viable physical explanation of the purported effects because, he argues, claims that violate the laws of physics often turn out to be wrong – as was the case, he says, with the idea of “water memory” put forward by French immunologist Jacques Benveniste in the 1980s to explain homeopathy.
For Meister, the issues at stake regard the sociology of science as much as they do the science itself, particularly what he sees as high-profile journals’ unhealthy appetite for eye-catching, as opposed to robust, research. He criticizes the authors for not having carried out simple calculations to check their results, but above all takes aim at the journal editors and reviewers, who he argues need a far more rigorous approach to vetting submitted manuscripts. For the research in question, he maintains, their level of quality control “borders on malpractice”.
The attraction of magnetism
Doctors currently use a number of electrical and magnetic techniques for stimulating the brain in order to treat psychological, neurological and other conditions. However, each technique has its downsides, such as the need to place electrodes inside people’s skulls and the inability to target specific groups of neurons. Scientists have therefore been using genetic engineering to try to improve targeting. One option is to make neurons sensitive to certain drugs, but this method tends to be slow-acting and imprecise. A quicker, more accurate alternative, known as optogenetics, involves controlling neurons using light, but this requires optical fibres to be inserted into the brain.
The virtue of a magnetic technique lies in magnetic fields’ very weak interaction with biological tissue, which means they can penetrate deep inside the brain unguided by wires, fibres or any other conduit. But this weak interaction is also a curse as it makes intercepting fields where required very hard to do. Nevertheless, Güler’s group reckoned it had found the answer by fusing two proteins together. One of these, known as ferritin, is paramagnetic, which means that it becomes magnetized when placed in an external magnetic field. The other protein, called TRPV4, acts as an “ion channel” across cell membranes – a “gateway” for ions, which can be opened or closed by mechanical forces. In the presence of a static magnetic field, the researchers reasoned, the magnetized ferritin would “tug open” TRPV4’s central pore and allow an electric current to flow out of nerve cells.
In their paper (2016 Nature Neurosci. 19 756), Güler and colleagues list numerous lines of experimental evidence to back up their claims. For example, they report delivering a DNA sequence, containing the gene encoding the TRPV4–ferritin complex, into zebrafish larvae (targeting trunk and tail neurons), and finding that they could make the larvae coil by placing them in an aquarium with a magnetic field applied. The researchers also showed they could entice mice into a magnetic chamber by giving them a virus with the same gene, but which was instead targeted at a region of the animals’ brain containing dopamine-releasing neurons.
This research followed an earlier article (2015 Nature Med. 21 92) – one of the other two papers Meister has scrutinized – in which geneticist Jeffrey Friedman of Rockefeller University in New York and colleagues described how they instead fused ferritin to TRPV1, an ion channel that responds to changes in temperature. The idea here was to use a high-frequency magnetic field to continually flip the spins of the constituent iron atoms so that they dissipate heat when dropping back to their equilibrium position. The resulting rise in temperature would then open the ion channel and allow a current to flow.
Friedman’s group reports the results of several experiments, including one in which it genetically engineered stem cells and viruses in order to deliver genes encoding the TRPV1–ferritin protein complex, together with genes encoding insulin, into diabetic mice. Subsequently exposing those mice to either radio waves or a magnetic field reduced their blood sugar levels, the researchers found.
Lacking moments
These papers are not the first in which scientists have reported being able to switch neurons on and off by exposing iron-based nanoparticles in the brain to magnetic fields. Among others, both Friedman’s group (2012 Science 336 604) and one at the Massachusetts Institute of Technology led by materials scientist Polina Anikeeva (2015 Science 347 1477) reported activating TRPV1 magnetically. However, they did so using synthetic nanoparticles made from the iron oxide magnetite.
Because these particles are made outside the brain and then fed into it – in the case of Anikeeva’s group, simply using a syringe – they cannot be easily targeted at specific types of neuron. The research scrutinized by Meister is designed to overcome this problem by using genetic engineering to enable the creation of magnetic particles within brain cells. However, those particles – ferritin – are magnetic tiddlers. Each magnetite particle measures at least 20 nm across and contains several hundred thousand closely packed iron atoms, which interact strongly with one another and give the magnetite an inherent magnetic moment. Ferritin, in contrast, is a spherical shell surrounding a roughly 5 nm-diameter core containing just a few thousand, loosely packed iron atoms that lack the mutual interaction needed to generate a magnetic moment in the absence of an external field.
In his analysis, Meister makes a series of calculations showing ferritin to be completely unsuited to magnetogenetics. He points out that ferritin’s “induced” moment is proportional to the strength of the external field that is applied, and as such is many orders of magnitude smaller than that of magnetite.
1 Forces at play
(Courtesy: CC BY 4.0 eLife)
Physicist-turned-neuroscientist Markus Meister critically examined four possible scenarios in which a magnetic field could be used to induce magnetic moments in ferritin particles (green) in order to open the ion channel TRPV4 (pink). He calculated the forces created in each case, concluding that they would all be too weak to open TRPV4. The magnetic field B induces a moment m in the ferritin core, leading to a force F or a torque N on the ferritin particle, with the resulting forces tugging on the channel.
In Meister’s paper (2016 eLife 5 e17210), he first works through four scenarios (figure 1) in which a magnetic field could impart a mechanical force on the ferritin, concluding, contrary to the claim by Güler’s group, that any such force would be far too puny to open TRPV4. He shows that the force created by a field gradient pulling on the ferritin (figure 1a) would be nine orders of magnitude smaller than that needed to open the (well-studied) ion channels in auditory hair cells, while the interaction between neighbouring ferritin particles (figure 1b) would fall short by eight orders of magnitude. To be on the safe side, he also calculates an absolute thermal limit in the second scenario. (This is a measure of how much the system is jiggling thanks to its thermal energy. For an effect to be significant – a signal, rather than noise among the jiggling – it has to be of similar or larger magnitude.) But Meister reaches a similar result: the energy imparted by the field would be about 100 million times smaller than the ion channel’s thermal energy.
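The flavour of such a back-of-the-envelope check is easy to reproduce. The sketch below is our own illustration, not Meister’s calculation, and every number in it – the effective ferritin moment, the applied field, the field gradient and the channel-gating force – is an assumed, order-of-magnitude value; even so, the force comes out hundreds of millions of times too small.

```python
# Rough order-of-magnitude check in the spirit of Meister's scenario (a):
# force on a paramagnetic ferritin particle in a magnetic field gradient,
# compared with the ~piconewton forces known to gate mechanosensitive channels.
# All parameter values below are illustrative assumptions.
k_B  = 1.38e-23          # Boltzmann constant, J/K
T    = 310.0             # body temperature, K
mu_B = 9.27e-24          # Bohr magneton, J/T

mu_eff = 300 * mu_B      # assumed effective moment of one ferritin core
B      = 0.5             # applied field, T (assumed)
grad_B = 10.0            # field gradient, T/m (assumed)

# Thermally averaged induced moment of a superparamagnetic particle (linear regime)
m_induced = mu_eff**2 * B / (3 * k_B * T)

F_magnetic = m_induced * grad_B      # force from the field gradient, N
F_channel  = 1e-12                   # ~1 pN, typical channel-gating force

print(f"induced moment : {m_induced:.1e} J/T")           # ~3e-22 J/T
print(f"magnetic force : {F_magnetic:.1e} N")            # ~3e-21 N
print(f"shortfall      : {F_channel / F_magnetic:.0e}")  # ~10^8-10^9 with these assumed numbers
```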
Multiple ferritin particles pulling on a cell’s membrane would generate a stress at least a million times smaller than that needed
Meister finds that even the best of the four scenarios, in which ferritin feels a torque by virtue of being more easily magnetized along a certain axis (figure 1c), would see magnetic energy four orders of magnitude smaller than thermal energy. Last, he calculates that multiple ferritin particles pulling on a cell’s membrane (figure 1d) would generate a stress at least a million times smaller than that needed to deform the membrane so as to create ion channels across it.
Turning next to the results from Friedman and colleagues, Meister explains that magnetic heating of nanoparticles becomes very inefficient for particles below about 10 nm in diameter. Indeed, he says that previous research has shown ferritin’s “specific loss power” – the heating power generated per unit mass of material – to be “too low to be measurable”. To “keep the argument alive”, as he puts it, he instead considers the heat given off by an artificial form of ferritin filled with cobalt-doped manganite instead of straightforward iron.
He calculates that the temperature rise on the material’s surface in Friedman’s experiments would be a mere 10⁻¹⁰ K – more than 10 orders of magnitude less than the roughly 5 K needed to activate a TRPV1 ion channel. As a further check, he works out the overall effect on a nerve cell due to the combined heating of 10,000 ferritin particles crammed onto the cell’s surface. Here too the temperature rise would be minuscule – about 10⁻⁹ K.
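Again, a crude sketch shows why single-particle heating gets nowhere near the required temperature jump. The steady-state temperature rise at the surface of a small sphere dissipating heat into water falls off as the particle shrinks; plugging in a deliberately generous specific loss power (the values below are assumptions, not Meister’s) still leaves a rise of order 10⁻¹⁰ K.

```python
# Illustrative single-particle heating estimate (assumed, generous inputs --
# not Meister's own numbers). Steady-state surface temperature rise of a
# small sphere dissipating power P into water: dT = P / (4 pi kappa r).
import math

kappa = 0.6      # thermal conductivity of water (W/m/K)
r     = 4e-9     # particle radius (m), i.e. an ~8 nm particle
rho   = 5000.0   # particle density (kg/m^3), typical of iron oxides
SLP   = 1000.0   # specific loss power (W/kg); deliberately generous, since
                 # native ferritin's value is reported as too low to measure

volume = 4/3 * math.pi * r**3
mass   = rho * volume
P      = SLP * mass                      # heat dissipated by one particle (W)
dT     = P / (4 * math.pi * kappa * r)   # surface temperature rise (K)

print(f"power per particle       : {P:.1e} W")
print(f"surface temperature rise : {dT:.1e} K")   # of order 1e-10 K, vs ~5 K needed
```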
For Meister, these disparities rule out any possibility that the effects claimed by the two groups are caused by magnetic interactions. “The discrepancy in these cases is like claiming to have built a perpetual motion machine,” he says. “Even the patent office no longer accepts such claims because they agree that we understand the physics well enough to dismiss them.”
Off course
Meister only penned his critique of the magnetogenetics research after having first read the paper on animal navigation, which was published by Xie Can of Peking University and colleagues in Nature Materials online in 2015 (15 217). As he puts it, the two subjects are “closely linked, because uncovering nature’s method for magnetosensation can point the way to effectively engineering magnetogenetics”.
He says that he immediately knew the paper by Xie and co-workers “was wrong”, having worked on the navigation of bacteria as a PhD student under biophysicist Howard Berg. He explains that only in the case of what are known as magnetotactic bacteria is it well understood how an organism can sense and make use of the Earth’s weak magnetic field. These creatures act like tiny compass needles by synthesizing and lining up crystals of magnetite, which allows them to follow magnetic field lines down to their preferred habitat – the muck at the bottom of ponds.
The proposal by Xie’s group also involves a compass-needle-like structure, but in this case the structure consists of a single molecule: a rod formed from an iron–sulphur protein surrounded by light-sensitive proteins. The researchers claim that the molecule has an intrinsic magnetic moment, having observed its orientation parallel to the Earth’s magnetic field using electron microscopy. They also report that the genes encoding the magnetic core are found in many animal species.
Light up Zebrafish young – shown here in a ×30 scanning electron micrograph – are transparent, which makes them useful for scientific research. (Courtesy: Steve Gschmeissner/Science Photo Library)
However, as Meister points out, the magnetism of the iron–sulphur protein would be puny even by the standards of ferritin – the molecule contains a mere 40 iron atoms spread out over 24 nm. At any temperature above a few kelvin, he argues, the atoms’ thermal energy would randomize their spin directions, so preventing the formation of a magnetic domain and with it a permanent magnetic moment. He goes on to work out that even if the 40 atoms did somehow manage to form a domain, there is still no way they could line up with the Earth’s magnetic field because their interaction with it would be 100 000 times weaker than their thermal motion. “Clearly the reported observations must arise from some entirely different cause, probably unrelated to magnetic fields,” he writes.
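That last comparison is straightforward to reproduce. Even granting the rod the best case – all 40 iron atoms fully aligned, each contributing roughly five Bohr magnetons (an assumption made here purely for the sketch) – the energy of the resulting moment in the Earth’s ~50 µT field is dwarfed by thermal energy at body temperature.

```python
# Rough check of the biocompass claim (best-case assumptions for illustration):
# energy of the rod's moment in the Earth's field versus thermal energy.
mu_B    = 9.274e-24   # Bohr magneton (J/T)
k_B     = 1.381e-23   # Boltzmann constant (J/K)
T       = 300.0       # temperature (K)
B_earth = 50e-6       # Earth's magnetic field (T)

m = 40 * 5 * mu_B     # 40 iron atoms, ~5 mu_B each, all perfectly aligned (best case)
U_magnetic = m * B_earth
U_thermal  = k_B * T

print(f"magnetic energy : {U_magnetic:.1e} J")
print(f"thermal energy  : {U_thermal:.1e} J")
print(f"thermal / magnetic ratio : {U_thermal/U_magnetic:.0e}")   # tens of thousands
```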
Searching for wiggle room
In responding to Meister’s criticisms, all three groups insist on the primacy of experimental results. In a written statement sent to eLife, the two American groups in his cross hairs argue that “the intrinsic complexity of biologic processes” can limit the utility of theoretical calculations, and assert that “mathematical theory needs to accommodate the available data, not the other way around”. They express surprise that Meister would “stridently question” the validity of the two independent data sets without carrying out any experiments of his own.
As to exactly what they think is responsible for their observations, they won’t say. They give no hint in their written statement – simply saying that “the precise mechanism is undetermined” – and declined to answer more specific questions regarding this and other aspects of their work put to them by Physics World. The one thing that Friedman did add, via e-mail, was that he and the rest of his group “do not believe this [i.e. the mechanism in their experiment] to be a thermal effect”.
Xie, meanwhile, argues that it is “extremely challenging” to do simple calculations on the complex system he and his colleagues have studied. He maintains that the light-sensitive proteins can regulate the magnetic sensitivity of the iron–sulphur core, and that therefore the system’s overall magnetic moment cannot be calculated by considering the properties of the 40 iron atoms in isolation. “The data are what they are,” he says. “All we have to do is to interpret it carefully, or find an answer for it.”
Lohmann points out that physicists have sometimes been mistaken in the past when making elementary analyses of biological phenomena, most notably when dismissing the very idea that animals could detect the Earth’s magnetic field. In the 1960s they had argued that the terrestrial field interacts too weakly with biological tissue to be detectable, but Lohmann says that almost all scientists now accept the reality of animals’ magnetic sense. He notes that there are two different mechanisms that could at least in principle provide the sensitivity needed, one involving interactions with magnetite and the other relying on extremely complex chemical reactions featuring pairs of free radicals. “In that case simple back-of-the-envelope calculations yielded incorrect answers because the phenomenon involved mechanisms that people hadn’t thought of,” he says.
Taking control In the nascent field of magnetogenetics, researchers are trying to control genetically engineered neurons deep in the brain using magnetic fields. (Courtesy: Dennis Kunkel Microscopy/Science Photo Library)
However, Lohmann maintains that the current dispute is a little different. He says that “time will tell” who is right, but argues that the magnetic interactions being claimed by the two American groups are “fairly straightforward” and adds that much is known about the forces needed to open ion channels – circumstances, he says, that “leave less room for unknown factors to complicate the analysis”. He also says that questions remain about the proposed biocompass put forward by Xie’s group, particularly whether such a structure actually exists in any animal and just how the protein complex would convert magnetic-field information into an electrical signal that could be interpreted by the brain.
Irreproducible, so far
Another person critical of the three papers is Anikeeva. She co-authored an article with fellow MIT researcher Alan Jasanoff for eLife endorsing Meister’s critique, and argues it was “premature” of the various groups to claim a magnetic origin for the effects they observed (2016 eLife 5 e19569). She also says that Friedman and colleagues are wrong to state that they stimulated neurons with radio waves, pointing out that the kind of solenoid they used to generate alternating magnetic fields creates negligible electric fields. “These devices don’t produce radiation,” she says.
(Friedman’s colleague Jonathan Dordick, a biochemical engineer at the Rensselaer Polytechnic Institute in upstate New York, replied in an e-mail that their device “creates an electromagnetic field, which by definition generates radiation”. He added: “Whether or not one refers to this as radio waves is a semantic issue that does not alter our experimental results.”)
On the research discussion website PubPeer, meanwhile, people have taken aim at the experimental procedures and statistical analyses employed by the three groups, particularly those of Güler. One anonymous user argued that Güler and colleagues had failed to disentangle the effects on zebrafish larvae of magnetism and bright light, given that the latter, which is also known to induce coiling, appeared to be present only when the magnet was switched on. (Güler responded in an e-mail by saying that the difference in lighting was only apparent and was caused by reflections when the magnets were present.)
To encourage others to try and replicate their results, Güler distributed copies of DNA used by his group to a number of other scientists. But as Physics World went to press, there had been no reports of anyone else confirming the claims of his group or those of the other two. Indeed, Meister says that he knows of several other groups that have tried to reproduce one or other of the claimed results but that none has so far succeeded.
Meister praises Güler for stimulating further experimental work, but believes that the original research should never have been published in the first place. He acknowledges that biologists might not have time to “keep their physics skills sharp” but argues that they should at least thrash out their ideas with physicist colleagues. Had they done so and, as a result, decided not to go ahead with their experiments, he maintains “they would have saved themselves and would-be imitators a lot of time”.
Above all, however, Meister blames the journals. He says there is good evidence that a “large fraction” of all papers published in high-profile journals are “wrong or irreproducible”, in part, he maintains, because their editors “thirst for ‘novelty’”. He also says that he has had little joy previously in persuading editors to publish corrections or commentaries on questionable research. “Journals have very little interest in drawing attention to critiques of great stories that they have published before,” he says. “These things get swept under the rug as much as possible.”
(Physics World sent e-mails to the chief editors of the three journals that published the disputed papers but got no response besides that of a spokesperson for their publisher, Springer Nature. The spokesperson said that for “confidentiality reasons” staff were not able to comment on the “editorial history or review process” of any paper published in a Nature journal.)
Future promise
Although Meister and the other critics lament what they see as the three groups’ misunderstanding of basic physics, they nevertheless underline the importance of continued research in the field. Anikeeva says she is “prepared to give the benefit of the doubt” regarding Friedman’s and Güler’s experimental results, arguing that other groups should carry out experiments of their own to try and understand just what the underlying physical mechanisms could be. Her group will do its bit, she says, by systematically exposing iron-containing proteins to different combinations of magnetic field, illumination and chemical conditions.
Special sense Magnetotactic bacteria contain organelles called magnetosomes (yellow) where crystals of magnetite are produced, enabling these organisms to navigate using the Earth’s magnetic field. (Courtesy: Dennis Kunkel Microscopy/Science Photo Library)
And while Meister maintains that for the most part ferritin’s induced magnetic moment is too feeble to enable practical realizations of magnetogenetics, he does think that one of the mechanisms he scrutinized could be used in the future. He estimates that a magnetic field of about 5 T – some 100 times larger than that used by Güler’s group – might be big enough to stimulate membrane currents by exploiting ferritin’s anisotropy, adding that stretching ferritin proteins into rod-like shapes would enhance the effect. More generally, he says, replacing ferritin with particles possessing a permanent magnetic moment, such as bacterial magnetite, “may offer a physically realistic route to magnetogenetics”.
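The scaling behind that 5 T estimate is simple: because ferritin’s induced moment grows linearly with the applied field, the anisotropy (torque) energy – the moment times the field – scales as the field squared. The sketch below assumes the original experiments used a field of roughly 50 mT; on that assumption, a 100-fold larger field buys a 10,000-fold boost in energy, enough to close the four-order-of-magnitude shortfall Meister identified for that scenario.

```python
# Illustrative scaling argument (the 50 mT experimental field is an assumption):
# induced moment ~ B, so anisotropy (torque) energy ~ m*B ~ B^2.
B_experiment = 0.05   # field scale assumed for the original experiments (T)
B_proposed   = 5.0    # field suggested by Meister (T)

boost = (B_proposed / B_experiment) ** 2
print(f"energy boost factor: {boost:.0e}")   # ~1e4
```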
It is this potential for making magnetogenetics a practical proposition that renders publication of the three controversial papers “unfortunate”, argues Meister. “Now that the prize for magnetogenetics has seemingly been taken, what motivates a young scientist to focus on solving the problem for real?” he asks. What can help, he maintains, is peer review of papers after they have been published, via websites such as PubPeer. That can “reopen the claimed intellectual space for future pioneers”, he writes.
Meister accepts the possibility that the experimental data obtained by Güler, Friedman and their colleagues are in fact correct, and that, if so, the researchers may have stumbled across new physics. That would be “fantastic if true”, he says, “because it would fundamentally change our understanding of nanoscale matter” and, he adds, undoubtedly earn them a Nobel prize. But he reckons the odds on that are slim, particularly given the amount that scientists already know about very small-scale magnetism thanks to industry’s push for ever higher density disk drives. “I would appreciate being wrong but unfortunately I don’t think that is going to happen,” he says.