Single-atom gates open the door to quantum computing

A quantum-information analogue of the transistor has been unveiled by two independent groups in Germany and the US. Both devices comprise a single atom that can switch the quantum state of a single photon. The results are a major step towards the development of practical quantum computers.

Unlike conventional computers, which store bits of information as definite values of 0 or 1, quantum computers store information in qubits, which can exist in a superposition of both values. When qubits are entangled, any change in one immediately affects the others. Qubits can therefore work in unison to solve certain complex problems much faster than their classical counterparts.

Qubits can be created from either light or matter, but many researchers believe that the practical quantum computers of the future will have to rely on interactions between both. Unfortunately, light tends only to interact with matter when the light is very intense and the matter is very dense. To make a single photon and a single atom interact is a challenge because the two are much more likely to pass straight through each other.

Chambers of light

In 2004 physicists Jeff Kimble of the California Institute of Technology and Luming Duan of the University of Michigan proposed a scheme to make it work. Their idea was to place an atom inside an optical cavity – a tiny mirrored chamber whose walls are separated by a distance comparable to the wavelength of light. If a photon incident on the cavity has just the right wavelength to make the cavity resonate, it will enter the cavity, bounce between the mirrors and come back out again. In this process, the waveform of the departing photon gets shifted along a little – it experiences a “phase shift”.

The trick is that the resonance of the cavity depends on the state of the atom. If the atom is in a different state, the cavity does not resonate with the incident photon, and the photon simply bounces off without ever acquiring a phase shift. In this way, the state of the atom controls the phase of the outgoing photon. This is like a transistor in a conventional computer, in which a gate voltage controls the flow of electric current.
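To make that logic concrete, here is a minimal sketch (in Python, with invented parameters) of the idealized conditional-phase operation described above: the photon qubit acquires a phase shift only when the atom is in the coupling spin state. A full π phase shift is assumed purely for illustration, and the abstract two-qubit encoding glosses over the real cavity physics.

```python
import numpy as np

# Idealized sketch of the "transistor-like" conditional-phase gate: the photon
# qubit picks up a phase only when the atom qubit is in the coupling state.
# A full pi phase shift is assumed here purely for illustration.

def conditional_phase(phi=np.pi):
    """4x4 unitary on the (atom, photon) two-qubit basis |00>, |01>, |10>, |11>."""
    U = np.eye(4, dtype=complex)
    U[3, 3] = np.exp(1j * phi)  # phase applied only to the |atom=1, photon=1> component
    return U

plus = np.array([1, 1]) / np.sqrt(2)   # equal superposition for each qubit
state = np.kron(plus, plus)            # both atom and photon in superposition
print(conditional_phase() @ state)     # [0.5, 0.5, 0.5, -0.5]: an entangled state
```

Applied to an atom and a photon that each start in an equal superposition, the conditional phase leaves the pair in an entangled state, which is the kind of atom–photon entanglement the Garching group reports.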

A decade on, Stephan Ritter and colleagues at the Max Planck Institute of Quantum Optics in Garching have implemented Kimble and Duan’s proposal using an optical “Fabry–Pérot” cavity, consisting of two curved mirrors roughly half a millimetre apart. Meanwhile at Harvard University and the Massachusetts Institute of Technology, Mikhail Lukin and colleagues have implemented the proposal on a silicon chip with a cavity measuring just a few microns in size, which further enhances the photon–atom interaction. In both demonstrations, it is the spin of the trapped atom – which can be up or down – that controls the resonance of the cavity.

Superposition and entanglement achieved

Both groups have shown that they can prepare the atom in a superposition of up and down spins, therefore allowing – in principle, at least – quantum logic operations to be performed. Ritter and colleagues went one step further by demonstrating that their gate generates entanglement between the atom and photon, so that qubits of information can be transferred from one to the other.

Klemens Hammerer of Leibniz University in Germany believes that both experiments represent a “breakthrough”, but warns that they are, as yet, only proofs of principle. “The set-up involved – simple as it sounds – comes with a large technical overhead: the experiments typically fill an entire laboratory,” he says. “For real-life applications of optical quantum-information processing, one would require a large number of photons, which can be brought to interaction one by one,” says Hammerer. “The optical quantum computer is not yet round the corner. But these experiments at least give a direction.”

Both groups are now attempting to link several atoms in optical cavities to build a prototype quantum network, or a prototype quantum computer. “As a first step, we are currently working on positioning two atoms inside the same optical cavity, with the goal of using light in the cavity to perform a quantum gate operation between the two atoms,” says Jeff Thompson of Lukin’s group.

The research is published in separate papers in Nature.

BOSS uses 164,000 quasars to map expanding universe

By Calla Cofield at the APS April Meeting in Savannah, Georgia

Scientists looking at data from the Baryon Oscillation Spectroscopic Survey (BOSS), the largest programme in the third Sloan Digital Sky Survey, have measured the expansion rate of the universe 10.8 billion years ago – a time prior to the onset of the accelerated expansion caused by dark energy. It is also the most precise measurement of the universe’s expansion rate ever made, with an uncertainty of just 2%. The results were announced at a press conference at the APS’s April Meeting on Monday, at the same time that they were posted on the arXiv preprint server.

The rate of universal expansion has changed over the course of the universe’s lifetime. It is believed to have gradually slowed down after the Big Bang, but mysteriously began accelerating again about 7 billion years ago. BOSS and other observatories have previously measured expansion rates going back 6 billion years.

To measure astronomical distances, astronomers often use so-called “standard candles” – supernovae with known luminosities. The difference between the known luminosity and the apparent luminosity indicates the supernova’s distance. The BOSS results instead measure the expansion of the universe using a “standard ruler” – a known distance between celestial objects. The expansion rate can be deduced when the known distance between two objects is compared with their apparent separation and with their redshifts (the degree to which the light from those objects is stretched by the expansion of the universe).

The “standard ruler” in this case is the imprint left over from sound waves in the early universe, also known as baryonic acoustic oscillations, or BAOs. These sound waves should have created regularly spaced areas of high and low density in regular matter. For example, scientists see an excess of pairs of galaxies separated by about 450 million light-years – the length of the BAO ruler.
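As a back-of-envelope illustration of the standard-ruler idea (not the BOSS analysis itself), the sketch below converts the roughly 450 million-light-year BAO scale quoted above into an approximate distance and expansion rate, using the textbook relations Δθ ≈ r/D across the line of sight and Δz ≈ H(z)·r/c along it. The angular and redshift separations are invented example values.

```python
# Back-of-envelope sketch of the "standard ruler" idea, not the BOSS analysis
# itself. The example angular and redshift separations below are invented;
# only the ~450 million light-year BAO scale comes from the text.

C_KM_S = 299_792.458            # speed of light in km/s
LY_PER_MPC = 3.262e6            # light-years per megaparsec
r_bao_mpc = 450e6 / LY_PER_MPC  # BAO ruler, ~138 Mpc (comoving)

# Transverse: an observed angular separation gives a (comoving) distance.
delta_theta_rad = 0.03          # hypothetical angular separation in radians
distance_mpc = r_bao_mpc / delta_theta_rad
print(f"comoving distance ~ {distance_mpc:.0f} Mpc")

# Radial: an observed redshift separation gives the expansion rate H(z),
# via delta_z ~ H(z) * r / c.
delta_z = 0.11                  # hypothetical redshift separation along the line of sight
H_z = delta_z * C_KM_S / r_bao_mpc
print(f"H(z) ~ {H_z:.0f} km/s/Mpc")
```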

The BOSS experiment looked at the distance between quasars and nearby rings of gas. Quasars – galaxies with a supermassive black hole at their centre – are some of the brightest objects in the universe as a result of radiation from tremendous amounts of material falling into the black hole. Because the quasars formed in areas particularly dense with gas and dust, the imprint of the BAO is strong: it appears as a ring of gas roughly 450 million light-years away from the centre of the quasar.

“The scale of that ring is precisely this baryonic acoustic oscillation…and that’s what we’re trying to measure,” says Andreu Font-Ribera of Lawrence Berkeley National Laboratory and one of the authors on the new paper.

Quasars not only provide an imprint of the BAO, they are also some of the most visible objects at such great distances from Earth. Even supernovae or entire galaxies (of the non-quasar variety) are virtually invisible at a distance of 10.8 billion light-years from Earth. The BOSS team studied more than 164,000 quasars to make their measurement.

Patrick McDonald, a researcher at Lawrence Berkeley National Laboratory, also spoke at the press conference. McDonald, who is doing research on BOSS data but is not an author on the new paper, says that measuring the universe’s rate of expansion could help scientists crack the mystery of dark energy. “We call it dark energy but that’s really a placeholder at this point for us just not knowing how to explain this acceleration,” he explains. “To me it seems quite possible that it’s related to some fundamental hole in our understanding of physics.”

McDonald draws a comparison to the state of physics in the late 19th century, when inconsistencies in experiments attempting to measure the speed of light eventually gave way to entirely new worlds of physics, including quantum mechanics and general relativity. “This acceleration seems to me like one of our best clues to, possibly, something new like that,” says McDonald. “So that’s why we want to make these more accurate measurements.”

Acoustic metamaterial can be reconfigured in a jiffy

A metamaterial with acoustic properties that can be reconfigured in less than one tenth of a second has been made by researchers at the University of Bristol in the UK. Created by Mihai Caleap and Bruce Drinkwater, the device comprises tiny polystyrene spheres suspended in water. The spheres arrange themselves in a cubic lattice that is defined by criss-crossing acoustic standing waves. The lattice blocks sound at certain frequencies that depend on the spacing between the spheres and, with further development, it could be used to create lenses that focus sound or even acoustic cloaks.

Caleap and Drinkwater’s work builds on the experience physicists have gained over the last few decades in making “optical lattices” by shining criss-crossing laser beams through a dilute gas of ultracold atoms. Depending on the wavelength of the light and the type of atom, the atoms are drawn to regions of either high or low light intensity formed by standing waves of light. The result is a crystalline lattice with one atom per maximum or minimum that physicists can then use to study fundamental quantum phenomena.

A sound idea

The Bristol duo has now essentially done the same thing with standing waves of sound at ultrasonic frequencies. Using piezoelectric loudspeakers, they set up standing waves in the x, y and z directions in a fingertip-sized sample of water. This creates a cubic lattice of regions of high and low density that is analogous to the bright and dark regions of an optical lattice.

When tiny polystyrene balls about 90 µm in diameter are placed in the water, they settle at the nodes of the standing waves, creating a cubic lattice. The lattice spacing is related to the wavelength of the standing waves: for a 3.75 MHz ultrasound signal, for example, the spacing is about 279 µm.
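As a rough consistency check on those numbers, the sketch below computes the ultrasonic wavelength assuming a speed of sound in water of about 1480 m/s (our assumption). The node planes of a single standing wave sit half a wavelength apart, and the quoted 279 µm spacing is close to λ/√2; which combination corresponds to the reported lattice spacing depends on how the crossed beams are arranged, which we have not verified.

```python
# Rough check of the numbers quoted above, assuming the speed of sound in
# water is about 1480 m/s (an assumption; it varies with temperature).
import math

c_water = 1480.0          # m/s, assumed
f = 3.75e6                # Hz, ultrasound frequency from the text

wavelength = c_water / f
print(f"wavelength     ~ {wavelength * 1e6:.0f} um")                    # ~395 um
print(f"lambda/2       ~ {wavelength / 2 * 1e6:.0f} um")                # node spacing of one standing wave
print(f"lambda/sqrt(2) ~ {wavelength / math.sqrt(2) * 1e6:.0f} um")     # close to the quoted ~279 um
```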

Unique and useful

Microscope image of 90-micron diameter spheres arranged in a square lattice

To demonstrate that the system works as an acoustic metamaterial, the researchers measured its ability to transmit ultrasound waves between 2 and 12 MHz. For randomly arranged spheres, the transmission fell as expected at about 6 MHz and did not recover at higher frequencies. In the case of a square lattice of the same density, however, the transmission spectrum was very different. The researchers studied square lattices with three different spacings and found peaks and troughs in the spectra between 6 and 12 MHz.

This behaviour is indicative of a “phononic crystal”, in which sound at some frequencies passes freely through the material, whereas signals at other frequencies are reflected back. It is the same effect that occurs to electrons in crystalline materials – giving rise to electronic band structure – and also to light in photonic crystals.

Rapid reconfiguration

Although phononic crystals have been made before, Caleap and Drinkwater say that theirs is unique because it can be reconfigured in about 0.05 seconds by changing the frequency of applied sound waves. It could therefore be used to make a reconfigurable ultrasound filter for medical applications.

Although their phononic crystals operate at wavelengths in the 100 µm range, the technique could be extended to a much wider range of lattice spacings, even up to metres. Drinkwater says that the method will work on “nearly all” solid–fluid combinations and will enable “almost any geometry to be assembled”, while also being cheap and easy to integrate with other systems. In fact, the metamaterial is also expected to work on electromagnetic radiation at terahertz frequencies. This suggests that the technology could be used in filters and beam deflectors for security scanners that use this notoriously difficult-to-handle part of the electromagnetic spectrum.

Ultrasonic superlenses

The team is currently working on acoustic lenses that can be reconfigured in real time and that are capable of “super-resolution” imaging, meaning they could form part of a system that uses ultrasound to resolve features much smaller than the wavelength of the sound. To create such superlenses, Caleap and Drinkwater are using larger arrays of piezoelectric transducers that create standing waves over a wide range of frequencies. Such arrangements can create acoustic lattices with parameters that vary in space and therefore have the desired lensing properties.

The researchers have also found evidence that their suspensions of tiny balls share an important property with some metamaterials used to create electromagnetic invisibility cloaks: a negative index of refraction. This occurs because of a resonant interaction between the sound waves and individual spheres in the suspension.

The research is described in the Proceedings of the National Academy of Sciences.

Why axing practicals from science exams is a bad idea

I don’t know about you, but I look back rather nostalgically on the practical exams that I took as an 18-year-old as part of my A-levels in physics and chemistry. At the time, I wasn’t looking forward to them at all – they lasted three hours each and there was always the very real possibility of completely mucking up your experiment and/or dropping all your samples on the floor.

Although I’ve forgotten everything about my physics practical exam, the chemistry practical still sticks out in my mind. I remember making some needle-like crystals that, through amazing good fortune, turned out really well – certainly far better than the watery mush I’d created in my mock exams. So when I walked over to the other side of the lab to measure the temperature at which the crystals melted, they did so over a really narrow range – and presumably at the “correct” temperature too.

All in all, then, I was pretty relieved with that particular exam and just glad the ordeal was over. I’ve no idea how well I did in my practicals, but I must have done okay, as my overall A-level grades were good enough to get me into university.

Now, though, news has broken that Ofqual – the independent body that monitors exam standards in England – is to reform science A-levels so that the final grades are based entirely on written tests, with practical exams no longer counting towards a student’s final mark. Science practicals will still take place, but the scores will be recorded separately on exam certificates as either “pass” or “fail”. Practical exams currently make up 20–30% of a student’s final score.

According to The Times, Ofqual has reformed the system because it is concerned that students use Twitter and other social-media sites to discuss assignments set in the practical exams, which – for understandable reasons – they do not all perform at the same time. As a result, those pupils who do their practical exam later can gain an unfair advantage. Other concerns are that schools teach only a narrow range of experimental skills to suit what will come up in the exam and that practicals aren’t really the best way to monitor a student’s experimental ability.

Ofqual’s decision has come in spite of a campaign by learned societies, including the Royal Society, the Wellcome Trust and the Institute of Physics, which publishes Physics World, to keep practicals as part of the main A-level grade. They fear that the change, which will be introduced from next year, will discourage schools from carrying out experiments. They are also concerned that although experimental work will be recorded separately, universities will probably just ignore it – or not set much store by it – when deciding whether to accept a student onto a course.

Worst of all, the reforms seem to say that experiments are not integral to science, but a kind of weird adjunct to it. And I’m not entirely sure how just getting “pass” or “fail” recorded separately on a certificate is going to make anyone particularly want to shine in their practical exams. I can imagine someone who fails will just shrug their shoulders and put it down to being cack-handed on the day. As for someone who is experimentally gifted, just getting a “pass” isn’t exactly a great motivation to push themselves further.

Writing in the Guardian, Jonathan Osborne defends the new system, saying the current approach does not put enough emphasis on experimental design or interpreting the data. But I’m not convinced and I’d be interested to know how practical skills are marked elsewhere in the world. The bottom line is – do you think Ofqual has made the right call?

Have galactic ‘radio loops’ been mistaken for B-mode polarization?

“Radio loop” emissions, rather than signatures of the early universe, could account for the observation of B-mode polarization announced by the BICEP2 collaboration earlier this year. That is the claim of a trio of cosmologists that has found evidence that local structures in our galaxy generate a polarized signal that was previously unknown to astronomers studying the cosmic microwave background (CMB). The new foreground, which can be detected in the radio and microwave frequencies, is present at high galactic latitudes and could potentially be misinterpreted as a B-mode polarization signal caused by primordial gravitational waves, thus casting doubt on the BICEP2 finding.

There are two important sources of electromagnetic emissions in our galaxy that researchers need to account for while carrying out large-scale surveys of the CMB. They are synchrotron radiation from electrons moving in the galactic magnetic field and polarized emission from dust, with the latter being particularly poorly understood. Surveys of our galaxy carried out as early as 1971 also found evidence of “radio loops” or diffuse radio emission, which stood out against the galactic radio background. These loops are now thought to be caused by ancient supernova remnants that have grown to colossal sizes of 100 to 300 parsecs, after continually expanding for tens to hundreds of thousands of years. These expanding shells of gas and dust were accelerated by the supernova’s shock waves or by stellar winds.

Supernova shells

Subir Sarkar of the Particle Theory Group at the University of Oxford and the Niels Bohr Institute in Copenhagen, who has been studying these radio loops since the 1980s, wondered whether the supernova shells also trapped dust, in which case they might constitute an important foreground in experiments studying the CMB. Sarkar, along with Philipp Mertsch at Stanford University in the US and Hao Liu, also from the Niels Bohr Institute, used data from NASA’s Wilkinson Microwave Anisotropy Probe (WMAP) and found that the radio-loop foreground has indeed evaded the usual “cleaning” methods employed by both the WMAP and Planck experiments.

It is fairly well known that the radio loops produce some sort of synchrotron emission, thanks to charged particles from cosmic rays gyrating inside the magnetic field of the shells. What is new is the suggestion that dust grains, which are enriched with metallic iron or ferrimagnetic molecules, might produce shorter-wavelength radiation that is polarized because of the alignment of the grains with the galactic magnetic field. Surprisingly, Sarkar and colleagues found evidence for this radiation not only at radio frequencies, but also at microwave frequencies. This might result in a significant contamination of the B-mode signal apparently detected by BICEP2, especially as the region of the sky studied by the telescope is crossed by one of these loops.

Map of the universe showing the possible locations of galactic loops

Sarkar says that if the BICEP2 experiment has seen B-mode polarization, “they do not know if this is cosmological in origin, with momentous implications for gravitational waves from inflation or just foreground”. The BICEP2 researchers discount the latter option by cross-correlating what they see with the best available models of the galactic foreground. “However, these models do not include the new source of foreground we have identified. [BICEP2] have not made their sky maps public so we cannot check if what they have seen correlates with these foreground structures – one of which crosses the very region of the sky they looked at,” says Sarkar. It remains to be seen whether Sarkar’s findings will nullify BICEP2’s claim of having seen primordial gravitational waves, or if the foreground ultimately does not affect the finding.

More frequencies needed

“My greatest worry about the BICEP2 results is that the measurement is made at a single frequency, 150 GHz. In order to be convinced that the signal is cosmological, rather than arising from a foreground source, I would need to see it confirmed by other measurements at different frequencies,” says Peter Coles, a physicist at the University of Sussex who was not involved in the new work. He explains that a truly cosmological signal would look the same irrespective of frequency, while a foreground emission would be frequency-dependent. “The question should be asked whether the radiation pattern in the part of the sky observed by BICEP2 correlates with measurements at different frequencies, such as those observed by Planck. If the answer to this is ‘yes’, then that is more evidence that BICEP2 may have measured polarized emission from galactic dust rather than from the Big Bang.”
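A toy numerical illustration of Coles’ point: in CMB (thermodynamic) temperature units, a cosmological signal has the same amplitude at every frequency, whereas a dust- or synchrotron-like foreground scales roughly as a power law with frequency. The spectral index used below is a generic illustrative value, not a fit to any real data.

```python
# Why multi-frequency data can separate foregrounds from the CMB: a
# cosmological signal (in CMB temperature units) is frequency-independent,
# while a foreground follows, roughly, a power law. The spectral index is a
# generic illustrative value, assumed for this sketch only.

def scale_foreground(amp, alpha, nu_from=150.0, nu_to=353.0):
    """Scale a power-law foreground amplitude from one frequency (GHz) to another."""
    return amp * (nu_to / nu_from) ** alpha

cmb_150 = 1.0    # arbitrary units; the same at every frequency by definition
dust_150 = 1.0   # same apparent amplitude at 150 GHz

print("CMB  at 353 GHz:", cmb_150)                                  # unchanged
print("dust at 353 GHz:", scale_foreground(dust_150, alpha=2.0))    # grows by ~(353/150)^2 ~ 5.5
```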

David Spergel of Princeton University in the US, who was also not involved with Sarkar’s research, says that the radio-loop emission is weak enough not to be a “significant contaminant” in the WMAP and Planck temperature maps. “However, since the polarization signal is less than a hundredth of the temperature signal, these subtle effects are much more important for analysis of the polarization data,” he says.

Essential cross-checks

While Sarkar’s recent paper on the work only considers WMAP data, he told physicsworld.com that the same structures are also seen in the public Planck maps. “In my view, the community has been rather uncritical in accepting [BICEP2’s] claim without waiting for the essential cross-checks, such as whether the same signal is seen at several different frequencies, and independent confirmation by Planck,” says Sarkar.

Fortunately, Sarkar, Coles and Spergel all agree that all eyes are now on the upcoming polarization data from the Planck satellite, which should clarify the situation within the year. “Planck has polarization measurements across the whole sky at multiple frequencies, so it will provide both a detailed characterization of galactic emission and should hopefully also confirm the BICEP2 results and show convincingly that the emission is cosmological,” says Spergel.

The research is described in a preprint on the arXiv server.

Update: The paper has been published in the Astrophysical Journal – Letters.

Hunting for neutrinos

The hunt for the Higgs boson may have dominated headlines for the last few years, but Ray Jayawardhana’s book The Neutrino Hunters makes it clear that the neutrino is, if anything, even more worthy of publicity. The neutrino’s importance stems not from the headline-grabbing technical error of 2011, when it appeared that neutrinos had exceeded the speed of light, but from the fact that these ubiquitous particles – around 100 trillion of which pass through your body from the Sun every second – give astronomers a unique ability to see into otherwise impenetrable parts of the universe. Moreover, their strange ability to change form shakes up the Standard Model, our best current understanding of fundamental particles.

The hunt for neutrinos began in 1930, when the Austrian physicist Wolfgang Pauli predicted the existence of a neutral particle that is emitted during beta decay. In this nuclear process, an electron is given off as a radioactive atom decays, but there is a hole in the system’s total mass/energy after the reaction. Something has gone missing and Pauli predicted that it was an unknown, ghostly particle. Pauli had mixed feelings about this. As he commented to his friend Walter Baade: “I have done a terrible thing. I have postulated a particle that cannot be detected.”

Pauli’s initial speculation was more than a little brave. At the time there were only three known (apparently) fundamental particles: protons, electrons and photons. To propose another just to explain a little missing energy seemed extreme. Pauli called his hypothetical particle the neutron, but that name was soon taken by James Chadwick for the nuclear particle he had discovered, so Pauli’s particle was renamed the neutrino (or “little neutral one”) by the Italian physicist Enrico Fermi.

As the title of The Neutrino Hunters suggests, the people who searched for evidence to back up Pauli’s hypothesis, and later for explanations of the new particle’s behaviour, are the book’s main characters. Some of the searchers are well-known figures, but others are fresh and hence particularly interesting. Consider the Italian physicist Bruno Pontecorvo, who was the first to suggest that neutrinos could be detected. He realized that although neutrinos mostly pass straight through matter, very occasionally an atom will absorb a neutrino and be transmuted to a different element. This new element might then decay, allowing the interaction to be detected. A rare event indeed, but Pontecorvo recognized that nuclear reactors and stars pump out so many neutrinos that such events should be detectable.

Soon after making this proposal, Pontecorvo dropped off the radar, only to emerge years later in the Soviet Union, but others went on to build the giant detectors required to observe neutrino-related decays. From the early vats of cleaning fluid to the science-fiction film-maker’s dream that is the IceCube neutrino observatory – a series of 86 elegant chains penetrating over a kilometre deep into ice near the South Pole – these vast detectors, usually deep underground, are as much the stars of the search as are the humans.

Another important but rarely mentioned figure is the US physicist Frederick Reines, who between 1953 and 1956 managed to pin down actual neutrino events, demonstrating that Pauli’s ghostly particle truly existed. And then there was Ettore Majorana, the most enigmatic of the hunters. His idea that a neutrino could be its own antiparticle may explain why our universe is primarily matter rather than antimatter (and also shakes the Standard Model once more), but his science was overshadowed by his sudden unexplained disappearance during a sea crossing in 1938.

Stylistically, Jayawardhana displays an element of “writing by numbers” in the early chapters. It is common in popular science books to start a chapter with a personal cameo from a key character’s life, but in the opening chapter the anecdote has nothing to do with the topic of the book. References to science fiction are another popular science mainstay, but where it’s obvious that an author such as Michio Kaku employs this tactic because he loves the genre, in this case a reference to Isaac Asimov’s rules of robotics is mishandled, since Jayawardhana attributes them to Asimov’s Foundation series rather than the Susan Calvin/US Robots and Mechanical Men stories where they originated.

Soon, however, Jayawardhana settles down to a solid, readable style. The book’s content, meanwhile, is excellent, piling in plenty of interesting detail and interlacing stories of neutrino detection (and their implications for fundamental physics and cosmology) with entertaining facts. For example, we learn that the physicists building the CUORE observatory at Gran Sasso in Italy needed to shield their experiment with lead to keep out stray radioactivity. But freshly mined lead is itself radioactive, so the team made use of nearly 10 tonnes of lead ingots from the wreck of a ship that sank off Sardinia 2000 years ago.

The only criticism I have of the book’s science is that Jayawardhana sometimes oversimplifies. So, for instance, he mentions that the Japanese Super-Kamiokande detector might be improved by dissolving a little gadolinium in its vast tank of water as this “would enhance the detector’s sensitivity to relic neutrinos”. What is never explained is why the metal would make any difference to the ability to distinguish the “relic” neutrinos – which are left over from supernova explosions – from other types.

Neutrinos have long been under-represented in popular science writing, with the only serious competition coming from Frank Close’s Neutrino. Close is better on the science, but The Neutrino Hunters gives an excellent picture of the hunt itself and of its implications for physics and cosmology, as we find out more about these fascinating, elusive particles.

  • 2014 Oneworld/2013 Scientific American £11.99pb/$27.00hb 256pp

Web life: electrolights

So what is the site about?

The electrolights blog aims to “explain day-to-day phenomena in simple terms and [show] that physics, though mind-boggling sometimes, is really about the basic things in life”. The blog is aimed at non-experts, especially students, and is written in a straightforward style reminiscent of the Simple English Wikipedia, which is designed to be understood by children, adults who are trying to learn English and people with learning difficulties.

What are some of the topics covered?

Although some posts do indeed focus on the physics of everyday things, such as compact discs, the subject matter of others is decidedly more esoteric, with black holes, double-slit experiments and the theory of relativity all covered. Posts about conventional physics often put a fresh spin on the topic. For example, a May 2013 post about “Weighing the Earth” does a nice job of framing the difference between weight and mass, noting that “to weigh something, anything, means to determine the force being exerted on that thing by another body…You can’t have a body in isolation, in complete isolation, away from any other planet, star or galaxy, in the depths of vacuous space and talk about its weight”. However, it continues, “that isolated body will have a mass, regardless of it being on its own or not”. Another great example is a February 2014 post about addition and subtraction, which reveals that even the most straightforward mathematical operations have unexpected depths.

Who is behind it?

The author of electrolights is Swetam Gungah, a London-based mathematical physicist who has made his career in the financial industry and is currently director of business development at S&P Capital IQ. A staunch advocate of science outreach, Gungah also occasionally writes for the Institute of Physics-supported blog physicsfocus and gives talks around the UK about the importance of physics and other STEM (science, technology, engineering and mathematics) subjects.

Can you give me a sample quote?

From a July 2012 post entitled “Bright”, which begins with the causes of the seasons and then moves on to discuss the challenges of using solar energy: “In a matter of days…the Earth will be at its furthest from the Sun. Yet, in the Northern hemisphere, summer temperatures will be in the mid-20s Celsius. Isn’t it strange that it’s hotter in the Northern hemisphere when the Earth is actually furthest from the Sun? Shouldn’t it be cooler instead? Had things been plain and simple, that’s how you would expect the temperature to vary: the closer one is to the Sun, the warmer one should be…But things aren’t that plain – though they can still be simple – and therefore it is the tilt of the Earth that determines the season, not its distance from the Sun. Had the Earth not tilted at 23.5° from the vertical, the seasons would have been pretty much non-existent. A minimal and boring hot/cold variation would have prevailed throughout the year depending on how far the Earth was from the Sun as opposed to how much it was leaning off its axis of rotation.”

Atom-thin sheets are transferred with ease

Picking up a tiny flake of material just one atom thick and placing it with precision onto a substrate is no easy task. But now it has become a bit easier, thanks to researchers at the Kavli Institute of Nanoscience in the Netherlands who have come up with the first all-dry technique for transferring 2D materials such as graphene and molybdenum disulphide.

The new method is said to be quick, efficient and clean, and makes use of viscoelastic stamps. As well as being much simpler than traditional wet-transfer techniques, it could also be used to fabricate freely suspended 2D structures because the samples are not subject to any capillary forces during the process.

2D materials are creating a flurry of interest in labs around the world because they have dramatically different electronic and mechanical properties from their 3D counterparts. This means that they could find use in a host of practical devices such as low-power electronics circuits, low-cost or flexible displays, sensors and even flexible electronics that can be coated onto a variety of surfaces.

Contamination and collapse

The two most famous 2D materials are graphene – a sheet of carbon just one atom thick – and the transition metal dichalcogenides, which include molybdenum disulphide (MoS2). For real-world applications, such materials need to be transferred onto substrates to make “heterostructures” based on the stacking of 2D layers. Most techniques involve wet chemistry, and the chemicals employed often contaminate the 2D materials and adversely affect their pristine electronic and physical properties. Moreover, the capillary forces between the chemicals and the material being transferred can cause the 2D structure to simply collapse.

An all-dry technique could be a solution to these problems, and now a team led by Herre van der Zant and Gary Steele has created just that. The process begins by using the famous “Scotch tape technique”, whereby 2D flakes are peeled off a parent 3D crystal using adhesive film. These layers are then attached to a thin layer of a commercially available viscoelastic material called Gelfilm, which acts as a stamp.

Photograph showing a viscoelastic stamp being peeled off glass

“As the stamp is transparent, we can see the sample through it and can align the flake wherever we want on a 2D substrate surface with sub-micron resolution,” explains team member Andres Castellanos-Gomez. “To transfer the flake, we press the stamp against the sample surface and peel it off very slowly.” The whole process takes just 15 minutes or less.

Climbing toys

The transfer process works thanks to Gelfilm’s viscoelasticity: the material behaves like an elastic solid over short time periods but can flow like a viscous fluid over longer timescales. Such materials are also used to make toys that “climb” down smooth surfaces such as a window by releasing their grip and then re-attaching at a lower point.

The team has already shown that the technique works by transferring graphene flakes onto hexagonal boron nitride (h-BN) – which is a 2D material that is a good substrate for graphene. Using an optical microscope, the researchers were able to confirm that nearly half of the graphene flakes lie flat on the boron nitride without any bubbles or wrinkles.

“Our technique could be applied to any kind of exfoliated layered crystal, so allowing for an infinite combination of material heterostructures,” says Castellanos-Gomez. “For example, as well as depositing graphene on h-BN, we have also already managed to ‘sandwich’ a MoS2 bilayer between two h-BN flakes.”

Beating drumheads

The Kavli team has also succeeded in transferring a single-layer MoS2 crystal onto a silicon oxide/silicon substrate pre-patterned with holes of different diameters. The single-layer MoS2 is freely suspended over the holes, forming “drumheads” – which might be used in mechanical-resonator applications. Indeed, the technique might also be employed to transfer 2D crystals onto pre-fabricated devices with trenches and electrodes.

And that is not all. Since the stamping technique is so gentle, it can be used to deposit 2D crystals onto even the most fragile of substrates. For example, the team says that it has succeeded in transferring few-layer MoS2 crystals onto the cantilever of an atomic force microscope without damaging the cantilever. “We have also transferred 2D materials onto silicon-nitride membranes and holey carbon films, which are typically employed in transmission electron microscopy,” says Castellanos-Gomez.

Details of the new stamping technique can be found in 2D Materials, a new journal from IOP Publishing that publishes its first papers this month.

This article first appeared on nanotechweb.org.

A possible dark-matter bubble

By Calla Cofield at the APS April Meeting in Savannah, Georgia

The American Physical Society (APS) April meeting has been taking place in Savannah, Georgia, for the past three days. On Saturday, Tracy Slatyer from MIT spoke to reporters about a paper that she and colleagues posted on arXiv in February, in which they suggest that a mysterious excess of gamma rays surrounding the galactic centre could be explained as dark-matter particle annihilation.

The researchers use what Slatyer calls a “simple” model of dark matter, in which the invisible substance is made up of weakly interacting massive particles (WIMPs) with a mass of about 35 GeV. The model predicts that WIMPs may collide with each other and annihilate, producing gamma rays and either b-quarks or some other mixture of quarks. The Fermi Gamma-ray Space Telescope surveys the entire observable sky for the presence of gamma rays. Fermi’s observation of the plane of the Milky Way revealed that our galaxy is bright with gamma rays, but Fermi has not been able to identify the sources of all those powerful photons. Gamma-ray excesses (more than can be explained by known sources) near the galactic centre have been identified in the Fermi data in the past – most notably at about 130 GeV.

The work presented by Slatyer and her colleagues identifies an excess of gamma rays between 1 and 3 GeV. The researchers say they can see a distinctly spherical, bubble-like collection of photons. Slatyer told reporters that based on the dark-matter model, this spherical shape is not a coincidence – the dark-matter annihilation theory does not hold for a more stretched-out, elongated cluster of gamma rays; nor would it hold if the bubble were not at the centre of the galaxy.

“It looks spherically symmetric and centred on the galactic centre; its overall amplitude and the rate at which it falls off from the galactic centre are consistent with the predictions of dark-matter annihilation,” says Slatyer. “It’s definitely not a statistical fluctuation, and it seems unlikely that it’s a simple mismodelling of the diffuse background because if it were, there would be no reason that it would be spherically symmetric.”

One crucial step in identifying this gamma-ray bubble is making cuts to the Fermi data. The Fermi telescope collects high-energy photons and then reconstructs their incoming directions, ideally providing information about where each photon originated. The Fermi collaboration recently released a new parameter that ranks each photon by the quality or confidence of that reconstruction. Slatyer and her colleagues cut the bottom 50% of the data points based on this parameter, thus preserving only the data with the best direction reconstruction. The result is a picture of the galactic plane that appears noticeably sharper than the image containing all of the data points.
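The sketch below illustrates the kind of quality cut described above on made-up data. The column names and the median threshold on a reconstruction-quality score are only schematic; they are not the Fermi collaboration’s actual data format or parameter.

```python
import numpy as np
import pandas as pd

# Schematic illustration of the quality cut described above: keep only the
# photons whose direction reconstruction ranks in the top half by some quality
# score. Column names and data are made up for this sketch.

rng = np.random.default_rng(0)
photons = pd.DataFrame({
    "energy_gev": rng.uniform(1, 3, size=1000),
    "recon_quality": rng.uniform(0, 1, size=1000),   # hypothetical reconstruction-quality score
})

threshold = photons["recon_quality"].median()
best_half = photons[photons["recon_quality"] >= threshold]   # drop the bottom 50%
print(len(photons), "->", len(best_half), "photons kept")
```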

Slatyer says that more evidence is needed to confirm that the gamma-ray excess is, in fact, the result of dark-matter annihilation. One additional piece of evidence would be observing the same phenomenon around the centre of another galaxy, most likely one of the dwarf galaxies near the Milky Way. While Slatyer is not a member of the Fermi collaboration, she was part of an independent team that first identified the Fermi bubbles, also using the publicly available data from the Fermi telescope.

“I’m wary of saying it is physically impossible to have an astrophysical explanation for this signal. And the cleanest way to get round this is to see the signal somewhere else,” she says. “[In] the dwarf galaxies and places that we expect to be very dark-matter dominated, we expect to see very low background. I think that would be very strong evidence.”

Interferometry tips the scales on antimatter

A new technique for measuring how antimatter falls under gravity has been proposed by researchers in the US. The team says that its device – based on cooling atoms of antimatter and making them interfere – could also help to test Einstein’s equivalence principle with antihydrogen – something that could have far-reaching consequences for cosmology. Finding even the smallest of differences between the behaviour of matter and antimatter could shine a light on why there is more matter than antimatter in the universe today, as well as help us to better understand the nature of the dark universe.

Up or down?

Physicists have long wondered how antimatter is affected by gravity – does it fall up or down? Antihydrogen was first produced at CERN in 1995, and most theoretical and experimental work since then suggests that gravity probably acts in exactly the same way on antimatter as it does on matter. The problem is that antimatter is difficult to produce and study, meaning that no direct experimental measurements of its behaviour under gravity have been made to date.

One big step forward took place last year, when researchers at the ALPHA experiment at CERN measured how long it takes atoms of antihydrogen – made up of a positron orbiting an antiproton – to reach the edges of a magnetic trap after the trap is switched off. Although ALPHA did not find any evidence of the antihydrogen responding differently to gravity, the team was able to rule out the possibility that antimatter responds much more strongly to gravity than matter.

Waving matter

Such experiments are hard to carry out, however – antimatter is difficult to produce on a large scale and it annihilates when it comes into contact with regular matter, making it difficult to trap and hold. The new interferometry technique – proposed by Holger Müller and colleagues at the University of California, Berkeley, and Auburn University in Alabama – exploits the fact that a beam of antimatter atoms can, like light, be split, sent along two paths and made to interfere, with the amount of interference depending on the phase shift between the two beams. The researchers say the light-pulse atom interferometer, which they plan to install at the ALPHA experiment, could work not only with almost any type of atom or anti-atom, but also with electrons and protons.

In the proposed interferometer, the matter waves would be split and recombined using pulses of laser light. If an atom interacts with the laser beam, it will receive a “kick” from the momentum of a pair of photons, creating the split, explains Müller. By tuning the laser to the correct pulse energy, this process can be made to happen with a probability of 50%, sending the matter waves along either of the two arms of the interferometer. When the paths join again, the probability of detecting the anti-atom depends on the amplitude of the matter wave, which becomes a function of the phase shift.

Annihilation danger?

Müller adds that the phase shift depends on the acceleration due to gravity (g), the momentum of the photons (and so the magnitude of the kick) and the time interval between each laser pulse. Measuring the phase shift is therefore a way of measuring g, because the momentum and the time interval are both known. The biggest advantage of the technique is that the anti-atoms will not be in danger of annihilating because they will never come close to any mechanical objects, being moved with light and magnetic fields only.
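For context, the textbook relation for a three-pulse light-pulse atom interferometer is Δφ = k_eff·g·T², where k_eff is the effective wavenumber of the photon kick and T is the time between pulses. Assuming that relation applies to the proposed set-up, the sketch below shows how a readable phase translates into a fractional precision on g; all parameter values are invented for illustration and are not those of the planned ALPHA experiment.

```python
import math

# Sketch of the textbook Mach-Zehnder light-pulse interferometer relation
# delta_phi = k_eff * g * T**2 (phase shift = photon momentum kick x gravity
# x pulse-separation-time squared). All numbers are invented for illustration.

laser_wavelength = 1.0e-6                     # m, assumed optical wavelength
k_eff = 2 * (2 * math.pi / laser_wavelength)  # two-photon momentum kick, rad/m
T = 0.05                                      # s, assumed time between pulses
g = 9.81                                      # m/s^2

delta_phi = k_eff * g * T**2
print(f"total phase shift    ~ {delta_phi:.0f} rad")

# If the phase can be read out to ~1 rad, the fractional precision on g is:
print(f"fractional precision ~ {1.0 / delta_phi:.1e}")
```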

Müller’s idea is to combine two proven technologies: light-pulse atom interferometry and ALPHA’s method of producing antihydrogen using its Penning trap. He points out that the team’s proposed method does not assume availability of a laser resonant with the Lyman-alpha transition in hydrogen, which can be very difficult to build. To make the whole experiment even more efficient, the team has also developed what Müller describes as an “atom recycling method”, which allows the researchers to work with “realistic” atom numbers. “The atom is enclosed inside magnetic fields that prevent it from going away. Thus, an atom that hasn’t been hit by the laser on our first attempt has a chance to get hit later. This way, we can use almost every single atom – a crucial feat at a production rate of one every 15 minutes,” he explains. This would let ALPHA measure the gravitational acceleration of antihydrogen to a precision of 1%.

Precise and accurate

The team plans to build a demonstration set-up at Berkeley, which will work with regular hydrogen, and hopes to secure funding for this soon. Müller and colleagues are now also part of the ALPHA collaboration. “The work at CERN will proceed in several steps,” he says. “The first is an up/down measurement telling [us] whether the antimatter will go up or down.” This will be followed by a measurement with per-cent-level accuracy. Müller’s long-term aim is to reach a precision of one part in a million, which would be vastly superior to ALPHA’s measurement last year, which had an error bar of about 100. ALPHA can currently trap and hold atoms at the rate of four each hour, but thanks to recent upgrades at its source of antiprotons – the ELENA ring – CERN could theoretically produce nearly 3000 atoms per month. In addition to ALPHA, the GBAR and AEgIS collaborations are also planning to measure gravity’s effects on antimatter.

While Müller agrees that the gravitational behaviour of antimatter can be studied from experiments with normal matter, a direct observation is essential, and that is what Müller, the ALPHA collaboration and the other teams at CERN are keen to accomplish in the near future. “No matter how sound one’s theory, there is no substitute in science for a direct observation,” he says.

The research is published in Physical Review Letters.
