Patterned graphene should be an ideal material for creating infrared topological plasmons, according to calculations by physicists in the US and China. Dafei Jin, Thomas Christensen and colleagues at the University of California, Berkeley, Massachusetts Institute of Technology and the Institute of Physics of the Chinese Academy of Sciences came to this conclusion after calculating the electronic properties of graphene – a sheet of carbon just one atom thick – patterned with a triangular lattice of circular holes (see figure).
Plasmons are particle-like collective oscillations of conduction electrons that can be created when light is shone on a material. The team’s calculations suggest that when a magnetic field is applied perpendicular to the patterned graphene sheet, plasmons created by infrared light are able to propagate in one direction along the edge of the sheet. However, the plasmons do not propagate into the interior of the sheet – and this behaviour is a hallmark of a topological material.
Writing in Physical Review Letters, the team suggests that infrared topological plasmons could be useful for creating practical devices that combine optics with ultrafast electronics.
If any physicist couples out there are struggling to find a first-dance song for their wedding, CERN has just come up with the perfect solution. US communications manager Sarah Charley teamed up with grad students Jess Heilman and Tom Perry to produce a particle-physics parody of Howie Day’s song “Collide”. Day came across their music video on Twitter and asked to visit CERN – “I figured it was a long shot, but why not?” The project spiralled from there, leading to Day re-recording the song and filming a new video that features him playing guitar in the LHC tunnel and CERN scientists dancing in their labs.
Neutral particles play a crucial role in the creation of mysterious jets of plasma called spicules that burst from the surface of the Sun. Computer simulations done by researchers in the US and Norway suggest that an interplay between neutral particles and plasma in the Sun’s atmosphere allows tangled magnetic fields to launch the jets.
The middle layer of the Sun’s atmosphere – the chromosphere – is permeated by about 10 million spicules at any given time. These jets travel at speeds of 50–150 km/s and reach lengths of 10,000 km before collapsing. Rather than pointing straight out of the Sun, they tend to flop back towards the surface – giving the chromosphere the appearance of a lawn in need of cutting. Spicules could be providing hot plasma to the Sun’s outer atmosphere – the corona – and a better understanding of the jets could help solve the long-standing puzzle of why the corona is millions of degrees hotter than the surface of the Sun.
However, understanding what drives the emergence of spicules has been a difficult task. They are tricky to observe because they move very quickly, with each jet lasting only 5–10 min. This means that it has been difficult to improve computer simulations of spicules by comparing them to observations of the real thing. Indeed, scientists have been working on one particular computer model of the chromosphere for 10 years without being able to simulate the emergence of spicules.
Missing ingredient
Now, Juan Martínez-Sykora and colleagues at the Lockheed Martin Solar and Astrophysics Laboratory and the University of Oslo have found a missing ingredient that appears to have been holding back the success of the computer model – neutral particles.
Solar physicists believe that spicules are created when tangled magnetic fields from within the Sun emerge into the chromosphere and straighten out like a snapping whip. Previous models had not been able to reproduce this behaviour as Martínez-Sykora explains: “Usually magnetic fields are tightly coupled to charged particles. With only charged particles in the model, the magnetic fields were stuck, and couldn’t rise beyond the Sun’s surface. When we added neutrals, the magnetic fields could move more freely.”
Previous simulations had ignored these neutral particles because it is computationally very expensive to include them. Indeed, the team’s new version of the computer model that includes neutral particles took a year to run on NASA’s Pleiades supercomputer.
Worth the wait
The long wait was worth it because the model was able to simulate spicules for the first time. Furthermore, the output of the model was a close match to spicules observed by NASA’s Interface Region Imaging Spectrograph space telescope and the Swedish 1-m Solar Telescope in the Canary Islands.
The simulation also revealed that the snapping magnetic fields create Alfvén waves. These are strong magnetic waves that physicists believe are responsible for heating the Sun’s atmosphere and driving the solar wind of charged particles towards Earth.
In the early 1960s the physicist John Bell dreamt up one of the most profound experimental tests ever imagined. While on sabbatical in the US on leave from CERN, he had been contemplating the weirdness of quantum mechanics, which predicts some especially strange outcomes in experiments with entangled particles. In an intuitive world, faraway events can’t influence each other faster than the speed of light (what is known as “locality”) and properties of objects have a definite value even if we don’t measure them (what is known as “realism”). However, quantum theory makes different predictions from those one would expect from this “local realism”, and Bell devised a form of experiment, now known as a Bell test, to check whether these theoretical implications translate to the real world.
For half a century, Bell tests showed that local realism doesn’t hold up in the real world – something even the most senior of quantum physicists still struggle to grasp. But there remained two well-known loopholes in the tests that allowed us to hang on to the idea that the tests were flawed, and that the world does, after all, “make sense”. Now, thanks to work by three separate research groups published in 2015, those loopholes have been closed, and the death of local realism is generally accepted.
However, some physicists are suggesting that there could be some even more obscure loopholes at play. The question therefore is: might local realism still be alive and kicking?
The quantum cake factory
Quantum mechanics is famed among students, the public and academics alike for concepts that are difficult to get one’s head around. Locality and realism are some of the worst offenders, as is the related concept of entanglement. Explaining entanglement to students and non-physicists usually needs quantum equations, knowledge of things such as photon polarizations, and abstract proofs that even graduate students find boring. So it was that at a conference one summer in the late 1990s, physicists Paul Kwiat and Lucien Hardy came up with a real-world analogy to explain the weirdness of entanglement without any maths, calling it “the mystery of the quantum cakes”.
1 The mystery of the quantum cakes
(Courtesy: IOP Publishing)
Lucy and Ricardo explore nonlocal correlations through quantum mechanically (non-maximally) entangled cakes. Because Ricardo’s first cake (far right) rose early, Lucy’s cake (far left) tastes good. Redrawn from American Journal of Physics 68 33 with the permission of the American Association of Physics Teachers.
Here’s the story as Kwiat, who is now my graduate adviser, told it to me. Imagine a bakery producing cakes for sale, and Lucy and Ricardo are inspectors testing the finished product. The bakery, shown in figure 1, is unusual because it has a kitchen with two doors, one on the left and one on the right, from which emerge conveyor belts (like the moving sidewalks at an airport). Cakes are sent out on the conveyor belts in little ovens, and they finish baking as they travel to Lucy (on the left) and Ricardo (on the right). The cakes are sent out in pairs, so Lucy and Ricardo always get one at the same time.
There are two tests that Lucy and Ricardo can do on the cakes. They can open the oven while the cake is still baking to see if it has risen early or not. Or they can wait until it finishes baking and sample it to see if it tastes good. They can only do one of these tests on each cake – if they wait until it finishes baking to taste it, they lose the chance to check whether it rose early, and if they check partway through baking to see if it has risen early, they disturb the cake (maybe it’s a soufflé) and they can’t test whether it tastes good later. (These two mutually exclusive tests are an example of “non-commuting measurements”, an important concept in quantum mechanics.)
Lucy and Ricardo each flip coins to randomly choose which test to do for each of their cakes. After testing cakes all morning, they then get together to compare their results. Because of the coin flips, sometimes they happened to do the same test on a pair of cakes and sometimes different tests. When they happened to do different tests, they notice a correlation: if Lucy’s cake rose early then Ricardo’s always tasted good, and vice versa. This isn’t so strange – maybe the cakes are made from the same batter, and maybe batter that rises early always tastes good. Now, in the cases where they both happened to check the cake early, Lucy and Ricardo find that in 9% of those tests, both cakes had risen early. So how often should both cakes taste good, when they both waited to taste them? (Go on, try to work it out.)
The answer is at least 9% of the time, right? We know that when one cake rises early, the other always tastes good, so as they both rise early 9% of the time, both cakes should taste good at least as often as they both rise early. However, Lucy and Ricardo are surprised to find that both cakes never taste good. This seems impossible – and it is, for normal cakes – but if the pairs of cakes were in a particular entangled quantum state, it could happen! Of course, physicists can’t really make entangled cakes (well, not yet), but they can make entangled photons and other particles with the same strange behaviour.
So why did we make the wrong prediction about how often both cakes must taste good? We assumed that random choices and outcomes on Lucy’s side shouldn’t affect what happens on Ricardo’s side, and vice versa, and that whether the cakes will taste good or rise early was already determined when they were put in the ovens. These seemingly obvious assumptions are together called local realism: the idea that all properties of a cake or a photon have a definite value even if we don’t measure them (realism), and the assumption that faraway events can’t influence each other, at least not faster than the speed of light (locality). In a local realistic world, both cakes have to taste good at least 9% of the time – nothing else makes sense. Observing fewer than 9% (or none at all) is evidence that at least one of the assumptions of local realism must be false.
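If you want to check this reasoning for yourself, it can be done by brute force. The short Python sketch below (purely illustrative, and not taken from the original quantum-cakes paper) runs through all 16 possible predetermined "instruction sets" a pair of cakes could carry, and confirms that any set consistent with the observed correlation in which both cakes rose early must also have both cakes tasting good – so in a local realistic world P(both taste good) can never fall below P(both rose early).

```python
from itertools import product

# Each local "instruction set" fixes all four answers in advance:
# (Lucy rises early, Lucy tastes good, Ricardo rises early, Ricardo tastes good).
counterexamples = []
for l_rise, l_taste, r_rise, r_taste in product([True, False], repeat=4):
    # Keep only instruction sets consistent with the observed correlation:
    # a cake that rose early means the cake on the other side tastes good.
    if (l_rise and not r_taste) or (r_rise and not l_taste):
        continue
    # Does "both rose early" ever happen without "both taste good"?
    if l_rise and r_rise and not (l_taste and r_taste):
        counterexamples.append((l_rise, l_taste, r_rise, r_taste))

print(counterexamples)  # [] -- so P(both taste good) >= P(both rose early)
```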
This imaginary quantum bakery is a version of a Bell test – an experiment that can check whether or not we live in a local realistic world. (Some physicists, notably Einstein, had already realized that entanglement seemed to defy local realism, but it was long thought to be a philosophical question about the interpretation of quantum theory rather than something to be tested in the lab.) In the half-century since Bell’s discovery that local realism can be tested, the experiment he proposed has been carried out in dozens of labs around the world using entangled particles, most commonly photons.
First to success Bas Hensen and Ronald Hanson from Delft University of Technology adjusting their Bell test set-up. (Courtesy: Frank Auperle/TU Delft)
Photons don’t taste good or rise early, so instead physicists usually measure some other property, such as their polarization in two different measurement bases (horizontal/vertical and diagonal/anti-diagonal, for example). Like the two cake tests, these polarization measurements are “non-commuting”. Using a particular entangled quantum state and measurement directions, the “quantum cakes” experiment has actually been performed in the lab and found precisely the same percentages as the story. Bell tests can use other entangled states, and there are many different mathematical conditions for violating local realism, but the idea is the same. With some relatively simple optics equipment, undergraduates at the University of Illinois, US, can even do a Bell test in one afternoon for their modern physics lab.
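For readers who want to see the quantum side worked out, the short Python sketch below builds one possible Hardy-type entangled state and a pair of measurement bases, and reproduces the numbers in the story: both cakes "rise early" about 9% of the time, a cake that rises early guarantees the other "tastes good", and yet both cakes never "taste good". The construction and parameter values are a standard textbook choice for illustration, not necessarily the exact state or settings used in the lab experiments.

```python
import numpy as np

# One-cake "taste" basis: tastes good = [1, 0], tastes bad = [0, 1].
good = np.array([1.0, 0.0])
bad = np.array([0.0, 1.0])

# The "rose early" outcome is a particular superposition of the taste outcomes;
# this mixing angle maximizes the Hardy probability at roughly 9%.
s = (3 - np.sqrt(5)) / 2                 # cos^2 of the mixing angle
alpha, beta = np.sqrt(s), np.sqrt(1 - s)
early = alpha * good + beta * bad

# Entangled pair with no good-good component, weighted so that "rose early"
# on one side forces "tastes good" on the other.
psi = np.kron(good, bad) + np.kron(bad, good) - (alpha / beta) * np.kron(bad, bad)
psi /= np.linalg.norm(psi)

def prob(lucy_outcome, ricardo_outcome):
    """Joint probability that Lucy and Ricardo see the given outcomes."""
    return abs(np.kron(lucy_outcome, ricardo_outcome) @ psi) ** 2

print("P(both rose early)         =", round(prob(early, early), 3))  # ~0.09
print("P(Lucy early, Ricardo bad) =", round(prob(early, bad), 3))    # 0.0
print("P(Lucy bad, Ricardo early) =", round(prob(bad, early), 3))    # 0.0
print("P(both taste good)         =", round(prob(good, good), 3))    # 0.0
```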
Closing loopholes
Prior to 2015, every Bell test ever carried out was imperfect. Physicists weren’t able to rule out every “loophole” that could allow local realism to still be true even though the experimental results seem to violate it.
The first loophole can appear if not every photon or cake is measured. In the quantum cakes story, we implied that every single pair of cakes was tested. In an experiment with photons, this is never true, because there are no perfect single-photon detectors, and some fraction of the photons is always lost. This can open a loophole for local realism: if enough photons are not tested, then maybe the ones we missed would have changed the outcome of the experiment. (In the quantum cakes analogy, maybe the cakes come down the conveyor belt too fast to test all of them, so some of the pairs that were not tested might both taste good and Lucy and Ricardo wouldn’t know.) This is called the “detection loophole”, and to close it the experimenters must ensure they collect both entangled photons most of the time. In a common version of a Bell test, the minimum detection efficiency is two-thirds.
A second important loophole appears if some kind of signal could travel between different parts of the experiment to create the measured correlations, without transmitting information faster than the speed of light. Long distances and quick measurements are the keys to closing this “timing” loophole. In the quantum cakes example, imagine that vibrations are transmitted down the conveyor belt, so that whoever opens their oven first to taste their cake (which might taste good) always causes the other cake to collapse and taste bad. Then both cakes would never be found to taste good, without actually violating local realism. To avoid this, Lucy and Ricardo should be far enough apart that no signal could travel between them and influence their measurements, even at the speed of light. In special relativity this condition is called “space-like” separation. To rule out the possibility that the chef making the cakes could somehow influence the measurements, the two testers and the bakery itself should be space-like separated as well.
While both of these loopholes had been closed in separate experiments, closing them both in the same experiment was a challenge that remained unresolved for many decades. To successfully close the loopholes, experimentalists would need innovative experimental designs and equipment – including optical components with very low loss, fast random number generators and measurement switches, and high-efficiency single-photon detectors – and careful arrangement of the experiment in space and time. In 2015 three different groups in three countries successfully carried out loophole-free Bell tests for the first time: a team led by Ronald Hanson at Delft University of Technology in the Netherlands was first (Nature 526 682), followed by teams led by Krister Shalm at the National Institute of Standards and Technology (NIST) in Boulder, Colorado, US (Phys. Rev. Lett. 115 250402), and Anton Zeilinger at the University of Vienna, Austria (Phys. Rev. Lett. 115 250401).
The Delft team’s experiment used nitrogen vacancy centres, which are defects in diamond crystals that contain an isolated electron. The electron has spin, a quantum property that can point up, down or in a quantum superposition of the two. The electron can be made to emit a photon that is entangled with its spin direction. Using two of these nitrogen vacancy centres and combining the emitted photons with a beam splitter, the Delft team transferred this photon–spin entanglement to spin–spin entanglement between the two electrons. The spins of the two electrons could then be measured along two different directions (analogous to the two different types of cake tests or two different polarization measurements), and this process was repeated many times to carry out a Bell test. The two diamond crystals were placed in different buildings on the Delft campus, separated by about 1.3 km, so the entanglement creation and the spin measurements could be space-like separated. One advantage of this design is that the researchers were able to successfully measure the electron spins each time they were entangled, eliminating the detection loophole altogether. However, successfully entangling the two spins was difficult, and this made the experiment slow – over 18 days the researchers recorded only 245 spin measurements, still enough to violate the limit of local realism by two standard deviations.
The NIST and Vienna teams took a different approach, both using entangled photons, and a cake-factory-like design in which entangled pairs are produced and sent to two different measurement devices. To close the timing loophole, the measurements had to be far from the entangled pair source – more than 100 m at NIST and about 30 m in Vienna. (Finding suitable lab space was challenging – the Vienna experiment took place in the empty basement of a 13th-century palace.) The measurements also had to be chosen and carried out quickly, using ultrafast random number generators and polarization switches.
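To get a feel for the numbers, here is a rough light-travel-time budget using the approximate station separations quoted above (the figures are indicative only): within each window, the random setting choice, the polarization switch and the detection all have to be completed before a light-speed signal from the other station could possibly arrive.

```python
# Rough timing budget for space-like separation: everything on one side must
# finish before light could cross from the other measurement station.
C = 299_792_458  # speed of light in m/s

for label, separation_m in [("NIST (~100 m)", 100), ("Vienna (~30 m)", 30)]:
    window_ns = separation_m / C * 1e9
    print(f"{label}: measurement window of roughly {window_ns:.0f} ns")
# NIST (~100 m): ~334 ns; Vienna (~30 m): ~100 ns
```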
Solid violations Scientist Marissa Giustina, of the University of Vienna, installs superconducting detectors in the “Alice” cryostat. The “Bob” cryostat in the opposite measurement station can be seen in the distance, about 60 m away. (Courtesy: L Lammerhuber/Austrian Academy of Sciences)
Advanced superconducting single-photon detectors were also critical to both experiments. The Vienna team used transition edge sensors, which use a thin piece of tungsten cooled to about 100 mK to detect photons. At this temperature, tungsten sits on the edge of its transition to superconductivity, hovering between normal resistance and the drop to zero resistance as it becomes superconducting. Any tiny amount of energy deposited by a single photon will cause a sudden and relatively large change in the resistivity of the metal. The resulting change in the electrical current through the detector is measured with a superconducting quantum interference device (SQUID) amplifier. Transition-edge-sensor detectors can be up to 98% efficient, a big improvement over other detectors such as single-photon avalanche diodes, but the low temperatures and special electronics required make them large, expensive and sensitive to noise (one researcher found he could only use them at night, because they picked up interference from cell phones in the busy classrooms below the lab). The NIST team used superconducting nanowire single-photon detectors, which are slightly less efficient than transition-edge-sensor detectors, but can be used at higher temperatures and are faster and less noisy. Both the NIST and Vienna loophole-free Bell tests found solid violations of local realism, with results 7–11 standard deviations from the expected limit.
Is local realism dead?
Most physicists agree that these three experiments eliminated the most important loopholes, providing solid proof that local realism is dead. Since the first (imperfect) Bell tests in the 1980s, few people ever expected that a loophole-free test would give any other result, but the experiments of 2015 overcame remarkable technical challenges to put any doubts to rest. Loophole-free Bell tests also have some possible applications, including certifying the security of quantum cryptography systems even if the two parties can’t trust their own equipment, and verifying the independence of quantum random numbers. (NIST has plans to generate secure random numbers live and make them freely available online.)
But there is one possibility that, however unlikely, may be impossible to truly eliminate: what if the outcomes of all the measurements were determined before the entangled particles were created, or before the experiment even began, or before the experimenters were even born? If that were the case, local realism could still be law even though we seem to observe violations in Bell tests. At some point in the lifetime of the universe all the atoms and particles that make up the entangled photon sources, random number generators and measurement devices would have had a chance to “communicate”, no matter how far apart they are placed during the experiment (and indeed, according to the Big Bang model, all the matter in the universe was once in the same place at the same time). No-one has proposed exactly how this “cosmic conspiracy” would work, but it would not be forbidden by physics as we know it, as long as no information were transmitted faster than light.
One approach to this challenge is to try to narrow down how recently the parts of a Bell test experiment could have interacted. An experiment carried out earlier this year by the same Vienna group tried to do this by using light from two distant stars to choose the type of measurement on each photon in a Bell test (Phys. Rev. Lett. 118 060401). The idea is that the two stars, which are separated by hundreds of light-years, could not have exchanged information any more recently than the time it would take light to travel between them, placing a limit on how far any cosmic conspiracy must extend backwards in time. (Random fluctuations in the colour of the starlight were used as “coin flips” to decide which measurements to do on each pair of entangled photons.) In a Bell test using these random settings, the team did find a violation of local realism, and concluded that any pre-determined correlations must have been generated more than 600 years in the past. In principle, future experiments could use light from distant quasars to push this limit back millions or billions of years. These “cosmic” Bell tests are impressive experimental achievements, but they are still unable to eliminate the possibility that the local electronics used to measure the stellar photons – which could have communicated in the much more recent past – could produce correlations, which may limit their usefulness. Ultimately, these conspiracy-minded loopholes may have to be abandoned as fundamentally untestable.
Does the world look different, post-loopholes? Physicists have had decades to come to terms with the probable death of local realism, but it still seems like an obvious truth in daily life. That even unasked questions should have answers, and unmade measurements should have outcomes, is an unconscious assumption we make all the time. We do it whenever we talk about what would have happened – like “When both cakes were found to rise early, they would have tasted good,” which was key to our flawed reasoning about the quantum bakery – or even “If it didn’t rain today I would have been on time for work.” Local realistic thinking leads to wrong answers in quantum experiments. But entangled particles don’t often appear in everyday life, so outside the lab – if we choose – we’re probably OK to keep up the illusion of local realism.
The first 3D view of a supernova remnant has been assembled using 12 years’ worth of data from NASA’s Chandra X-ray observatory. A supernova remnant is what is left over after a star explodes. As the ejected matter expands outwards into the interstellar medium, bounded by a shockwave, the remnants often exhibit asymmetries in their motion and shape. In an attempt to understand the mechanisms involved in shaping remnants, Brian Williams at the Space Telescope Science Institute in the US and colleagues studied the Tycho supernova remnant using X-ray data. First observed in 1572, and named after astronomer Tycho Brahe, Tycho is a type Ia supernova thought to result from the destabilization of a white dwarf in a binary system as its partner star transferred mass to it. Today, Tycho appears to be a roughly circular cloud of clumpy matter, but its shockwave is known to be travelling twice as fast on one side as on the other. To investigate the asymmetry, the team focussed on 57 “tufts” of silicon-rich ejecta in Tycho. Using the Chandra observations, they were able to measure the tufts’ velocities and therefore build a full 3D map of their motion. Unlike the shockwave, the ejecta shows no asymmetry, suggesting the explosion itself was symmetrical. A possible explanation is that the shockwave is affected by density gradients in the interstellar medium while the ejected matter is not. Williams and colleagues also attempt to address Tycho’s clumpy nature, questioning whether the ejecta started out clumpy or began smooth and then clumped together during the expansion. Their simulations, however, demonstrate that neither option can be ruled out at the moment. The work is presented in The Astrophysical Journal.
Cosmic glycerol is made here on Earth
Photomontage showing a ball and stick representation of glycerol. (Courtesy: Harold Linnartz)
Glycerol has been made for the first time in a laboratory that simulates the conditions in dark interstellar clouds. Gleb Fedoseev, Harold Linnartz and colleagues at the University of Leiden in the Netherlands made the compound by firing hydrogen atoms at carbon-monoxide ice at low pressure and at a chilly 23 K. Glycerol is an essential component of cell membranes in living creatures and it is possible that life on Earth – and perhaps other planets – emerged because this and other life-related molecules can be delivered to planetary surfaces by comets. The compound comprises 14 atoms and is the largest made so far under interstellar conditions. In 2009, the team made formaldehyde (four atoms) and methanol (six atoms). Then in 2015, they made the eight-atom sugar glycolaldehyde. Linnartz explains that successively larger molecules are created by having the smaller molecules interact with each other. “We now have reached the level of glycerol, two levels higher and we have ribose, a sugar that is important in the coding of our genes,” he says. While formaldehyde, methanol and glycolaldehyde have already been discovered in interstellar clouds, astronomers have yet to spot glycerol in space. The Leiden team now plans to use the ALMA radio telescope in Chile to look for signs of the compound. The research is described in The Astrophysical Journal.
A “momentum microscope” that can fully characterize a quantum many-body system has been unveiled by physicists in Australia. The device was demonstrated by measuring correlations between ultracold atoms and could provide insights into tricky many-body problems, such as high-temperature superconductivity.
A many-body quantum system containing a lot of particles can be fully characterized by measuring all correlations between particles in the system. While this is extremely difficult to do in practice, a very good characterization can sometimes be achieved by using a specific set of correlations between just a few particles.
One million atoms
Sean Hodgman of the Australian National University and colleagues have achieved such a characterization by colliding two Bose–Einstein condensates (BECs), which are ensembles of ultracold atoms all in one quantum state. In this experiment, about one million helium atoms were used to make both BECs.
After the BECs collide, the team measures the momenta of atoms by tracking their positions as a function of time. This information is then used to calculate the correlations between the momenta of pairs and triplets of atoms in the halo of atoms created by the collision.
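In outline, reconstructing a momentum from such position-versus-time data is just kinematics: divide each atom’s displacement by its flight time and multiply by its mass. The toy sketch below is only an illustration of that step (the flight time and detector coordinates are invented, and gravity and the real detector geometry are ignored), not the team’s actual analysis.

```python
# Toy time-of-flight reconstruction: an atom leaving the collision region near
# the origin and detected at (x, y) after a flight time t has transverse
# velocity ~(x/t, y/t), and hence momentum p = m*v. Values are illustrative.
HE4_MASS = 6.646e-27  # mass of a helium-4 atom in kg

def momentum_from_hit(x_m, y_m, t_s, mass=HE4_MASS):
    """Transverse momentum components inferred from a single detector hit."""
    return mass * x_m / t_s, mass * y_m / t_s

px, py = momentum_from_hit(0.012, -0.005, 0.35)  # hypothetical hit
print(px, py)  # components in kg m/s
```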
Pairing field
The measurements also allowed the team to calculate the “pairing-field amplitude” for the system – which they describe as a key building block for working out the higher-order correlations in the system. Indeed, the team has calculated that the measurements provide enough information to fully characterize the halo as a many-body system.
The technique, which is described in Physical Review Letters, could provide insights into poorly understood highly correlated systems such as high-temperature superconductors. It could also help physicists understand exotic phenomena such as many-body localization, glassy dynamics and Efimov resonances.
Kate Wyness is studying for a PhD, researching nuclear “sludge”. She’s working on a probe that could delve down into it, characterize it and help find solutions to the challenges of dealing with nuclear waste at Sellafield nuclear plant in the UK. This short film for Physics World’s Faces of Physics series reveals what life is like for Kate during her PhD programme at the University of Bristol. The experimental work Kate is engaged in brings opportunities to develop practical skills, but also brings challenges given the complexity and highly regulated nature of the nuclear industry.
“A nuclear-powered PhD” has been produced by Andrew Glester and Ben Cowburn, who document Kate’s experiences inside the lab as part of a team of applied researchers. The film also shows what life is like for an early-career researcher living in Bristol, a city with a strong maritime history that has more recently developed into a thriving cultural hub. Kate explains how she was drawn to the city’s sense of community – a factor that Kate also values in her approach to physics.
Physics World’s Faces of Physics series is a collection of short films about the lives of people working in physics, exploring their motivations and the impact of their work. By telling personal stories, we hope to show that physics is an ordinary activity that can lead to an extraordinary array of careers. Earlier films in the series profiled an engineer working in green energy, a physics teacher in New York and a Mexican astronomer with a passion for photography.
To find out more about the social side of physics, take a look at the March 2016 issue of Physics World, a special edition about diversity issues in physics. Find out how to access that issue here.
The first section was about imaging techniques, including structured-illumination imaging of bacteria from Suliana Manley and the full automation of super-resolution/single-molecule microscopy from Masahiro Ueda. Read all about it here.
Mechanobiology was the next topic I covered; Pakorn Kanchanawong showcased his new lab’s work on the nanoarchitecture of focal adhesions. His new work is all about linking function to nanoscale changes in the make-up of these structures. Also presented was work using DNA-based tension sensors, and on the role of mechanics in the regulation of the bacterial flagellum.
Jumping ahead to the final day, we covered new sensors for metal ions in live cells from Amy Palmer (30% of proteins require metal ions to function), as well as an acid-resistant fluorescent protein compatible with super-resolution imaging from the Takeharu Nagai lab: keep your eyes peeled for rsGamillus in the coming months.
My last post of the meeting covered more from the Nagai lab – some amazing work on singularities, events in cells or cell populations that originate in a single place and spread to form a group decision. His examples of spiral-wave cAMP signalling in social amoeba are mesmerising. Finally, Sua Myong gave us insight into the possible reason that microRNAs have mismatches between strands, and how they reduce efficiency in the swap from DICER to RISC.
Thanks, all. For more, and to hear from the other contributors, simply go here.
The original version of this blog was posted on KCL Science
A distant dead galaxy observed by NASA’s Hubble Space Telescope has astronomers questioning their understanding of how massive galaxies form and evolve. Until now, it was assumed that dead galaxies – those that no longer produce stars – in the early universe are elliptical and maintain that shape as they evolve. Meanwhile, disc-shaped, spiral galaxies usually contain young stars and undergo star formation. But, galaxy MACS 2129-1 calls this theory into question. MACS 2129-1 is a fast-spinning, disc-shaped galaxy three times as massive as the Milky Way but half the size, and it stopped forming stars a few billion years after the Big Bang. The finding surprised Sune Toft from the University of Copenhagen in Denmark and colleagues, as it indicated that some of the earliest dead galaxies must somehow evolve from Milky Way-like discs to giant elliptical galaxies, changing not just their structure, but also the motion of their stars. Toft suggests this probably happens through mergers. “If these galaxies grow through merging with minor companions, and these minor companions come in large numbers and from all sorts of different angles onto the galaxy, this would eventually randomize the orbits of stars in the galaxies,” explains Toft. “You could also imagine major mergers. This would definitely also destroy the ordered motion of the stars.” The study is presented in Nature and the researchers hope that the upcoming James Webb Space Telescope will provide further insights.
Physicists rupture a photon dam
The optical equivalent of water surging through a ruptured dam has been created by physicists in France and Italy. When intense light travels through a medium such as an optical fibre, the light can modify the optical properties of the medium. This can create an effective interaction between photons in the fibre, causing them to behave like molecules in a fluid. Now, Gang Xu and colleagues at the University of Lille and Stefano Trillo at the University of Ferrara have used this effect to mimic what happens when a dam suddenly breaks and water is allowed to flow freely through the breach – a well-studied phenomenon in fluid mechanics. Their experiment begins with a continuous wave of laser light flowing through a fibre, which represents the flow over the dam before it breaks. The team then increases the laser power sharply in about 25 ps to simulate the surge of water that occurs after a dam is burst. Careful monitoring of the light emerging from the fibre reveals characteristic shock waves, which are also seen in dam breaks. If the jump in power is above a certain threshold, the troughs in the shockwaves are so low that they contain no light at all – something that is not seen in water-dam breaks. Writing in Physical Review Letters, the team says that its set-up could be used to study other fluid-like behaviours of light including the emergence of rogue waves.
UK government unveils Space Industry Bill
Surrey Satellite Technology Limited is a leader in the UK’s successful space industry. (Courtesy: SSTL)
The UK government will introduce a Space Industry Bill in the current session of parliament. The Conservative government says that the purpose of the bill is to “boost the economy, British business, engineering and science by making the UK the most attractive place in Europe for commercial spaceflight”. A key aim of the bill is to allow space missions to be launched from UK soil. This is not currently possible because the country has no regulatory framework that covers operational insurance, indemnity and liability associated with spaceflight. The bill proposes new government powers to license and regulate commercial spaceflight including rockets, spaceplanes, satellites and spaceports. New security powers to protect spaceflight from unauthorized access and interference are also included in the bill. The UK space industry has enjoyed 8% annual growth over the past decade and is currently worth about £13.7 billion – much of that coming from the production of small satellites. Today, British companies have about 6.5% of the global space market and the government hopes to boost this to 10% by 2030.
Metal chalcogenides like MoTex are used in a variety of electronic devices because of their intrinsic semiconducting properties, and because the exact nature of their electronic structure can be tuned from semiconducting to metallic depending on the atomic arrangement. Scientists from the University of Texas at Dallas have discovered a way of inducing this electronic transition simply by heating the 2H-MoTe2 semiconducting phase in a vacuum to produce metallic Mo6Te6 nanowires. This simple synthesis technique could help the development of a new generation of tunable semiconductor devices.
Semiconducting materials are used in most electronic devices, and the ability to tune their properties through simple techniques is a holy grail of materials synthesis. By identifying a new, conducting nanowire phase in this material family, lead researchers Robert M Wallace and Moon J Kim have brought these materials a step closer to integration in MoTex-based electronics.
Making the nanowires
Wallace and Kim observed the phase transition from layered 2H-MoTe2 to the Mo6Te6 nanowire (NW) phase after heating 2H-MoTe2 to approximately 450 °C in vacuum. The new NWs were stable at room temperature and on reheating, because the Te/Mo ratio decreases significantly to accommodate the phase transition. Other synthetic techniques induce the 2H (semiconducting) to 1T’ (metallic) transition at much higher temperatures, typically above 900 °C. Since such high temperatures were not used in this study, no 1T’ phase was observed, and the metallic character was therefore attributed to the new NW phase.
The researchers observed the nanowire phase transition by scanning transmission electron microscopy (STEM), providing beautiful images and videos of the reaction propagating through the crystals. The phase change appears to initiate at the surface of the 2H-MoTe2 crystal, where Te desorption is highest, and its extent can easily be controlled through the annealing time or temperature. The transition is atomically sharp, with a well-defined interface between the two phases. The researchers also characterized the new phase, showing that the 1D Mo–Te NWs consist of infinitely staggered Mo3Te3 units.
Looking at the electronic structure
Interestingly, scanning tunnelling microscopy (STM) and X-ray photoelectron spectroscopy (XPS) showed that the NW bundles were metallic in character, but density functional theory (DFT) simulations suggested that this may not be the case for an isolated NW. When the NWs exist in bundles, the conduction band is partially occupied, so the band gap is zero and the bundles behave as a metal. When a NW is isolated, however, the calculated band gap is approximately 0.3 eV, showing the importance of the surrounding electronic structure in the crystal. The NWs produced by this synthesis procedure exist solely in bundled clusters and were therefore only observed to be metallic.
The researchers clearly demonstrated the phase transition from the layered 2H-MoTe2 phase to Mo6Te6 nanowire bundles, which form preferentially at the surface of the crystal. They also showed that the NWs can be either semiconducting or metallic, depending on the conduction network present in a single wire compared with bundled wires.
Ever since the discovery of 2D graphene and 1D carbon nanotubes, low-dimensional materials have inspired innovation and stretched the frontiers of various fields of materials science. Metal chalcogenides are on a similar path, with first the introduction of the 2D layered metallic 1T’ phase and now the Mo6Te6 nanowires. Clearly, this work will have significant impact on future material design.