
Nanotube sensors promise smarter performance tracking

Sensors made from carbon nanotubes could offer a superior alternative to current activity-tracking technology. Devices developed at the University of California, San Diego (UCSD), are flexible enough to bend within layers of fabric, and can track specific types of motion as well as record vital signs such as temperature and heart rate. These developments are promising for real-time monitoring of patients living at home via telemedicine, and of people working in extreme environments, such as astronauts in space.

PhD student Long Wang and Professor Kenneth J Loh, researchers in UCSD’s department of structural engineering, produced the flexible sensors from carbon nanotubes (CNTs) and fabric using an ingenious, cost-effective manufacturing process (Smart Mater. Struct. 26 055018). An ink consisting of CNTs and latex is sprayed onto glass and then annealed to produce a freestanding thin-film network of nanotubes. Once two electrodes have been attached, the film is sandwiched between two layers of fabric and ironed together to produce a flexible fabric sensor.

These multipurpose sensors not only record heart rate, but they can also accurately track motion in a finger and even monitor respiration. When attached to a finger, the flexible nature of the thin film allows the sensor to bend and flex as the finger moves. This change in shape alters the electrical resistance across the film, which makes it possible to determine the bending angle.
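Converting a resistance reading into a bending angle amounts to interpolating against a calibration curve recorded at known angles. The exact calibration procedure is not described here, so the following Python sketch uses made-up resistance–angle pairs purely for illustration:

```python
# Illustrative sketch only: the calibration values below are hypothetical,
# not measurements from the UCSD study.
import numpy as np

# Film resistance (ohms) recorded at known finger-bending angles (degrees)
cal_resistance = np.array([1000.0, 1040.0, 1085.0, 1135.0, 1190.0])
cal_angle = np.array([0.0, 22.5, 45.0, 67.5, 90.0])

def angle_from_resistance(r_ohm: float) -> float:
    """Interpolate the bending angle implied by a measured film resistance."""
    return float(np.interp(r_ohm, cal_resistance, cal_angle))

print(angle_from_resistance(1100.0))  # about 52 degrees for this made-up calibration
```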

Similarly, if strapped around the torso, the film changes in shape as the chest expands during inhalation, subsequently changing the resistance. When the person breathes out, the film returns to its original shape and the resistance drops again.

The sensor can also be used to monitor skin temperature. As the film is heated, there is again a change in the electrical resistance. These changes can be calibrated using measurements taken with a thermocouple, which allows the resistance change to be converted to the temperature of the skin.
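The thermocouple calibration described above can likewise be expressed as a simple fit: record the film’s resistance alongside the thermocouple temperature, fit the relationship, then use it to convert later resistance readings into skin temperature. A minimal sketch assuming a roughly linear response, with hypothetical numbers rather than data from the study:

```python
# Illustrative sketch only: hypothetical readings, not data from the UCSD study.
import numpy as np

resistance = np.array([1200.0, 1185.0, 1170.0, 1156.0, 1141.0])  # ohms
thermocouple_temp = np.array([28.0, 30.0, 32.0, 34.0, 36.0])     # degrees C

# Fit a linear calibration between film resistance and thermocouple temperature
slope, intercept = np.polyfit(resistance, thermocouple_temp, 1)

def temperature_from_resistance(r_ohm: float) -> float:
    """Convert a film resistance reading into an estimated skin temperature."""
    return slope * r_ohm + intercept

print(round(temperature_from_resistance(1160.0), 1))  # about 33.4 C for this made-up calibration
```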

Sensors like this are useful beyond the realms of logging the stats of your run or cycle ride. It’s still early days, but Wang and Loh have successfully combined an easy fabrication route with the versatility needed to take a number of important measurements. Their results show that the technology is on track to be used extensively for easy, non-invasive, real-time monitoring of individuals in a variety of environments.

Flash Physics: How to 3D print glass, more neutrons from laser fusion, cash for UK tokamak

Tiny glass castles and pretzels made by 3D printing

Researchers in Germany have created tiny glass pretzels and castles using 3D printing. Although glass plays a vital role in our day-to-day lives, it is notoriously difficult to shape. Making large glass objects requires high temperatures for melting and casting, and microscopic features involve the use of hazardous chemicals. As a result, modern manufacturing methods such as 3D printing have not been used for glass – but now a team at the Karlsruhe Institute of Technology (KIT) in Germany have found a solution that uses a readily available printing setup. By mixing a curable monomer with silicon dioxide powder, Bastian Rapp and colleagues created a nanocomposite mixture dubbed “liquid glass”, which becomes solid under ultraviolet (UV) light. The team used the liquid glass as “ink” in a stereolithography 3D printer – a standard setup that uses laser light (in this case UV light) to solidify the printed structure. The resulting 3D composite was then heated to 1300 °C to convert it into fused silica glass. Using this method, Rapp and team were able to create smooth and transparent structures with micron-sized features, including a microfluidic chip, a honeycomb structure, a castle and a pretzel. Coloured glass could also be created by simply incorporating metal salts into the liquid glass ink. As glass has a number of useful properties – including optical transparency, thermal and electrical insulation and chemical resistance – it is an important material in industry and scientific research. Therefore, being able to easily create macro- and micro-structures through modern techniques opens up possible new manufacturing routes and materials. The method is presented in Nature.

Laser fusion produces more neutrons

Simulation of laser light interacting with the hohlraum

Physicists in China have unveiled a new way of creating neutrons by firing a powerful laser at a hydrogen target. Capable of producing 100 times more neutrons than current laser techniques, the method has been developed by Jie Liu of the Institute of Applied Physics and Computational Mathematics in Beijing and colleagues. It involves using a laser pulse to heat and compress a capsule (or hohlraum) containing deuterium. The intense heat causes pairs of deuterium nuclei to fuse in a process that gives off neutrons. Called inertial confinement fusion, the technique has already been investigated as a potential source of neutrons. However, the inherent instability of the process has meant that previous schemes were inefficient or unreliable sources of the particles. Liu and colleagues improved the stability of the process by using a new scheme called spherically convergent plasma fusion. This uses a spherical hohlraum with a thin gold wall that is coated on the inside with polystyrene containing deuterium. The laser pulse drives the deuterium to the centre of the hohlraum, where fusion occurs. Using a 6.3 kJ laser pulse with a duration of about 2 ns, the team was able to produce around one billion neutrons per pulse – about 100 times more than previous methods could achieve. Writing in Physical Review Letters, Liu and colleagues point out that using a target containing deuterium and tritium could boost the neutron output by an additional factor of 1000 – and even more if a higher-power laser is used.

Cash boost for UK’s MAST tokamak

the MAST tokamak

The Mega Amp Spherical Tokamak (MAST) at the Culham Centre for Fusion Energy (CCFE) in Oxfordshire has received £21m for a series of upgrades to study the best way to extract waste fuel from the plasma it contains. MAST has a spherical plasma, shaped much like a cored-out apple, whereas a conventional tokamak such as ITER has a doughnut-shaped plasma. A spherical tokamak allows for a much more compact – and cheaper – device and it is hoped that this kind of tokamak could one day be used as a potential fusion reactor. MAST is nearing the end of a £45m upgrade that will see the tokamak given a new “divertor”, which extracts the waste fuel from fusion. Called “Super-X”, it is hoped that the new divertor could even be used in a future demonstration fusion plant – dubbed DEMO. The new cash, from the European Fusion Research Consortium and the UK’s Engineering and Physical Sciences Research Council, will be used to increase the tokamak’s plasma heating power as well as upgrade the plasma control systems and add extra plasma diagnosis equipment. These improvements are set to be introduced over the next five years.

 

  • You can find all our daily Flash Physics posts in the website’s news section, as well as on Twitter and Facebook using #FlashPhysics. Tune in to physicsworld.com later today to read today’s extensive news story on the origin of binary black holes.

The long road to ignition

In an alternative universe, the quest for fusion energy passed a major milestone sometime in the closing months of 2012. That was the year scientists at the US National Ignition Facility (NIF) conducted a high-profile campaign to achieve the facility’s central purpose: ignition, a controlled nuclear fusion reaction that produces more energy than is required to sustain it. Reaching this point would have capped decades of research on inertial confinement fusion (ICF), which uses lasers or magnetic fields to heat and compress nuclear fuel to high temperatures and pressures. Ignition would also have been something of a vindication for NIF itself, banishing memories of the construction delays and cost overruns that plagued the multibillion-dollar laser before it opened at the Lawrence Livermore National Laboratory (LLNL) in 2009.

Early in 2012, leaders of NIF’s so-called “National Ignition Campaign” felt they had reasons to be optimistic. Computer models predicted that at laser energies of around 1.7 MJ, the tiny capsule of deuterium-tritium fuel at the heart of the NIF target chamber would begin releasing neutrons and alpha particles in unprecedented numbers, ushering in the conditions required for ignition. NIF was designed to deliver 1.8 MJ. Surely, ignition was just around the corner.

It wasn’t. In the real universe, the National Ignition Campaign ended with a whimper, not a bang. The reasons for the failure were manifold. In mid-2016 the National Nuclear Security Administration (NNSA) published a review describing them in detail. Computer codes and models predicting high energy gain from the fuel capsules were “not capturing the necessary physics”, the review’s authors wrote. Experimental efforts were “frustrated by the inability to distinguish key differences” between laser shots, with similar set-ups producing scattered results. Most damningly, the review cited a “failed approach to scientific program management” based on “circumvent[ing] problems rather than understanding and addressing them directly”. As the report’s authors concluded, “The question is if the NIF will be able to reach ignition in its current configuration and not when it will occur.”

Omar Hurricane, chief scientist in NIF’s inertial confinement fusion programme, is frank in his assessment of the failed campaign. “It was kind of like trying to swing for a home run and missing,” he says. A soft-spoken, pragmatic physicist who was working elsewhere within LLNL at the time, Hurricane explains that success would have required “an incredible amount of control” over many different parameters.

Chain of events

Consider the chain of events that must happen before ignition can occur at NIF. During a shot, the combined energy of the facility’s 192 laser beams is directed onto a hollow target about one centimetre in height and a few millimetres in diameter. This target, known as a hohlraum, contains helium gas, and at its centre is a tiny capsule filled with deuterium–tritium fuel. As the lasers heat the hohlraum, the gold coating on its interior surface begins to give off X-rays. These X-rays bathe the fuel capsule with radiation, heating it and causing material on the outside of the capsule to rocket off at speeds of hundreds of kilometres per second. Momentum conservation then forces the rest of the capsule to implode, and if the density and temperature of the imploding capsule become high enough the deuterium and tritium nuclei will fuse. “The idea for ignition is that we can actually get a propagating burn wave in the target – ‘lighting a match’ that can burn spherically outward and release more fusion energy than we put in to get it started,” NIF director Mark Herrmann explains.

The key to making that happen – and the reason it hasn’t yet – can be summed up in a single word: symmetry. The laser field outside the hohlraum is not perfectly symmetric. Neither is the X-ray field produced inside. Wherever such asymmetries exist, the pressure applied to the fuel capsule is slightly different. “You can think of it as like trying to squeeze a soccer ball down to something the size of a pea,” Herrmann says. “If you squeeze harder on one side or another you get a lima bean, or a string bean. You don’t get a pea.”

Photo of a hohlraum – a tiny gold-coated cylinder with a thin tube extending from its curved side – held between a person’s fingertips in front of their eye

One notable cause of asymmetric “squeezing” is a phenomenon known as cross-beam energy transfer (CBET). When laser beams overlap in the presence of a plasma, one beam can, in effect, “steal” energy from another. “You think you’ve designed the laser illumination pattern to give you a symmetric drive, and the beams kind of conspire to change that for you,” explains Warren Garbett, a plasma physicist at the UK’s Atomic Weapons Establishment who has served on review panels for ICF research. As well as creating asymmetries, CBET can also divert energy from an incoming beam to an outgoing one. Such processes are “very difficult to predict, and also very difficult to control”, says Jerry Chittenden, a plasma physicist at Imperial College London and a co-author on the NNSA report. “It’s a bit like in the film Ghostbusters: bad things happen when the beams cross.”

The behaviour of fuel capsules under pressure is also tricky to control. As weak spots in the capsule begin to give way, material flows away from them. This distorts the capsule and can even tear it to pieces before fusion has a chance to get started. The process that turns small fluid asymmetries into big ones is known as a Rayleigh–Taylor instability, and Hurricane notes that its relevance has been understood since the early days of ICF research. What wasn’t understood, he adds with a wry smile, was just how much of a barrier it would prove.

Seeking symmetry

Since the end of the ignition campaign, research at NIF has shifted away from trying to achieve ignition directly. Instead, the focus is on understanding sources of asymmetry in capsule implosions and finding ways to minimize them. One key development was to put slightly more energy into the beginning, or “foot”, of the laser pulse. The resulting implosions are less prone to Rayleigh–Taylor instabilities, and in 2014 the “high-foot” campaign reported a notable success: more energy was produced through fusion than was applied to the fuel. “This is open to interpretation, but the results seem to indicate that the ignition process is at least starting,” Chittenden says. In some experiments, he points out, the pressure at the hottest point in the deuterium–tritium fuel came within a factor of two of the value required for ignition. Unfortunately, heating the capsule more at the beginning of the pulse makes it harder to compress later, so in effect, the high-foot strategy trades greater implosion symmetry for reduced maximum pressure. Maximizing the energy yield from a shot will therefore require a change of tactics.

As the high-foot design and other modifications have improved the symmetry of capsule implosions, other causes of asymmetries have emerged. One culprit is the gossamer-fine membrane that holds the fuel capsule in place inside the hohlraum. Recent experiments have shown that as X-rays hit this tent-like structure, it explodes, creating a pulse of pressure at the point where material strikes the fuel capsule. Another source of asymmetry is the tiny glass tube used to fill the capsule with fuel. Although the tube is only about 10 µm in diameter, Hurricane says they can still see its effects in their data. Fixing these problems will likely involve a combination of engineering and physics, with engineers working to make the tent and fill tube less intrusive while physicists search for ways of compensating for their effects. “It’s a systematic process,” Hurricane says. “We’re digging deeper and deeper into the implosion. Every time we do that, we see an improvement, but there’s also the chance of seeing yet another problem. We’re just trying to knock them off one at a time.”

Other labs, other prospects

Between now and 2020, the plan is for researchers at NIF to map out the effects of different laser pulse shapes; different densities of helium gas inside the hohlraum; and new materials for both hohlraum and fuel capsule. The goal, as defined in the NNSA’s latest framework for inertial confinement fusion research, is “to determine the efficacy of NIF to achieve ignition and, if this is found to be improbable, to understand the reasons why”.

If the answer turns out to be negative, one possible way forward would be to reconfigure NIF so that its lasers heat fuel capsules directly, rather than via a hohlraum. Cutting out the intermediate step would increase the energy applied to the capsule by a factor of 10, albeit at the probable cost of increased CBET and other laser-related sources of instability. Experiments on this “laser-driven direct drive” method are currently under way both at NIF and at smaller-scale laser facilities such as the University of Rochester’s Laboratory for Laser Energetics (LLE). Converting NIF to direct drive is “something we’re considering”, Hurricane says, but he adds that doing so would require new funding, since NIF’s lasers are not designed to produce a spherically symmetric laser field. Also, with current technology, results at the LLE suggest that direct-drive fusion experiments at a NIF-scale facility would come within a factor of two of the pressures required for ignition. This, as Chittenden points out, is similar to what NIF can do already.

Meanwhile, scientists in France are watching NIF’s progress with interest. The design of the Laser Mégajoule (LMJ) facility near Bordeaux is similar to NIF’s – Jean-Luc Miquel, programme leader for laser-plasma experiments at the LMJ, calls them “brother lasers” – and it will be capable of similar energies once it is fully operational. Currently, only 16 of the LMJ’s planned 176 beams are in use, with an additional 40 scheduled to come online by 2019. The final completion date will be sometime at the beginning of the next decade, with government funding and the needs of the French nuclear-weapons programme dictating the pace of progress. (Both the LMJ and NIF perform experiments on weapons physics in addition to ICF research, and the French programme draws little distinction between the two.) Currently, work at the LMJ is focused on lower-energy studies of hohlraum energetics, radiation transport and hydrodynamic instabilities, with more topics to be added as additional beams become available. “For us, ignition is the ultimate goal of the LMJ, but there is a lot of physics that can be addressed before then,” Miquel says.

At other laboratories, ignition is even further away. A variant of ICF that uses magnetic fields to compress fuel is being pursued at Sandia National Laboratories in New Mexico, but reaching ignition via this method would probably require a next-generation facility. An alternative laser-based approach, known as “fast ignition”, is being studied at the LLE and at Osaka University in Japan, with larger facilities planned that would determine whether promising results at low energies translate into ignition at high energies. Plans for even bigger lasers, on the 3–4 MJ scale, have also been put forward in both Russia and China. So far, however, these facilities exist only on paper.

For the moment, NIF remains the world’s best hope for ignition, and while the fevered expectations of 2012 have dissipated like a vaporized hohlraum, there is plenty of optimism left within the ICF community. “Looking at the history of ICF, quite often facilities have been built expecting to get ignition, and then more physics has been discovered, and they find out they are some way away,” Garbett says. “But NIF is really on the cusp.” In his own calm, deliberate way, Hurricane is also upbeat. “The fact that we can actually see problems that we have a chance of fixing gives me hope that we can make progress,” he says. “That’s a much better situation to be in than ‘It’s still not working, we can’t see anything wrong, and we don’t know what to do about it.’ It’s not going to be easy, it’s going to be a lot of work, and we’re going to have to fight for every bit of progress. But scientists like solving problems – it’s part of the job.”

Space, time and spooky action

Photo of Albert Einstein and Niels Bohr strolling down the street

Albert Einstein’s persistent opposition to quantum mechanics is a familiar, if still somewhat surprising, fact to all physicists. It was first voiced in 1926 in his famous comment written in a letter to Max Born that “Quantum mechanics is certainly imposing. But an inner voice tells me that it is not yet the real thing. The theory says a lot, but does not bring us any closer to the secrets of the ‘old one’. I, at any rate, am convinced that He is not playing dice.”

From then, until Einstein’s death in 1955 – as he struggled without success to find a unified theory of electromagnetism and gravitation – his opposition never wavered, and made him an increasingly isolated figure in physics. “To Einstein, probabilities were just a sign of gaps in our understanding,” David Bodanis concisely observes in his latest book – Einstein’s Greatest Mistake: the Life of a Flawed Genius – the title of which refers to this opposition.

Less established are the reasons, both intellectual and personal, for Einstein’s resistance. According to Bodanis, they lie in the history of the cosmological constant, unwillingly introduced by Einstein in 1917 into his 1915 field equations of general relativity. Added as a fudge factor with a repulsive effect to balance the attractive effect of matter, the cosmological constant was meant to produce a static solution for the universe: a concept that in 1917 seemed evidently correct to astronomers. When subsequent observations of galaxies by Edwin Hubble and Milton Humason proved that the universe is actually expanding, Einstein willingly abandoned the cosmological constant around 1931 and reverted to his original field equations. He even, apparently, referred to the cosmological constant as “the greatest blunder of my life” (a comment quoted by Bodanis without reference to its somewhat doubtful source). But as a result of his volte-face, says Bodanis, Einstein became increasingly convinced of the superiority of his intuition over experiment – a view that, by the 1930s, hardened into dogmatic opposition to quantum mechanics.

Telling support for this stance, oddly unmentioned by Bodanis, comes from an Einstein lecture, “On the method of theoretical physics”, delivered at the University of Oxford in 1933, not long before he emigrated to the US. Here Einstein controversially stressed the importance of mathematics over experiment in devising physical theories by saying that “Experience can of course guide us in our choice of serviceable mathematical concepts, [but] it cannot possibly be the source from which they are derived; experience of course remains the sole criterion of the serviceability of a mathematical construction for physics, but the truly creative principle resides in mathematics. In a certain sense, therefore, I hold it to be true that pure thought is competent to comprehend the real, as the ancients dreamed.”

Nobel laureate Steven Weinberg would appear to agree with Bodanis. In “Einstein’s search for unification”, an essay Weinberg contributed to my book, Einstein: a Hundred Years of Relativity, he concludes that because general relativity had been guided by an existing mathematical formalism – the Riemann theory of curved space – perhaps Einstein had acquired “too great a respect for the power of pure mathematics to inspire physical theory. The oracle of mathematics that had served Einstein so well when he was young betrayed him in his later years”.

The most original aspect of Bodanis’ book is its attempt to explain difficult concepts in ordinary language, without, of course, resorting to mathematics. For instance, Bodanis compares curved space in general relativity to two Finnish skaters who head for the North Pole, using compasses to carefully skate in parallel, but are inevitably “pulled” together until they crash into one another at the pole. He also pictures Heisenberg’s understanding of uncertainty at the subatomic level as the experience of an audience at a 1920s Berlin operetta. The audience can work out general patterns among the actors from the type of clothes they change into for each act, without knowing exactly what the actors are doing backstage. “Heisenberg would have been convinced that what had happened backstage was inherently a blur,” suggests Bodanis, whereas from Einstein’s perspective, “each individual actor had to be changing his or her costume”.

Less original, though also engagingly integrated with the book’s physics, are its biographical elements. These cover not only Einstein but also others such as his second wife Elsa Löwenthal, his lifelong friend Michele Besso and his sparring partner Niels Bohr. His undergraduate physics teacher in Zürich, Heinrich Weber, whom Einstein rightly regarded as well behind the scientific times, told him: “You are a smart boy, Einstein, a very smart boy. But you have one great fault: you do not let yourself be told anything.” For much of Einstein’s life, this self-confidence was without question a vital strength, but in his later years, argues Bodanis, it became a handicap.

Yet, as the essentially respectful Bodanis admits, even Einstein’s opposition to quantum mechanics could be fruitful. His 1935 so-called EPR paper, “Can quantum-mechanical description of physical reality be considered complete?”, written with Boris Podolsky and Nathan Rosen (neither of whom is named by Bodanis), provoked a fellow-sceptic, Erwin Schrödinger, to come up with the technical term “entanglement” and his tantalizing “cat” paradox.

Schrödinger, unlike Einstein, eventually accepted quantum mechanics as a profoundly useful method of calculation. However, the debates about its correct physical interpretation launched by the great, if flawed, Einstein, are very far indeed from being conclusively resolved. “What is quantum theory, a century after its birth?” asks Carlo Rovelli in his recent book Reality Is Not What It Seems: the Journey to Quantum Gravity. “An extraordinary dive deep into the nature of reality? A blunder that works, by chance? Part of an incomplete puzzle? Or a clue to something profound regarding the structure of the world, which we have yet to fully decipher?”

  • 2016 Little, Brown 304pp £20.00hb £14.99pb

Super Earth ‘is best place to look for signs of life’

An exoplanet orbiting a nearby red-dwarf star may be the “best place to look for signs of life beyond the solar system”, according to the team of astronomers that discovered the rocky world. The exoplanet, called LHS 1140b, is located just 39 light-years from Earth, and its density suggests that it has a rocky surface and an iron core. LHS 1140b is also in the habitable zone of its star, and the astronomers say that it could have an atmosphere.

The new exoplanet orbits LHS 1140, which is a faint red dwarf that is much smaller and cooler than the Sun. LHS 1140b was discovered by a team led by Jason Dittmann of the Harvard-Smithsonian Center for Astrophysics. The researchers used the MEarth facility in Arizona to detect dips in the starlight from LHS 1140, which occur when the exoplanet passes between the star and Earth. Then the European Southern Observatory’s HARPS instrument was used to measure the mass and density of LHS 1140b.

The exoplanet takes 25 days to orbit its star and, despite being 10 times closer to LHS 1140 than Earth is to the Sun, it receives only about half the “sunlight” that Earth does. Even with this reduced stellar input, LHS 1140b is in the habitable zone of its star, which means that it could support life.
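Those two figures are consistent with the inverse-square law if LHS 1140 shines with roughly 0.5% of the Sun’s luminosity – a value typical of such red dwarfs, assumed here as a round number for illustration rather than taken from the study:

$$\frac{F_{\mathrm{planet}}}{F_{\oplus}} = \frac{L_\star/L_\odot}{(a/1\,\mathrm{AU})^2} \approx \frac{0.005}{(0.1)^2} = 0.5 .$$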

Magma oceans

When red-dwarf stars are young they are known to emit radiation that would damage the atmosphere of an exoplanet. However, Dittmann and colleagues believe that LHS 1140b is large enough to have sustained a magma ocean on its surface for millions of years. Such an ocean could have fed steam into the atmosphere, replenishing its stock of water and ensuring that its atmosphere survived the early radiation bombardment.

“The present conditions of the red dwarf are particularly favourable – LHS 1140 spins more slowly and emits less high-energy radiation than other similar low-mass stars,” says team member Nicola Astudillo-Defru from Geneva Observatory, Switzerland.

Exciting exoplanet

Dittmann adds: “This is the most exciting exoplanet I’ve seen in the past decade. We could hardly hope for a better target to perform one of the biggest quests in science – searching for evidence of life beyond Earth.”

Astronomers will now use the Hubble Space Telescope to try to work out how much life-destroying radiation is currently being showered upon LHS 1140b. Studies using future instruments such as the Extremely Large Telescope could shed further light on the exoplanet’s atmosphere and its capacity to sustain life. LHS 1140b is described in Nature.

Flash Physics: Metallic space fabric, Standard Model deviation at LHCb, accelerator milestone at European XFEL

Metallic “space fabric” is made by 3D printing

NASA scientists have developed a chainmail-like “space fabric” using 3D printing. Raul Polit-Casillas at NASA’s Jet Propulsion Laboratory (JPL) in the US and colleagues design advanced metallic woven textiles for applications in space, such as spacecraft shielding or astronaut suits. The latest prototype comprises small silver squares strung together, creating a flexible fabric reminiscent of chain mail. To create it, Polit-Casillas and team used additive manufacturing – more commonly known as 3D printing. The technique involves depositing layers of material to build up the desired object. Yet rather than just creating the fabric’s shape, the researchers were able to also incorporate function during printing. “We call it ‘4D printing’,” explains Polit-Casillas. “If 20th century manufacturing was driven by mass production, then this is the mass production of functions.” Consequently, the fabric is reflective on one side but absorbs light on the other, providing a means of passive heat management. It also remains strong despite being flexible and foldable. As well as using such fabrics in space, the researchers hope astronauts in the future will be able to manufacture a range of functional materials while in space. “Astronauts might be able to print materials as they’re needed – and even recycle old materials, breaking them down and reusing them,” says Polit-Casillas. “Conservation is critical when you’re trapped in space with just the resources you take with you.”

Has LHCb spotted a deviation from the Standard Model?

Photograph of LHCb

A possible deviation from the Standard Model of particle physics has been seen in a study of how B0 mesons decay in the LHCb experiment on the Large Hadron Collider (LHC) at CERN. LHCb physicists looked at how the B0 decays to a K* meson via two different processes – one involving the production of a muon and an antimuon, and the other the production of an electron and a positron. The Standard Model – specifically, the concept of lepton universality – predicts that both of these processes should occur with roughly equal frequencies. However, new analysis of data acquired by LHCb during the first run of the LHC in 2011–2012 suggests that muon/antimuon production is less likely to occur than electron/positron production – with a statistical confidence of 2.5σ. In 2014, LHCb physicists published a similar test of lepton universality in the decay of the B+ meson. They also found that muon/antimuon production is less likely to occur than electron/positron production – with a statistical confidence of 2.6σ. While these observations are far off the 5σ required for a “discovery” in particle physics, LHCb researchers hope that analysis of data taken in the second run of the LHC will push the result above the discovery threshold. The findings were presented yesterday at CERN by Simone Bifani of the University of Birmingham, and the slides and a video are available. “We have the potential to make the first observation of physics beyond the Standard Model at the LHC,” says Bifani. Tim Gershon of the University of Warwick adds: “The mood is one of cautious excitement – no one is popping any champagne corks yet.”
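To put the quoted significances in context, a σ value can be converted into the one-sided probability of seeing at least as large a fluctuation if the Standard Model expectation were exactly right. A minimal sketch using SciPy (a standard statistical conversion; nothing here uses LHCb data):

```python
# Convert a significance quoted in sigma into a one-sided p-value: the probability
# of a statistical fluctuation at least this large if the Standard Model prediction
# were exactly correct.
from scipy.stats import norm

for sigma in (2.5, 2.6, 5.0):
    p = norm.sf(sigma)  # upper-tail probability of a standard normal distribution
    print(f"{sigma} sigma -> one-sided p = {p:.1e}")

# 2.5 sigma -> p ~ 6e-03 (roughly 1 chance in 160)
# 2.6 sigma -> p ~ 5e-03
# 5.0 sigma -> p ~ 3e-07 (the conventional "discovery" threshold)
```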

Accelerator milestone for European X-ray Free Electron Laser

Photograph of the European XFEL beam line

Engineers working on the European X-ray Free Electron Laser (European XFEL) in Hamburg, Germany, have managed to send electrons down the facility’s 2.1 km-long superconducting linear accelerator. The commissioning of the accelerator, which is the world’s largest of its kind, is a major step towards the completion of the facility. Engineers will now spend the next month increasing the energy of the electron beam before passing the electrons through “undulators”, where they produce coherent X-ray beams. When fully complete later this year, the European XFEL will generate pulses of X-rays 27,000 times per second, with each pulse lasting less than 100 fs (10⁻¹³ s), allowing researchers to create “movies” of processes such as chemical bonding and vibrational energy flow across materials.
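Taken together, those figures imply a tiny duty cycle – even at the full repetition rate, the facility will emit X-rays for only a few nanoseconds in every second of operation (an upper bound, since each pulse is shorter than 100 fs):

$$27\,000\ \mathrm{pulses\,s^{-1}} \times 100\ \mathrm{fs} \approx 2.7\ \mathrm{ns\ of\ emission\ per\ second} ,$$

a duty cycle of order $3\times10^{-9}$.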

 

  • You can find all our daily Flash Physics posts in the website’s news section, as well as on Twitter and Facebook using #FlashPhysics. Tune in to physicsworld.com later today to read today’s extensive news story on a nearby exoplanet.

On-chip nanowire laser delivers on data

Semiconductor nanowire lasers are promising ultracompact light sources for miniaturized optical processing and sensing, but their efficiency is limited by the difficulty of confining light in a structure much smaller than its wavelength. By using a silicon photonic crystal to trap light in a semiconductor nanowire, researchers at NTT Basic Research Laboratories in Japan have now turned the chip itself to their advantage. They have shown that a photonic crystal/nanowire hybrid can sustain telecom-band lasing stable enough to transmit a high-frequency data signal (APL Photonics 2 46106), and believe that the platform’s advantages for component integration could enable them to build an on-chip photonic network.

Light confinement, which is crucial to laser oscillation, becomes less effective when the nanowire diameter is smaller than half the light wavelength. Masato Takiguchi and colleagues needed lasing at infrared wavelengths (here 1342 nm), but wanted to keep the nanowire diameter less than a tenth of that for compact integration (around 100 nm). In most nanowire lasers to date, which have operated around the sub-micron visible wavelength range (400–700 nm), the confinement challenge has not been so extreme.
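Putting numbers to that rule of thumb shows how severe the mismatch is here:

$$\frac{\lambda}{2} = \frac{1342\ \mathrm{nm}}{2} \approx 671\ \mathrm{nm} \gg 100\ \mathrm{nm} ,$$

so the roughly 100 nm wire is about 13 times thinner than its emission wavelength, far below the half-wavelength threshold for effective confinement.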

The researchers tackled the problem using the hybrid cavity they first presented in 2014 (Nat. Mater. 13 279), in which a single InAsP/InP nanowire is carefully placed in a groove on a silicon photonic crystal. Such photonic crystals contain periodic holes that slow down and trap light, guiding it into the nanowire and enabling lasing in the infrared. This is only possible because these longer wavelengths match the transparency window of silicon – which also explains why the 1260–1625 nm band is the mainstay of current silica optical fibre communications.

To achieve high-frequency data transmission, the NTT researchers needed to show that they could sustain stable continuous-wave lasing, which is crucial for subsequent modulation to represent binary information. Tests in which the nanolaser was pumped by another laser modulated with a pseudorandom bit sequence showed that it responds fast enough to transmit 1s and 0s that could be distinguished at 10 billion bits per second, a typical fibre-optic communication speed.
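The article does not specify the test pattern, but pseudorandom bit sequences of this kind are conventionally generated with a linear-feedback shift register; the PRBS-7 example below is an assumption made purely for illustration:

```python
# Generate a PRBS-7 test pattern with a linear-feedback shift register
# (polynomial x^7 + x^6 + 1, period 127) - the sort of pseudorandom bit
# sequence commonly used to switch a pump laser on and off in
# data-transmission tests.
def prbs7(seed: int = 0x7F, nbits: int = 127) -> list[int]:
    state = seed & 0x7F
    bits = []
    for _ in range(nbits):
        newbit = ((state >> 6) ^ (state >> 5)) & 1  # XOR of the two tap bits
        bits.append(state & 1)                      # output the least significant bit
        state = ((state << 1) | newbit) & 0x7F      # shift and feed back
    return bits

pattern = prbs7()
print(sum(pattern), "ones in a", len(pattern), "bit period")  # 64 ones per 127-bit period
```

Checking that a receiver can still tell the 1s from the 0s of such a pattern at 10 Gbit/s is what demonstrates that the laser’s output is stable and fast enough to carry real data.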

Unlike the pulsed lasing demonstrated by the NTT research group earlier this year (ACS Photonics 4 355), continuous-wave lasing requires relentless dumping of power into the small nanowire volume, worsening the effects of problematic heating. To minimize this, the researchers carried out their measurements at temperatures as low as 4 K. They also employed single-photon-sensitive techniques from their earlier work, instead of conventional telecom signal detection, to combat low signal amplitude. Future work will improve laser gain and heat management by the photonic chip, with first author Takiguchi commenting to AIP News that they’ll “aim for room-temperature current-driven lasing as well”.

The photonic crystal platform offers exciting advantages for coupling other components to the nanolaser. Having proven its data-transmission capabilities, the next target is connecting the nanowire to input/output waveguides, en route to an on-chip photonic network. “We want to demonstrate that we’re able to integrate a number of photonic devices by having different functionalities on a single chip,” concludes Takiguchi.

How hurricanes replenish their vast supply of rainwater

The mystery of how tropical cyclones deliver colossal amounts of rainwater over long periods of time may have been solved by an international team of atmospheric physicists. The team suggests that – rather than relying on ongoing evaporation to replenish rainwater – these powerful storm systems suck pre-existing moisture out of the air through which they travel.

Tropical cyclones – or hurricanes, as they are known in the North Atlantic and northeast Pacific – are capable of delivering huge amounts of rain that can do more damage than the high winds associated with the storms. The mean precipitation from a typical Atlantic hurricane, for example, is around 2 mm/h – and this rate can be sustained for days on end. What is puzzling, however, is that this is considerably faster than the typical rate of tropical oceanic evaporation. This means that a hurricane’s moisture stocks must be replenished by something other than ongoing evaporation, otherwise a typical storm would run dry within a day.
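A rough budget makes the shortfall concrete. Taking a typical tropical atmospheric column holding about 50 mm of precipitable water and oceanic evaporation of roughly 5 mm per day – round figures used here for illustration, not numbers from the study:

$$\frac{50\ \mathrm{mm}}{2\ \mathrm{mm\,h^{-1}}} \approx 25\ \mathrm{h} , \qquad \frac{5\ \mathrm{mm\,day^{-1}}}{24\ \mathrm{h\,day^{-1}}} \approx 0.2\ \mathrm{mm\,h^{-1}} ,$$

so rainfall at 2 mm/h would drain the overlying column in about a day, while local evaporation can resupply only around a tenth of what is falling out.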

Imported moisture

Traditionally, studies of the water budget of tropical cyclones have focused only on the area within 400 km of the storm’s centre – the part of a cyclone thought to receive the majority of the ocean-derived heat that powers it. In this region, local evaporation of water from the sea can account for only around 10–20% of the total rainfall. So, it has been supposed, the additional moisture must be imported from further out, up to 2000 km from the eye of the storm – well beyond the area of the storm in which rain falls. The exact mechanism that could import water vapour in this way has not been clear: pressure gradients more than a few hundred kilometres from the storm’s centre, for example, are inadequate to drive outlying moist air towards the centre.
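The change of scale matters because the moisture-gathering area grows with the square of the radius. As a purely geometric illustration,

$$\left(\frac{2000\ \mathrm{km}}{400\ \mathrm{km}}\right)^{2} = 25 ,$$

so a storm drawing vapour from out to 2000 km has access to roughly 25 times the area covered by the conventional 400 km analysis region.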

To investigate further, physicist Anastassia Makarieva of the Petersburg Nuclear Physics Institute in Russia and colleagues looked at the moisture dynamics of north Atlantic hurricanes out to 3000 km from their centre. First, the researchers considered the radial pressure distribution, relative humidity and temperature of the hurricane boundary layer, and calculated that – even at their wider scale of interest – the storm’s rainfall cannot be supported by evaporation alone.

Dry footprint

Next, the team examined North Atlantic atmospheric moisture and rainfall data from 1998 to 2015 recorded by the Tropical Rainfall Measuring Mission satellite and NASA’s Modern Era Retrospective Re-Analysis for Research and Applications programme. By comparing conditions during hurricanes with the surrounding hurricane-free periods, the researchers were able to show that hurricanes leave in their wake a “dry footprint”, in which rainfall is suppressed by up to 40%.

Given this – and the failure of evaporation to adequately explain how hurricanes refuel – the researchers propose instead that hurricanes gobble up pre-existing moisture stocks from the atmosphere as they move, with the rain potential of the hurricane being directly proportional to the storm’s velocity relative to the surrounding air flow.

“Hurricanes must move to sustain themselves,” Makarieva says, concluding: “Hence, how they move and consume the pre-existing atmospheric water vapour is key to predicting their intensity.” The researchers propose that – rather than being driven by heat extracted from the ocean – hurricanes are instead powered by releasing the potential energy of the water vapour previously accumulated in the atmosphere that they pass through.

Moisture-robbing winds

They suggest this could explain why tropical cyclones do not occur in regions like the Brazilian coast where there are persistent, landward winds that remove water vapour from over the ocean – thereby robbing the potential storms of their drive and fuel source.

Patrick Fitzpatrick, a geoscientist from Mississippi State University who was not involved in this study, comments: “Quantitative precipitation forecasting of tropical cyclones still lacks skill, and is worthy of research since these storms’ inland flash flooding is a major cause of casualties and property damage”. He believes that further investigation of these new climate budget implications – considering the local upstream conditions and storm motion – is needed.

Kevin Trenberth – a meteorologist at the US National Center for Atmospheric Research – is sceptical, however, suggesting the researchers are too idealistic in their view of hurricanes, treating them as symmetrical and two dimensional, and overlooking their size variability and spiral arm bands that bring moisture into the storm from about four times the radius of the rainfall area. The mismatch between evaporation and precipitation rates for anything greater than light rain has already been established, he says, adding: “It is correct that the moisture has to come from somewhere, and movement of the storm helps, but that does not help storms that move slowly or not at all.”

With their initial study complete, the researchers are now working to describe how hurricanes might develop over time by the condensation of water vapour.

The research is described in the journal Atmospheric Research.

Squashed quantum dots solve a multi-faceted problem

Quantum dots have revolutionized the field of optoelectronics due to their atom-like electronic structure. However, the prospect of colloidal quantum-dot lasers has long been deemed impractical due to the high energies required to induce optical gain. But recent work published in Nature and led by Ted Sargent of the University of Toronto shows that the lasing threshold in cadmium selenide (CdSe)/cadmium sulphide (CdS) core-shell colloidal quantum dots can be lowered by squashing the CdSe core via a clever ligand exchange process.

To achieve optical gain in a semiconductor laser, enough electrons and holes must be excited that the separation between their quasi-Fermi levels exceeds the band gap, so that light emitted when carriers recombine stimulates further emission rather than being reabsorbed. Colloidal quantum dots (CQDs), then, should make ideal candidates for lasing applications, as their atom-like electronic structure means that the electron and hole energy levels are easier to separate.
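Written out, this is the standard Bernard–Duraffourg condition for gain in a semiconductor:

$$E_{F_c} - E_{F_v} > E_g ,$$

where $E_{F_c}$ and $E_{F_v}$ are the electron and hole quasi-Fermi levels and $E_g$ is the band gap.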

In practice, however, the energies required to trigger optical gain in CQDs are so high that they can heat up to the point of burning. While electrons tend to occupy one energy state upon excitation, the hole that they leave behind in the valence band can populate one of eight closely spaced states. This degeneracy pushes the hole Fermi level into the band gap and increases the amount of energy required to instigate optical gain.

To overcome this issue, the researchers took advantage of the fact that CdS imposes a strain on CdSe due to a slight lattice mismatch of 3.9%. By growing an asymmetric CdS shell around a “squashed” oblate CdSe core, they were able to induce a biaxial strain that affected the heavy and light holes of the valence band to different extents, thus lifting the degeneracy.

To produce these asymmetric CQDs the group invented a technique called facet-selective epitaxy, making use of ligands that interact differently with the surfaces of CdSe. One of these ligands, trioctylphosphine sulphide (TOPS), binds weakly to the (0001) facet of CdSe and not at all to the opposite (0001̄) facet, while octanethiol interacts similarly with all CdSe surfaces. Therefore, by growing CdS on the (0001) facet with TOPS and then replacing it with octanethiol to stimulate epitaxial growth, oblate-shaped CQDs could be made throughout the entire particle ensemble with remarkable uniformity.

The resulting lasers had an unprecedentedly high performance, exhibiting a low lasing threshold of 6.4–8.4 kW cm⁻², a seven-fold reduction compared with previous attempts. They also emitted light over a narrow energy range of just 36 meV. Both of these properties can be attributed to the enhanced splitting of the valence band levels that arises due to the oblate CQD shape.

The international team of researchers has certainly proved that continuous-wave CQD lasers are possible, yet there are still some obstacles to be overcome before they are seen on the market. Most importantly, the next step will be exciting the CQDs via electrical rather than optical means, as in standard commercial lasers. Nevertheless, facet-selective epitaxy opens up a whole host of other CQD materials for lasing applications and beyond.

Flash Physics: Drawing water from dry air, material glows under stress, physicist bags economics award

Using the Sun to extract water from dry air

A new solar-powered system that can extract water from air in arid regions of the world has been unveiled by researchers in the US and Saudi Arabia. Led by Omar Yaghi of the University of California, Berkeley, and Evelyn Wang of the Massachusetts Institute of Technology, the team created the device using a metal-organic framework (MOF). The device is powered by heat from sunlight. It can harvest 2.8 litres of liquid water per kilogram of MOF per day at relative humidity levels of 20–30% – which are common in arid regions of the world. MOFs combine metals with organic molecules to create rigid, porous structures that are ideal for storing gases and liquids. The system comprises a kilogram of compressed MOF crystals that sits below a solar absorber and above a condenser plate. Ambient air diffuses through the porous MOF, where water molecules preferentially attach to the interior surfaces. Sunlight heats up the MOF and drives the bound water toward the condenser, where the vapour condenses and drips into a collector. “This work offers a new way to harvest water from air that does not require high relative humidity conditions and is much more energy efficient than other existing technologies,” says Wang. Yaghi adds: “There is a lot of potential for scaling up the amount of water that is being harvested. It is just a matter of further engineering now.” The system is described in Science.

Material glows in response to stress

Photograph of mechanophore-containing polymer under UV light as it is stretched

A material that repeatedly lights up in response to mechanical forces has been developed by researchers at Okinawa Institute of Science and Technology Graduate University in Japan. To create the material, Georgy Filonenko and Julia Khusnutdinova incorporated stress-sensing molecules called photoluminescent mechanophores into the common polymer, polyurethane. While mechanophores are not new, they are typically one-use only. They emit light when a strong force breaks a specific chemical bond between atoms or pulls apart two molecular patterns. The radical change in structure causes a shift in the wavelength of light emitted (the glow), but it is difficult to return the molecule to its original, “off” state. Therefore, Filonenko and Khusnutdinova developed a molecule that relies upon dynamic rather than structural changes. Their phosphorescent copper complexes move rapidly when the host polymer is in a relaxed state, and the motion suppresses light emission. Yet when a mechanical force is applied, the movement of the polymer chains, and hence the mechanophores, slows, and consequently the complexes are able to luminesce. The light emitted is visible to the naked eye when the material is bathed in UV light and becomes brighter with increasing force. However, unlike previous stress-reacting materials, Filonenko and Khusnutdinova’s can revert to its original, non-luminescent state as no chemical bonds have been broken. The new mechanophores are described in Advanced Materials and could be used to assess stress and dynamics in soft materials.

Physicist bags prestigious economics award

The physicist-turned-economist Dave Donaldson has won the John Bates Clark Medal of the American Economic Association (AEA). The medal is given to the “economist under the age of 40 who is judged to have made the most significant contribution to economic thought and knowledge”. Described by the AEA as “the most exciting economist in the area of empirical trade,” Donaldson has studied topics as diverse as the economic impact of railways in 19th century India and the consequences of climate change on agricultural markets. Donaldson, 38, is an associate professor of economics at Stanford University in California and is a dual citizen of Canada and the UK. He studied physics at the University of Oxford before doing an MSc and PhD in economics at the London School of Economics.

 

  • You can find all our daily Flash Physics posts in the website’s news section, as well as on Twitter and Facebook using #FlashPhysics. Tune in to physicsworld.com later today to read today’s extensive news story on rainfall during tropical storms.