Vibrations drive X-ray flares in Jupiter’s aurora

The mechanisms behind the energetic X-ray flares in Jupiter’s version of the Northern Lights are remarkably similar to those that produce Earth’s aurora, an international team of astronomers has discovered. Using simultaneous observations by two different satellites, researchers co-led by William Dunn at University College London and Zhonghua Yao at the Chinese Academy of Sciences determined that both processes are driven by vibrations in planetary magnetic fields – a phenomenon that could be universal even among planets that have very different magnetospheres.

As volcanic gases erupt from Jupiter’s innermost moon, Io, the heavy sulphur and oxygen ions they contain form a donut-shaped ring of plasma around the planet. From there, these particles gradually move along Jupiter’s magnetic field lines to fill its magnetosphere. Eventually, some of the ions strike Jupiter’s polar atmosphere. The large amount of energy they deposit produces spectacular, highly energetic X-ray bursts in Jupiter’s aurora every 27 minutes.

Since X-rays are typically only generated in far more extreme environments such as black holes or neutron stars, these bursts (and their clockwork-like regularity) have intrigued astronomers ever since they were discovered 40 years ago. Despite widespread interest, however, the exact mechanisms that drive such regular X-ray pulses have remained a mystery.

Simultaneous observations

In their study, Dunn, Yao and colleagues examined the pulses using simultaneous observations from two different spacecraft: NASA’s Juno satellite, which orbits Jupiter and takes in situ measurements of its magnetosphere; and ESA’s XMM-Newton observatory, which monitors the planet remotely from its orbit around the Earth. By analysing 26 hours of observations from each instrument, the astronomers determined that Jupiter’s X-rays are driven by periodic vibrations in the planet’s strong, rapidly rotating magnetic field.

Although the source of these vibrations is still unknown, the team’s observations revealed that they transfer energy to the heavy ions emitted by Io – essentially allowing these charged particles to “surf” along Jupiter’s magnetic field lines. This generates regular waves of energetic plasma, which in turn produce energetic X-ray flares as they impact the planet’s atmosphere.

Remarkably, although the ion population involved is unique to Jupiter, Dunn and Yao’s team found that the underlying process is strikingly similar to one at work in Earth’s magnetic field, where far more subtle field-line vibrations generate less energetic plasma waves. Despite differences of several orders of magnitude in the time, length and energy scales of the two planetary systems, their flare mechanisms ultimately share a common source.

By extension, the astronomers suggest that this mechanism could be universal across many different planetary environments. They now hope to capture similar mechanisms playing out in the magnetospheres of Saturn, Uranus, and Neptune, and perhaps even giant exoplanets.

The research is described in Science Advances.

‘Second sound’ appears in germanium

Researchers in Spain and Italy have observed “second sound” in a room-temperature semiconductor for the first time. This phenomenon, which occurs when distinct waves of temperature pass through a material, had previously only been observed in exotic superfluids at ultracold temperatures (and, more recently, in graphite). Its surprise appearance in a material widely used in electronic chips could make it possible to improve the performance of electronic devices by managing waste heat better.

Second sound is not sound as we generally think of it. It gets its name because in mathematical terms, the thermal waves moving through a material resemble the pressure waves that create sound in air. In physics terms, these waves are fluctuations in the density of quasiparticle thermal excitations called rotons and phonons within the material.

Thanks to this quantum mechanical heat-transfer effect, materials that exhibit second sound have a very high thermal conductivity. Until now, however, the presence of these thermal waves was largely confined to exotic superfluids in which momentum is conserved during collisions between phonons. In most ordinary materials, a process known as Umklapp phonon-phonon scattering causes the phonons to exchange momentum with the material’s crystal lattice, meaning that phonon momentum is not conserved.

Thermal waves at room temperature in a semiconductor

Researchers from the Institute of Materials Science of Barcelona (ICMAB, CSIC) and collaborators at the Universitat Autònoma de Barcelona (UAB) and the University of Cagliari have now, unexpectedly, observed thermal waves at room temperature in solid germanium – a semiconductor that is widely employed in electronics.

In their experiments, which they report in Science Advances, the researchers studied how a germanium sample behaves when its surface is heated by a laser whose intensity oscillates at megahertz frequencies. Contrary to predictions, the heat did not dissipate purely by diffusion but partly propagated into the material as thermal waves.
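In phenomenological terms – a standard textbook comparison rather than the authors’ own model – ordinary diffusive transport obeys Fourier’s parabolic heat equation, while second sound is usually captured by adding a finite phonon relaxation time τ, which turns it into the hyperbolic Maxwell–Cattaneo equation:

\frac{\partial T}{\partial t} = \alpha \nabla^{2} T
\quad\longrightarrow\quad
\tau\,\frac{\partial^{2} T}{\partial t^{2}} + \frac{\partial T}{\partial t} = \alpha \nabla^{2} T

where α is the thermal diffusivity. The second form admits damped temperature waves travelling at speed \sqrt{\alpha/\tau}, rather than heat that simply spreads out.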

This type of thermal transport, being wave-like, has many of the advantages offered by waves, including interference and diffraction, says ICMAB team member Sebastian Reparaz. “The technique we employed could allow us to observe such wave-like heat transport in other materials, by modulating the temperature field of the laser at sufficiently high frequencies,” he explains. This, in turn, could lead to new ways of controlling heat transport in electronic devices made from germanium and any other materials that exhibit second sound. Ultimately, Reparaz says, the discovery “could also allow us to design a new generation of thermal devices in the same way that those based on light were developed”.

Reparaz adds that the second sound thermal regime might also lead to a rethink on how scientists and engineers deal with waste heat in electronic devices. Many such devices, including solar cells, light-emitting diodes and phone batteries, generate significant amounts of heat. This can lead to localized overheating, which decreases the devices’ efficiency and lifespan.

Unifying current theoretical models

From a theory perspective, the new findings might make it possible to unify models for second sound. Until now, theorists treated materials exhibiting this effect as being somehow very different from the semiconductor materials used in everyday electronic chips, says F Xavier Alvarez of the UAB. “Now all these materials can be described using the same equations,” he explains. “This observation establishes a new theoretical framework that may allow in the not-too-distant future a significant improvement in the performance of our electronic devices.”

The researchers say they will now try to observe high-frequency thermal waves in other materials at room temperature. “We also want to study how we can exploit thermal wave interference and diffraction to control heat propagation,” Reparaz tells Physics World.

Cosmic challenge: protecting supercomputers from an extraterrestrial threat

In 2013 a gamer by the name “DOTA_Teabag” was playing Nintendo’s Super Mario 64 and suddenly encountered an “impossible” glitch – Mario was teleported into the air, saving crucial time and providing an advantage in the game. The incident – which was recorded on the livestreaming platform Twitch – caught the attention of another prominent gamer “pannenkoek12”, who was determined to explain what had happened, even offering a $1000 reward to anyone who could replicate the glitch. Users tried in vain to recreate the scenario, but no-one was able to emulate that particular cosmic leap. Eight years later, “pannenkoek12” concluded that the boost likely occurred due to a flip of one specific bit in the byte that defines the player’s height at a precise moment in the game – and the source of that flipping was most likely an ionizing particle from outer space.

The impact of cosmic radiation is not always as trivial as determining who wins a Super Mario game, or as positive in its outcome. On 7 October 2008 a Qantas flight en route from Singapore to Australia, cruising at an altitude of 11,300 m, suddenly pitched down, with 12 passengers seriously injured as a result. Investigators determined that the problem was due to a “single-event upset” (SEU) causing incorrect data to reach the electronic flight instrument system. The culprit, again, was most likely cosmic radiation. An SEU bit flip was also held responsible for errors in an electronic voting machine in Belgium in 2003 that added 4096 extra votes to one candidate.

Cosmic rays can also alter data in supercomputers, which often causes them to crash. It’s a growing concern, especially as this year could see the first “exascale” computer – able to perform more than 10¹⁸ operations per second. How such machines will hold up to the increased threat of data corruption from cosmic rays is far from clear. As transistors get smaller, the energy needed to flip a bit decreases; and as the overall surface area of the computer increases, the chance of data corruption also goes up.

Fortunately, those who work in the small but crucial field of computer resilience take these threats seriously. “We are like the canary in the coal mine, we’re out in front, studying what is happening,” says Nathan DeBardeleben, senior research scientist at Los Alamos National Laboratory in the US. At the lab’s Neutron Science Centre, he carries out “cosmic stress-tests” on electronic components, exposing them to a beam of neutrons to simulate the effect of cosmic rays.

While not all computer errors are caused by cosmic rays (temperature, age and manufacturing errors can all cause problems too), the role they play has been apparent since the first supercomputers in the 1970s. The Cray-1, designed by Seymour Roger Cray, was tested at Los Alamos (perhaps a mistake, given that the lab’s altitude of 2300 m above sea level makes electronics there even more vulnerable to cosmic rays).

Cray was initially reluctant to include error-detecting mechanisms, but eventually did so, adding what became known as parity memory – where an additional “parity” bit is added to a given set of bits. This records whether the sum of all the bits is odd or even. Any single bit corruption will therefore show up as a mismatch. Cray-1 recorded some 152 parity errors in its first six months (IEEE Trans. Nucl. Sci. 10.1109/TNS.2010.2083687). As supercomputers developed, problems caused by cosmic rays did not disappear. Indeed, in 2002 when Los Alamos installed ASCI Q, then the second fastest supercomputer in the world, initially it couldn’t run for more than an hour without crashing due to errors. The problem only eased when staff added metal side panels to the servers, allowing it to run for six hours.
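As a rough illustration – a toy sketch rather than Cray’s actual circuitry – the parity scheme can be written in a few lines of Python: the stored parity bit records whether the number of 1s in a word is odd or even, so any single flipped bit shows up as a mismatch on the next read (although two flips in the same word would cancel out and go unnoticed):

def parity(bits):
    """Return 1 if the word contains an odd number of 1s, else 0."""
    return sum(bits) % 2

# Store a word together with its parity bit
word = [1, 0, 1, 1, 0, 0, 1, 0]
stored_parity = parity(word)

# A cosmic-ray strike flips a single bit in memory
word[3] ^= 1

# On the next read, the recomputed parity no longer matches
if parity(word) != stored_parity:
    print("single-bit error detected")  # detected, though not correctable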

Cosmic chaos

Cosmic rays originate from the Sun or cataclysmic events such as supernovae in our galaxy or beyond. They are largely made up of high-energy protons and helium nuclei, which move through space at nearly the speed of light. When they strike the Earth’s atmosphere they create a secondary shower of particles, including neutrons, muons, pions and alpha particles. “The ones that survive down to ground level are the neutrons, and largely they are fast neutrons,” explains instrument scientist Christopher Frost, who runs the ChipIR beamline at the Rutherford Appleton Laboratory in the UK. It was set up in 2009 to specifically study the effects of irradiating microelectronics with atmospheric-like neutrons.

Millions of these neutrons strike us each second, but only occasionally do they flip a computer memory bit. When a neutron interacts with the semiconductor material, it deposits charge, which can change the binary state of the bit. “It doesn’t cause any physical damage, your hardware is not broken; it’s transient in nature, just like a blip,” explains DeBardeleben. When this happens, the consequences can range from passing entirely unnoticed to being catastrophic – which way it goes is purely a matter of chance.

Computer scientist Leonardo Bautista-Gomez, from the Barcelona Supercomputing Center in Spain, compares these errors to the mutations radiation causes to human DNA. “Depending on where the mutation happens, these can create cancer or not, and it’s very similar in computer code.” Back at the Rutherford lab, Frost – working with computer scientist Paolo Rech from the Institute of Informatics of the Federal University of Rio Grande do Sul, Brazil – has also been studying an additional source of complications, in the form of lower-energy neutrons. Known as thermal neutrons, these have nine orders of magnitude less energy than those coming directly from cosmic rays. Thermal neutrons can be particularly problematic when they collide with boron-10, which is found in many semiconductor chips. The boron-10 nucleus captures a neutron, decaying to lithium and emitting an alpha particle.
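Written out, this is the standard neutron-capture reaction – a textbook relation rather than a result of the ChipIR study itself:

n + ¹⁰B → ⁷Li + ⁴He (α) + energy

with the alpha particle and the recoiling lithium nucleus depositing their few MeV of kinetic energy – and hence charge – directly inside the chip.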

Frost and Rech tested six commercially available devices, run under normal operating conditions and found they were all impacted by thermal neutrons (J. Supercomput. 77 1612). “In principle, you can use extremely pure boron-11” to be rid of the problem, says Rech, but he adds that this increases the cost of production. Today, even supercomputers use commercial off-the-shelf components, which are likely to suffer from thermal neutron damage. Although cosmic rays are everywhere, thermal neutron formation is sensitive to the environment of the device. “Things containing hydrogen [like water], or things made from concrete, slow down fast neutrons to thermal ones,” explains Frost. The researchers even found the weather affected thermal neutron production, with levels doubling on rainy days.

Preventative measures

While the probability of errors is still relatively low, certain critical systems employ redundancy measures – essentially doubling or tripling each bit, so errors can be immediately detected. “You see this particularly in spacecraft and satellites, which are not allowed to fail,” says DeBardeleben. But these failsafes would be prohibitively expensive to replicate for supercomputers, which often run programmes lasting for months. The option of stopping the neutrons reaching these machines altogether is also impractical – it takes three metres of concrete to block cosmic rays – though DeBardeleben adds that “we have looked at putting data centres deep underground”.

Today’s supercomputers do run more sophisticated versions of parity memory, known as error-correcting code (ECC). “About 12% of the size of the data [being written] is used for error-correcting codes,” adds Bautista-Gomez. Another important innovation for supercomputers has been “checkpointing” – the process of regularly saving data mid-calculation, so that if errors cause a crash, the calculation can be picked up from the last checkpoint. The question is how often to do this. Checkpointing too frequently costs a lot in time and energy, but checkpointing too rarely risks losing months of work on larger applications. “There is a sweet spot where you find the optimal frequency,” says Bautista-Gomez.
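That sweet spot can be estimated with the widely used Young/Daly rule of thumb – a first-order approximation rather than the specific scheme used at any one computing centre – in which the optimal interval between checkpoints grows as the square root of the checkpoint cost times the machine’s mean time between failures:

from math import sqrt

def optimal_checkpoint_interval(checkpoint_cost_s, mtbf_s):
    """Young's first-order approximation: t_opt = sqrt(2 * C * MTBF)."""
    return sqrt(2 * checkpoint_cost_s * mtbf_s)

# Hypothetical numbers: a 10-minute checkpoint on a machine that fails about once a day
cost = 10 * 60    # seconds to write one checkpoint
mtbf = 24 * 3600  # mean time between failures, in seconds

print(f"checkpoint roughly every {optimal_checkpoint_interval(cost, mtbf) / 3600:.1f} hours")  # ~2.8 hours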

The fear of the system crashing and a loss of data is only half the problem. What has started to concern Bautista-Gomez and others is the risk of undetected or silent errors – ones that do not cause a crash, and so are not caught. The ECC can generally detect single or double bit flips, says Bautista-Gomez, but “beyond that, if you have a cosmic ray that changes three bits in the memory cell, then the codes that we use today will most likely be unable to detect it”.

Until recently, there was little direct evidence of such silent data-corruption in supercomputers, except what Bautista-Gomez describes as “weird things that we don’t know how to explain”. In 2016, together with computer scientist Simon McIntosh-Smith from the University of Bristol, UK, he decided to hunt for these errors using specially designed memory-scanning software to analyse a cluster of 1000 computing nodes without any ECC. Over a year they detected 55,000 memory errors. “We observed many single-bit errors, which was expected. We also observed multiple double-bit errors, as well as several multi-bit errors that, even if we had ECC, we wouldn’t have seen,” recalls Bautista-Gomez (SC ’16: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis 10.1109/SC.2016.54).

Accelerated testing

The increasing use of commercial graphics processing units (GPUs) in high-performance computing is another problem that worries Rech. These specialized electronic circuits are designed to rapidly process and create images. As recently as 10 years ago they were only used for gaming, and so weren’t considered for testing, says Rech. But now these same low-power, high-efficiency devices are being used in supercomputers and in self-driving cars, so “you’re moving into areas where its failure actually becomes critical”, adds Frost.

Rech, using Frost’s ChipIR beamline, devised a method to test the failure rate of GPUs produced by companies like Nvidia and AMD that are used in driverless cars. They have been doing this sort of testing for the last decade and have devised methods to expose devices to high levels of neutron irradiation while running an application with an expected outcome. In the case of driverless car systems, they would essentially show the device pre-recorded videos to see how well it responded to what they call “pedestrian incidents” – whether or not it could recognize a person.

Of course, in these experiments the neutron exposure is much higher than that produced by cosmic rays. In fact, it’s roughly 1.5 billion times what you would get at ground level, which is about 13 neutrons cm⁻² hr⁻¹. “So that enables us to do this accelerated testing, as if the device is in a real environment for hundreds of thousands of years,” explains Frost. Their experiments aim to produce around 100 errors per hour and, from the known neutron flux, the researchers can calculate what error rate this would represent in the real world. Their conclusion: an average GPU would experience one error every 3.2 years.

This seems low, but as Frost points out, “If you deploy them in large numbers, for example in supercomputers, there may be several thousand or if you deploy them in a safety-critical system, then they’re effectively not good enough.” At this error rate a supercomputer with 1800 devices would experience an error every 15 hours. When it comes to cars, with roughly 268 million cars in the EU and about 4% – or some 10 million cars – on the road at any given time, there would be 380 errors per hour, which is a concern.
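That scaling is easy to check with a back-of-the-envelope calculation using the figures quoted above (the small differences from the quoted 15 hours and 380 errors per hour come down to how the per-device rate is rounded):

HOURS_PER_YEAR = 8760

# Figures quoted above
per_gpu_interval_years = 3.2   # one error every 3.2 years per GPU
gpus_in_supercomputer = 1800
cars_on_road = 10_000_000      # roughly 4% of the EU's 268 million cars

per_gpu_rate = 1 / (per_gpu_interval_years * HOURS_PER_YEAR)  # errors per device per hour

# A supercomputer packed with 1800 such devices
print(f"one error every {1 / (per_gpu_rate * gpus_in_supercomputer):.0f} hours")  # ~16 hours

# Ten million cars on the road at the same time
print(f"about {per_gpu_rate * cars_on_road:.0f} errors per hour across the fleet")  # ~360 per hour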

Large scale

The continued increase in the scale of supercomputers is likely to exacerbate the problem in the next decade. “It’s all an issue of scale,” says DeBardeleben, adding that while the first supercomputer Cray-1 “was as big as a couple of rooms…our server computers today are the size of a football field”. Rech, Bautista-Gomez and many others are working on additional error-checking methods that can be deployed as supercomputers grow. For self-driving cars, Rech has started to analyse where the critical faults arise within GPU chips that could cause accidents, with a view to error correcting only these elements.

Another method used to check the accuracy of supercomputer simulations is to use physics itself. “In most scientific applications you have some constants, for example, the total energy [of a system] should be constant,” explains Bautista-Gomez. “So every now and then, we check the application to see whether the system is losing energy or gaining energy. And if that happens, then there is a clear indication that something is going wrong.”
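The idea can be sketched in a few lines (an illustrative toy example, not any particular production code): track a conserved quantity such as the total energy and flag any step where it drifts beyond a set tolerance.

def total_energy(masses, velocities, potential):
    """Kinetic plus potential energy of the current state."""
    kinetic = sum(0.5 * m * v**2 for m, v in zip(masses, velocities))
    return kinetic + potential

def check_conservation(energy_now, energy_ref, tolerance=1e-6):
    """Flag a possible silent corruption if the conserved quantity drifts too far."""
    drift = abs(energy_now - energy_ref) / abs(energy_ref)
    if drift > tolerance:
        raise RuntimeError(f"energy drifted by {drift:.2e} - possible bit flip")

# Inside the main simulation loop, the check might run every few hundred steps:
#     check_conservation(total_energy(m, v, U), initial_energy)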

Both Rech and Bautista-Gomez are making use of artificial intelligence (AI), creating systems that can learn to detect errors. Rech has been working with hardware companies to redesign the software used in object detection in autonomous vehicles, so that it can compare consecutive images and do its own “sense check”. So far, this method has picked up 90% of errors (IEEE 25th International Symposium on On-Line Testing and Robust System Design 10.1109/IOLTS.2019.8854431). Bautista-Gomez is also developing machine-learning strategies to constantly analyse data outputs in real-time. “For example, if you’re doing a climate simulation, this machine-learning [system] could be analysing the pressure and temperature of the simulation all the time. By looking at this data it will learn the normal variations, and when you have a corruption of data that causes a big change, it can signal something is wrong.” Such systems are not yet commonly used, but Bautista-Gomez expects they will be needed in the future.
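In spirit, such a monitor works like an online anomaly detector. The sketch below is a deliberately simplified stand-in – a rolling statistical check rather than the trained machine-learning models the researchers are developing – for how a stream of simulation outputs might be flagged when a value jumps far outside its learned normal range:

from collections import deque
from statistics import mean, stdev

class DriftMonitor:
    """Flag values that sit far outside the recent 'normal' range."""
    def __init__(self, window=500, threshold=6.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def check(self, value):
        if len(self.history) > 30:  # wait until a baseline has been learned
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                print(f"suspicious jump: {value:.3g} (baseline {mu:.3g})")
        self.history.append(value)

monitor = DriftMonitor()
# For each timestep of, say, a climate simulation:
#     monitor.check(simulated_pressure)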

Quantum conundrum

Looking even further into the future, where computing is likely to be quantum, cosmic rays may pose an even bigger challenge. The basic unit of quantum information – the qubit – can exist not only in the states 0 and 1 but also in superpositions of the two, enabling parallel computation and the ability to handle calculations too complex for even today’s supercomputers. It’s still early days in their development, but IBM has announced that it plans to launch the 127-qubit IBM Quantum Eagle processor sometime this year.

For quantum computers to function, the qubits must be coherent – that is, they must hold their quantum state and act in concert with the other qubits. Today the longest period of coherence for a quantum computer is around 200 microseconds. But, says neutrino physicist Joe Formaggio at the Massachusetts Institute of Technology (MIT), “No matter where you are in the world, or how you construct your qubit [and] how careful you are in your set up, everybody seems to be petering out in terms of how long they can last.” William Oliver, part of the Engineering Quantum Systems Group at MIT, believes that radiation from cosmic rays is one of the problems, and with Formaggio’s help he decided to test their impact.

Illustration of qubit radiation

Formaggio and Oliver designed an experiment using radioactive copper foil, producing the isotope copper-64, which decays with a half-life of just over 12 hours. They placed it in the low-temperature ³He/⁴He dilution refrigerator with Oliver’s superconducting qubits. “At first he would turn on his apparatus and nothing worked,” describes Formaggio, “but then after a few days, they started to be able to lock in [to quantum coherence] because the radioactivity was going down. We did this for several weeks and we could watch the qubit slowly get back to baseline.” The researchers also demonstrated the effect by creating a massive two-tonne wall of lead bricks, which they raised and lowered to shield the qubits every 10 minutes, and saw the cycling of the qubits’ stability.

From these experiments they have predicted that without interventions, cosmic and other ambient radiation will limit qubit coherence to a maximum of 4 milliseconds (Nature 584 551). As current coherence times are still lower than this limit, the issue is not yet a major problem. But Formaggio says that as coherence times increase, radiation effects will become more significant. “We are maybe two years away from hitting this obstacle.”

Of course, as with supercomputers, the quantum-computing community is working to find a way around this problem. Google has suggested adding aluminium film islands to its 53-qubit Sycamore quantum processor. The qubits are made from granular aluminium, a superconducting material containing a mixture of nanoscale aluminium grains and amorphous aluminium oxide. They sit on a silicon substrate and when this is hit by radiation, phonons are exchanged between the qubit and the substrate, leading to decoherence. The hope is that the aluminium islands would preferentially trap any phonons produced (Appl. Phys. Lett. 115 212601).

Another solution Google has proposed is a specific quantum error-correction code called “surface code”. Google has developed a chessboard arrangement of qubits, with “white squares” representing data qubits that perform operations and “black squares” detecting errors in neighbouring qubits. The arrangement exploits entanglement between neighbouring qubits to detect and correct errors before they derail a calculation.

In the next few years, the challenge is to further improve the resilience of our current supercomputer technologies. It’s possible that errors caused by cosmic rays could become an impediment to faster supercomputers, even if the size of components continues to drop. “If technologies don’t improve, there certainly are limits,” says DeBardeleben. But he thinks it’s likely new error-correcting methods will provide solutions: “I wouldn’t bet against the community finding ways out of this.” Frost agrees: “We’re not pessimistic at all; we can find solutions to these problems.”

Atom cavity sees the same photon twice

For the first time, physicists have succeeded in measuring the same photon at two different locations within an optical fibre – all without destroying the photon. The new non-destructive technique, which was developed by researchers at the Max Planck Institute of Quantum Optics (MPQ) in Germany, is based on the principles of cavity quantum electrodynamics and could aid the development of quantum communications networks that rely on information-carrying photons.

Although researchers are generally able to detect itinerant photons, the detectors they use invariably destroy the photons being measured. Alternative, non-destructive quantum measurements have important applications in many areas of physics, including quantum sensing, quantum computing and quantum communications.

Quantum non-demolition detector

A team led by Stephan Welte and Emanuele Distante has now developed a “quantum non-demolition” (QND) detector to address this problem. The QND detector consists of a single rubidium atom, prepared in a known quantum state and coupled to a reflective optical cavity. The researchers placed two of these detectors 60 metres apart along an optical fibre. They then used short lengths of additional fibre to connect the detectors to the main fibre, placing “circulators” at the fibre intersections to direct the flow of the photons they sent into the fibre. As a photon enters a circulator, it gets directed towards a detector before being reflected from it and guided back along the main fibre in its original direction. Each reflection leaves a trace: it flips the phase of the atom’s superposition state, so reading out the atom afterwards reveals that a photon has passed – without the photon ever being absorbed.

Improved time resolution

The MPQ team now plan to improve the time resolution of their detection process. This will allow them to more precisely determine the direction in which the measured photon is travelling – information that is only accessible with QND detectors.

They also hope to improve their system so that fewer photons are lost between the two detectors. “Such a non-destructive system could be employed to herald photon loss in a glass fibre,” Welte says. “Once the photon loss has been detected, a given protocol could be stopped and restarted immediately by sending in another new photon,” he tells Physics World. “This way, the rate of the protocol could be increased.”

The research is detailed in Physical Review Letters.

3D-printed steel bridge, summer science experiments, water-repellent life jackets

The world’s first 3D-printed steel footbridge has been unveiled in the centre of Amsterdam. Developed by Imperial College London and the Alan Turing Institute, the 12 m-long bridge took over four years to design and contains a network of sensors to monitor its performance. Data from the sensors will be used to create a computerised “digital twin” of the bridge, allowing researchers to analyse its behaviour as it handles pedestrian traffic.

“A 3D-printed metal structure large and strong enough to handle pedestrian traffic has never been constructed before,” says Leroy Gardner from Imperial. “We have tested and simulated the structure and its components throughout the printing process and upon its completion, and it’s fantastic to see it finally open to the public.”

Summer fun

With the start of the school holidays upon us, parents with young children may well be dreading the prospect of looking after unruly and bored kids for weeks on end. Thankfully, help is at hand courtesy of Smart Energy GB, who have kindly outlined seven science experiments you can do at home during the holidays.

Requiring only common household items, the experiments include making a balloon hovercraft and a pinhole camera, as well as demonstrating the magnetic effect of your breakfast cereal. For the full list of activities, see here.

Keeping with the summer theme, swimsuits and life jackets can be essential items, but if not dried thoroughly after use, they can develop a strong, musty smell. Now researchers have created a buoyant cotton fabric that is also water repellent, which could be used in future to avoid the threat of mould build-up.

Cotton is hydrophilic, letting liquids and oils soak in, and previous attempts to make garments repel liquids have often involved spraying the material with “superamphiphobic coatings”. However, this technique is impractical for large-scale manufacturing given that it requires multiple, time-consuming steps.

The researchers, based in China, have created a “one-step” coating process that results in a fabric that is liquid-proof and stays afloat even when loaded with 35 times its own weight. Watch a video of the new material’s hydrophobic properties here.

Investment in defence R&D sparks recruitment drive

In 2020, as part of a comprehensive review of defence spending, the UK government underlined the strategic importance of science and technology for national defence and security. The review earmarked an additional £6bn for research and development at the Ministry of Defence (MoD) over the next four years, with an extra £1.1bn allocated to so-called pull-through activities – ensuring that innovations designed initially for the military lead to wider applications in the commercial sector.

That extra funding has led to a major recruitment drive at the Defence Science and Technology Laboratory (Dstl), the scientific division of the MoD. Hundreds of positions for scientists, engineers and project managers need to be filled over the next few months, with more to follow in 2022. In the vanguard are 70 vacancies that are now being advertised for physicists at all stages of their careers, while one of Dstl’s dedicated graduate programmes is also doubling its intake of physicists, up from around 15–20 per year to 40 new graduate positions in both 2021 and 2022.

“The skill set offered by physicists is needed for a lot of our current vacancies, which need expertise in areas such as lasers, electro-optics and electromagnetic phenomena,” says Karen Smith, a talent acquisition and planning adviser at Dstl. “Focusing on the physicist role in our first tranche of recruitment will build a solid foundation for building up our capabilities in other areas.” All in all, the recruitment drive represents an uplift of around 10% in Dstl’s ranks of scientists and engineers – which already stands at more than 3000 members of staff.

Photo of Dstl scientist Mark Pickering

Mark Pickering, a physicist who has worked at Dstl since 2012, stresses the importance of his scientific training for his role as the UK’s technical lead for close-combat guided weapons systems. “I do a lot of work on missile subsystems such as sensors, aerodynamics and rocket physics, and as a result I directly use my physics knowledge and skills to solve a huge variety of physics-based problems,” he says. “It means that I can predict how something might behave from basic principles, and what to investigate as part of the experiment design.”

Omar Sarsah, who joined Dstl a couple of years ago as a new physics graduate, has also enjoyed translating the abstract ideas he learnt during his degree into real-world situations. “It turns out that a helicopter has an almost perfect black-body curve, but you have to compensate for various factors, such as the atmosphere and the camera you use to detect the emission,” he explains. “These are all things that you study separately at university, but here you get to put them together. You gain a deeper understanding of something, and it’s more satisfying to build up that understanding from scratch.”

One of the key attractions for both Pickering and Sarsah is the ability to contribute to a variety of different projects. Sarsah, who specializes in electro-optics, might work on several projects at a time – which might involve anything from electronics to highly sensitive quantum-optical systems. For the most part, however, he uses his specific knowledge of cameras and optical modelling to evaluate and improve the performance of different systems or platforms. In the case of the helicopter, this type of analysis revealed that the engine generates bright emission in the infrared, which prompted the use of engine covers to improve stealth and, ultimately, the safety of military personnel.

“You are presented with a problem that no one has yet solved, you think about the best way to tackle it, and then you put it to the test,” explains Pickering. “It’s really rewarding to be involved throughout the entire process of turning what might be quite a nebulous idea into something that makes a real difference.”

As an example, Pickering describes how he was recently trialling some new equipment that needed to be tested onboard a flying aircraft. “I was in a control room directing what trials the aircraft should be doing, all following my experimental plan,” he says. “It was the magical fulfilment of seeing something you’ve worked on for several years coming to fruition. It was an amazing feeling.”

An essential part of this project work is the need to join forces with scientists and engineers with different backgrounds and skill sets. “We need to develop things that people can use, and so everyone needs to work together to make sure each element interacts with everything else in the system,” comments Sarsah. “I might be able to design parameters for a specific camera, but when it needs to be integrated into a real-world system we also need to make sure the electronics line up, that the mechanical stability is fine – and even that it goes in the right way round.”

Close collaboration is also needed with research teams in industry and academia, something that will become even more important with the extra funding for R&D. Pickering already acts as a technical partner on several of these collaborative projects, providing technical expertise and ensuring that any external research meets the objectives set by the MoD. He also serves as the technical lead on several procurement programmes, providing scientific advice to government and front-line commands, and fulfils a similar role with international organizations.

Newer members of staff also have plenty of opportunities to gain different experiences and to get involved with external collaborations. Pickering started his career on secondment to a naval base, and one of his graduates is just about to start a six-month placement with one of Dstl’s industry partners. Meanwhile, Sarsah regularly works off-site, gathering test data from aircraft or from ballistic systems being fired on an outdoor range, and currently sits on one of NATO’s technical panels. He is now considering various options for progressing his career at Dstl, which could include a secondment in the UK or overseas, a move to the MoD headquarters in Whitehall, or studying part-time for a PhD.

From prior experience, Pickering says that it’s incredibly easy to switch domains or to move around the organization. While his sights are now firmly focused on a technical career path, scientists and engineers employed by Dstl can also choose to specialize in people management, project management or operational analysis – which seeks to optimize the performance of a whole system rather than each specific element. “Technical and analytical are pretty interchangeable, and lots of people fly back and forth between the two,” comments Pickering.

Anyone thinking of joining Dstl should not be concerned that they do not have enough knowledge of the technologies they will be working with. “No-one is expected to have direct expertise of these systems because the work you will be involved with is so unique,” says Sarsah. “I spent two weeks at the Defence Academy in Shrivenham when I started, and other courses and seminars are arranged to help you understand the domain you are working in. You also learn lots of things very quickly by doing project work.”

There is also plenty of support for both new and experienced employees. When he started Sarsah was the only member of his team with expertise in optical modelling, and he was overwhelmed with the amount of work that was coming his way. “My team leader helped to organize which projects took priority,” he says. “She stood by me and said that I couldn’t do everything.”

Photo of Omar Sarsah

Sarsah was also surprised and delighted that his team nominated him for a NATO early-career award. “When you start out you can be a bit nervous because you don’t know everything and there’s so much knowledge and expertise around,” he says. “Although I didn’t win the award, it showed me that I was making a valuable contribution to the team.”

Pickering has also been impressed with the help and support that Dstl offers to its employees. He is extremely dyslexic, particularly when it comes to writing, and Dstl has always ensured that additional systems and resources are available to help review and refine his written work. “I have always been impressed at how far they are willing to go to look after people,” he comments. “Dstl really cares about its employees.”

That level of support makes Dstl an open and inclusive place to work. Internal support networks, run by employees for other employees with similar interests and needs, help staff to discuss problems, share advice, and raise any issues with the executive team. Fully flexible working is also available as standard, including alternative working patterns, job share, and variable working hours. “We want all our employees to maximize their potential,” comments Smith.

For their part, it’s clear Pickering and Sarsah are primarily motivated by the diverse opportunities they have to learn new science and new skills, and to work on projects that have a real impact on people’s lives. “It’s been the best job I could ever ask for,” says Pickering. “I’ve really enjoyed everything I’ve done while I have been at Dstl.”

• Some of the names in this article have been changed for privacy reasons.

Ancient star likely created from a colossal hypernova explosion

An ancient star lying on the fringes of the Milky Way likely contains the remnants of a colossal hypernova explosion, which took place early on in the galaxy’s star-forming period. That’s the conclusion of an international team of astronomers, led by David Yong at the Australian National University, who discovered that the star’s abundance of heavy elements could have only been synthesized in the highly energetic “r-process”. Their findings provide the first evidence for magneto-rotational hypernovae, and uncover their role in the changing chemical makeup of the early universe.

Astronomers predict that around half of all heavy atomic nuclei in the universe must have originated in a succession of rapid neutron captures, named the r-process. The sites where these captures take place are still poorly understood, but according to current theories, mergers between neutron stars are thought to play an important role. In the latest models of chemical evolution in galaxies, however, these mergers alone can’t reproduce the abundances of heavy elements that we observe today.

To search for alternative origins, Yong’s team looked to the halo of the Milky Way – which contains an abundance of ancient stars born early on in the galaxy’s star-forming history. The astronomers made their observations using the European Southern Observatory’s Very Large Telescope (VLT) in Chile, and the Australian National University’s SkyMapper telescope in New South Wales, which has previously been used to identify thousands of these chemically primitive stars in the halo.

In a star named SMSS J200322.54−114203.3, Yong and colleagues noted a high abundance of r-process elements, including zinc, uranium, europium and possibly gold, despite it being extremely metal-poor compared with stars of similar ages. Through their analysis, the researchers concluded that these abundances could have only been produced in a colossal explosion named a magneto-rotational hypernova. These as-yet unobserved events are triggered as the core of a rapidly spinning, highly magnetized star, 25 times more massive than our Sun, collapses into a black hole – releasing 10 times more energy than a conventional supernova.

“It’s an explosive death for the star,” says Yong in a press statement. “We calculate that 13 billion years ago J200322.54-114203.3 formed out of a chemical soup that contained the remains of this type of hypernova. No one’s ever found this phenomenon before.”

Such a dramatic explosion would provide ideal conditions for the r-process, producing an abundance of heavy elements. Alongside this process, the hypernovae would also eject high levels of lighter elements, formed during the progenitor star’s evolution; as well as elements close to the “iron peak” in universal abundance, formed during explosive nuclear burning. As a result, although the remnants should be metal-poor overall, they should contain all stable elements of the periodic table at once.

Based on this evidence, Yong’s team concluded that SMSS J200322.54−114203.3 is made up of the hypernova-ejected remnants of a short-lived, even more ancient star, which underwent a magneto-rotational hypernova around one billion years after the Big Bang. Their discovery provides key evidence for a previously overlooked site for the r-process, and could lead to better explanations of how heavy elements were first synthesized early in the galaxy’s star-forming history.

The research is published in Nature.

A decathlon of questions on the physics of sport

Photos of high jump, hurdles and discus

Day 1

100 m

In 1977 fully automatic timing, as opposed to timing done by people with stopwatches, became mandatory for world records in the 100 m sprint. Immediately after this change, average recorded times of sprinters increased slightly, before decreasing again. How did automatic timing cause this single stepwise increase?

Long jump

Many of the best long-jumpers in the world appear to continue running in the air as they cycle their legs for a few steps after take-off, in a technique called the hitch kick. What is the purpose of this motion?

Shot put

The women’s shot put has a mass of about 4 kg, but the volume varies slightly. If it can be made of solid iron or solid brass, what is the range of possible diameters it could have?

High jump

In the high jump, athletes traditionally keep their body upright as they kick their legs over the bar. But at the 1968 Olympic Games in Mexico City, American high-jumper Dick Fosbury won gold using a new technique he had developed. Now called the Fosbury flop, it involves slinking backwards over the bar and landing on your back. What physical principle does the Fosbury flop use to help an athlete clear a higher bar?

400 m

In the 400 m race, the starting line is staggered across lanes to ensure that all athletes have the same distance to run while staying in their lanes. Why does World Athletics say that the number of lanes for a standard track should be no more than nine?

Day 2

110 m hurdles

The men’s 110 m hurdle event has 10 hurdles spaced 9.14 m apart. The take-off foot should touch the ground 2.1–2.2 m in front of each hurdle, and the athlete normally lands about 1 m beyond the hurdle. Most athletes take three steps between hurdles (not including the hurdle jump). About how long should each stride be?

Discus

The theoretical optimal angle for throwing an object as far as possible is 45 degrees to the ground. In practice, however, the optimal angle for most athletes is slightly smaller. Why is this?

Pole vault

For an athlete with a centre of mass 1 m above the ground, who can run at 10 m/s, what is the theoretical limit to the pole vault height they can clear? And why is the pole vault world record slightly above this?

Javelin

In 1986 the men’s javelin was redesigned so that the centre of mass moved 4 cm closer to the tip. The women’s javelin was similarly redesigned in 1999. Which two problems prompted this redesign, and how did it solve them?

1500 m

The 2016 Olympic Games in Rio de Janeiro saw the slowest 1500 m Olympic event since 1932. Why is this event getting slower?

Answers

100 m: People with stopwatches tend to underestimate sprinters’ times. This is because they start measuring slightly after the runners have taken off, due to non-zero reaction times. Their reaction times to the runners crossing the finish line are not as long, because they are watching the runners and can anticipate when they will get to the end.

Long jump: Long-jumpers generate some angular momentum as they take off, which would cause their body to rotate forwards so that they are leading with their face, making it difficult to land on their feet. By cycling their legs, they take care of the angular momentum while keeping their body upright so that they can land feet-first. This is similar to how you might instinctively swing your arms when you lose balance. Not all long-jumpers use this technique, though. Some use the “hang-style” technique, in which they kick their legs forward after take-off, so that they are horizontal to the ground. They also fold their body forward over their legs. This increases their moment of inertia, so that the angular momentum generated causes their body to rotate less.

Shot put: Taking the density of iron as 7.86 g/cm³, the diameter is 9.91 cm; taking the density of brass as 8.73 g/cm³, it is 9.56 cm. The official range in the regulations is 95–110 mm. The shot put is not always made of just one material: it sometimes consists of a smaller lead weight inside a metal casing of lower density, allowing a wider range of possible diameters.
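Those numbers are easy to reproduce (a quick sketch assuming a uniform solid sphere of exactly 4 kg):

from math import pi

def diameter_cm(mass_g, density_g_per_cm3):
    """Diameter of a uniform solid sphere of the given mass and density."""
    volume = mass_g / density_g_per_cm3  # cm^3
    return (6 * volume / pi) ** (1 / 3)  # from V = (pi/6) * d^3

for name, rho in [("iron", 7.86), ("brass", 8.73)]:
    print(f"solid {name}: {diameter_cm(4000, rho):.2f} cm")
# solid iron: 9.91 cm; solid brass: 9.56 cm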

High jump: When an athlete slinks over the bar using the Fosbury flop technique, rather than going over it with their body upright, there is no point in time at which their whole body is above the bar. In fact, there is no point in time at which an athlete’s centre of mass goes above the bar – their centre of mass actually goes below the bar. This means that they have to generate less energy to clear the bar than they would if they went over it with their body upright, which would require them to raise their centre of mass higher. Therefore, at the maximum energy they can generate, they can clear a higher bar with the Fosbury flop technique.

400 m: The curved part of the track is sharper on the inside lane than on the outside lane, due to the increasing radius of curvature from the inside to the outside. It is harder to run around a sharper bend, so having a gentler curve may confer an advantage on the runner in the outside lane. However, World Athletics (formerly the IAAF) considers this difference to be mostly negligible, only becoming significant if there are more than nine lanes in the track.

110 m hurdles: The stride length should be about 2 m per step. The spacing of hurdles means that “rhythmic running”, with regular stride lengths, is more important in hurdle events than in flat races.

Discus: The basic model of an object being thrown from ground level with a given force gives 45° as the optimal angle to maximise the distance it will travel before hitting the ground. However, the real-world scenario is more complex. Athletes do not throw from the ground level, but a little above it depending on their height. The object can therefore be considered to be starting at a different point in the parabolic model. You can extrapolate the object’s motion backwards behind the athlete to imagine it starting at ground level. Imagining it taking off from the ground at 45°, you find that by the time it reaches the athlete’s hand, its angle to the horizontal would be less than this value. The athlete should therefore throw it at this lower angle to maximise its distance. There are also biomechanical factors, as athletes may be able to generate a greater force by throwing at an angle closer to the horizontal, due to how our muscles are arranged anatomically. Finally, air resistance also plays a role in real-world scenarios. Since it slows down the object’s horizontal motion and reduces the distance it travels, throwing it with a greater horizontal component may help to counteract this.
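The release-height effect on its own is easy to check numerically. The sketch below uses an idealised no-drag model with an assumed release speed of 25 m/s and release height of 1.6 m (illustrative values, not real discus data):

from math import cos, radians, sin, sqrt

def throw_range(v, angle_deg, h, g=9.81):
    """Horizontal distance for launch speed v (m/s) from height h (m), no drag."""
    theta = radians(angle_deg)
    vx, vy = v * cos(theta), v * sin(theta)
    time_of_flight = (vy + sqrt(vy**2 + 2 * g * h)) / g
    return vx * time_of_flight

v, h = 25.0, 1.6  # assumed release speed and height
best_angle = max(range(30, 51), key=lambda a: throw_range(v, a, h))
print(f"optimal release angle is about {best_angle} degrees")  # a little below 45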

Pole vault: Equating kinetic energy with gravitational potential energy, the change in height is about 5.1 m. Adding the initial centre of mass being at about 1 m, this gives a theoretical maximum of about 6.1 m. The world record is currently 6.18 m. The extra height could be because, similarly to high-jumpers, pole-vaulters often use a technique where their centre of mass goes under the bar. Also, as the pole straightens, the athlete pushes off the end of it, putting more energy into the system, which gets added to the kinetic energy they generated in the run-up.
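The energy balance behind those numbers, written out explicitly (an idealised estimate that assumes all of the run-up kinetic energy is converted into height gain):

\Delta h = \frac{v^{2}}{2g} = \frac{(10\ \mathrm{m/s})^{2}}{2 \times 9.81\ \mathrm{m/s^{2}}} \approx 5.1\ \mathrm{m}, \qquad h_{\mathrm{max}} \approx 1\ \mathrm{m} + 5.1\ \mathrm{m} \approx 6.1\ \mathrm{m}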

Javelin: The two problems were that 1) athletes were throwing the javelins increasingly far, which started to become dangerous as the javelins could go beyond the field into the audience, and 2) the javelin would often land flat on the ground rather than pointing down into the ground, leading to ambiguity around what was a valid, qualifying throw. When the centre of mass was moved towards the point of the javelin, the javelin’s trajectory changed, so that it started to point downwards sooner in its motion. This reduced the distances it could be thrown to a safer level and reduced the number of instances where it landed flat instead of point-down.

1500 m: The 1500 m is a very tactical race. Rather than trying to run it in the shortest time they can, runners often try to conserve their energy by running slowly throughout the race, staying behind the person in the lead. They then try to win by sprinting at the end. However, this means that no one wants to strike out and be the person in the lead, because that would mean wasting energy that the other athletes conserve, and would probably lead to being overtaken later. So none of the athletes end up running very fast because none of them wants to be in the lead. This is an example of a “Nash equilibrium” – a concept in game theory in which no participant has anything to gain by changing their strategy. In some competitions, pacesetters run parts of the race with the athletes, which encourages them to run faster rather than running strategically. However, no pacesetters run in the Olympic Games, and athletes would usually rather win an Olympic gold medal than beat their own personal best.

  • Many thanks to Steve Haake, professor of sports engineering at Sheffield Hallam University, for checking these answers for accuracy

‘Iron man of 2D materials’ defies century-old description of fracture mechanics

Fracture tests carried out on hexagonal boron nitride (h-BN) show that this 2D material has an intrinsic toughening mechanism, contradicting its reputation for brittleness. This unexpected behaviour, which was observed by Jun Lou, Huajian Gao and colleagues at Rice University in the US, defies a description of fracture mechanics first put forward by the British engineer A A Griffith in 1921 and still employed today to measure material toughness.

As with most materials, cracks in 2D materials typically form at sites of concentrated stress. The unique structure of 2D materials, however, means that cracks can propagate straight through, opening up the bonds between individual atoms like a zipper.

To investigate this cracking mechanism in h-BN, the Rice researchers subjected samples of single-crystal monolayers of the material to tensile loads in a micromechanical device. They found that in contrast to graphene – a one-atom-thick sheet of carbon that structurally resembles monolayer h-BN – the growth of cracks in h-BN was surprisingly stable, with cracks forming branches as the tensile load increased. This branching means that additional energy is required to drive the crack further, effectively making the material tougher, Lou explains. Overall, h-BN is 10 times more fracture-resistant than graphene, defying Griffith’s formula and leading the Rice University press office to describe it as “the iron man of 2D materials”.

Good news for flexible electronics

The atoms in both graphene and h-BN are arranged almost identically, in a flat hexagonal lattice structure. However, the researchers say that the slight asymmetries that arise in a material containing two elements (boron and nitrogen) instead of just one (carbon) may contribute to the crack branching behaviour in h-BN.

Regardless of the mechanism, h-BN’s newfound toughness is a boon for electronic applications. The material’s resistance to heat, stability to chemicals, and dielectric properties all make it ideal as both a supporting base and an insulating layer for placing between electronic components. The discovery that h-BN is also surprisingly tough means that it could be used to add tear resistance to flexible electronics, which Lou observes is one of the niche application areas for 2D-based materials. For flexible devices, he explains, the material needs to be mechanically robust before you can bend it around something. “That h-BN is so fracture-resistant is great news for the 2D electronics community,” he adds.

The team’s findings may also point to a new way of fabricating tough mechanical metamaterials through engineered structural asymmetry. “Under extreme loading, fracture may be inevitable, but its catastrophic effects can be mitigated through structural design,” Gao says.

The researchers report their work in Nature.

Inspired by killer plants

The way that some plants such as the Venus flytrap and Cape sundew move so quickly and precisely has always fascinated scientists. Plants move with biological necessity, whether it is to feast on insects or to spread their spores far and wide. This video looks at the mechanics of moving plants and how it can inspire innovations in soft robotics.

For a more detailed look at the physics of plant motion, take a look at this feature by science writer Daniel Rayneau-Kirkhope, originally published in the July 2021 issue of Physics World.

Copyright © 2026 by IOP Publishing Ltd and individual contributors