
How to keep a skipping stone on a steady path across water

The physics that allows spinning stones to skip across the surface of water has been thoroughly analysed by Jie Tang at Southwest Jiaotong University in China and colleagues. The team used theoretical models and simple experiments to identify three key factors underlying the process and their findings could lead to important insights into the dynamics of aircraft and spacecraft that land on water.

Skipping a flat stone across water is one of the simple pleasures of life. With practice, imparting the right combination of throwing angle, speed, and spin will allow the stone to bounce several times before sinking. The physics involved in this process is also highly relevant in ensuring safe water landings for aircraft and re-entering spacecraft, which collide with water surfaces at high speeds.

In their study, Tang’s team constructed a mathematical model of stone skipping that incorporated two key effects. The first was the Magnus effect, whereby the trajectory of a rotating object in a fluid is deflected – an effect that footballers use to send a ball on a curved trajectory. The second was the gyro effect: the tendency of a spinning object to maintain a steady axis of rotation and travel in a straight line.

Navigation module

To verify their model’s predictions, the team did a simple experiment involving a spinning aluminium disk, fitted with a navigation module to measure its spin and trajectory during flight. Their setup enabled a tight control over the disk’s speed, rate of spin, and angle of approach to the water’s surface. Through a series of experiments, the team then measured how variations in each of these values affected the disk’s skipping dynamics.

From the results of their analysis, the researchers identified three key factors underlying these dynamics. The first factor is related to the upward acceleration of the disk – determined by its velocity and angle of approach to the water. If this value is over 4 g (four times the gravitational acceleration), the disk will skip. Yet at 3.8 g, the disk will instead “surf”, skimming along the water’s surface at an oscillating angle, but not bouncing.

The second factor is related to how the gyro effect can guarantee the stability of the disk’s angle of approach to the water, creating more favourable conditions for continuous bouncing. The third factor is that the direction of the disk’s trajectory reflects a combination of gyro and Magnus effects. For spin rates lower than 18 rotations per second, the Magnus effect will dominate, and the disk will veer off to the left or right, depending on its direction of spin. Yet above this rate, the gyro effect will dominate, and the disk will continue in a straight path.
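The two thresholds quoted above can be summarized in a toy classifier. This is only a sketch based on the numbers reported in this article (4 g and 18 rotations per second); the model in the paper is far richer, and the exact boundary behaviour will depend on the other throw parameters.

```python
def skipping_behaviour(vertical_accel_g: float, spin_rate_hz: float) -> tuple:
    """Classify a disk's fate from the two reported thresholds:
    ~4 g of upward acceleration separates skipping from surfing, and
    ~18 rotations/s separates Magnus-dominated veering from
    gyro-dominated straight-line travel."""
    bounce = "skip" if vertical_accel_g > 4.0 else "surf"
    path = "straight" if spin_rate_hz > 18.0 else "curved"
    return bounce, path

print(skipping_behaviour(4.5, 20))  # fast spin, strong bounce: skips in a straight line
print(skipping_behaviour(3.8, 10))  # slow spin, weak bounce: surfs and veers sideways
```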

Tang’s team hopes that, if applied to the more complex shapes of aircraft and spacecraft, their results could improve our understanding of how flying vehicles behave when impacting water upon landing. This could soon enable engineers to design better vehicles and flight paths, ensuring both minimal damage to valuable equipment and better safety for passengers.

The research is described in Physics of Fluids.

Beware of disease-carrying aerosols in toilets, ‘technology-packed tank top’ launched into space

Do not linger after using a public toilet is the advice from researchers at Florida Atlantic University (FAU), who have done a comprehensive study of how aerosols with the potential to carry disease are created and dispersed by flushing toilets and urinals. Siddhartha Verma and colleagues studied three scenarios – toilet flushing, covered toilet flushing and urinal flushing – in a medium-sized public restroom on the FAU campus.

“After about three hours of tests involving more than 100 flushes, we found a substantial increase in the measured aerosol levels in the ambient environment with the total number of droplets generated in each flushing test ranging up to the tens of thousands,” reports Verma.

“Both the toilet and urinal generated large quantities of droplets smaller than 3 microns in size, posing a significant transmission risk if they contain infectious microorganisms. Due to their small size, these droplets can remain suspended [in air] for a long time,” he adds.

Lid makes little difference

The team detected droplets at a height of 1.5 m above a toilet or urinal (face height for many people) and found that the droplets persisted at this height for more than 20 s after the flush. Unfortunately, closing the toilet lid before flushing did not result in a significant reduction in the number of particles detected – which suggests that aerosol particles can easily escape through gaps around the seat and lid.
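The long suspension time of small droplets is consistent with a back-of-the-envelope Stokes-settling estimate. This is an illustrative calculation, not one from the paper, and it uses textbook values for the density of water and the viscosity of air; it also ignores evaporation and air currents, which matter in a real restroom.

```python
# Stokes terminal velocity for a small droplet in still air:
# v = rho * g * d^2 / (18 * mu)
rho = 1000.0   # droplet density, kg/m^3 (water)
g = 9.81       # gravitational acceleration, m/s^2
mu = 1.8e-5    # dynamic viscosity of air, Pa*s
d = 3e-6       # droplet diameter, m (3 microns)

v = rho * g * d**2 / (18 * mu)  # settling speed, m/s
t_fall = 1.5 / v                # time to fall from face height (1.5 m), s

print(f"settling speed: {v*1e3:.2f} mm/s")
print(f"time to fall 1.5 m: {t_fall/60:.0f} min")
```

A 3-micron droplet settles at a fraction of a millimetre per second, so falling 1.5 m takes on the order of an hour and a half, which is why such droplets linger long after a flush.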

The observed build-up of aerosols over time suggests that the ventilation in that particular facility was not adequate. “The study suggests that incorporation of adequate ventilation in the design and operation of public spaces would help prevent aerosol accumulation in high occupancy areas such as public restrooms,” says team member Manhar Dhanak.

The team describes its work in Physics of Fluids and has made a video showing how aerosols build up after a flush.

Photo of an astronaut wearing a Bio-Monitor shirt

What to wear while floating around the International Space Station (ISS)? How about a “technology-packed tank top” – which the Canadian astronaut David Saint-Jacques models in this photo?

The Bio-Monitor shirt was created for the Canadian Space Agency by Montreal-based Carré Technologies, which produces wearable sensors. Saint-Jacques and fellow astronauts helped researchers at the Schlegel-University of Waterloo Research Institute for Aging evaluate the performance of the shirt in 2019 and now the results have been released.

“The Bio-Monitor shirt allows simultaneous and continuous direct measurements of heart rate, breathing rate, oxygen saturation in the blood, physical activity and skin temperature, and provides a continuous estimate of arterial systolic blood pressure,” explains Waterloo’s Carmelo Mastrandrea.

According to Mastrandrea, the shirt removes the need for astronauts to stop what they are doing to make the measurements at regular intervals. It also means that the astronauts can be monitored while doing a range of activities. The crew also wore the shirts on Earth before heading into space and the researchers were able to confirm that astronauts experience a large reduction in physical activity when they are in space. And if astronauts continue to wear the shirt after they have returned to Earth, it could provide early warning of any problems they have re-adapting to gravity back on Earth.

You don’t need to be an astronaut to have a Bio-Monitor shirt – they are available to the public and can be used to monitor sporting performance and health.

Mastrandrea describes the research in a poster at the recent Experimental Biology 2021 conference.

Surface electromagnetic fields mapped in 3D at the nanoscale

The first three-dimensional map of the electromagnetic field that “clings” to the surface of a cube less than 200 nm across casts a fresh light on how materials dissipate heat at the nanoscale. The images, obtained by researchers in France and Austria, reveal the presence of infrared photon-like excitations known as surface phonon polaritons near the cube’s surface – a phenomenon that might be exploited to convey waste heat away from nanoelectronic components and so cool them down.

Phonons are particle-like collective vibrational excitations (or atomic vibrations) that occur in ionic solids. They give rise to oscillating electric fields, which couple with photons at the surface of the solid to create surface phonon polaritons (SPhPs). These hybrids of vibrational and photonic excitations are found only on an object’s surface and are thus typically of little importance in bulk materials. However, their influence dramatically increases as objects shrink and their surface-to-volume ratio increases.

SPhPs also concentrate electromagnetic energy in the mid-infrared (3 to 8 µm) up to the far-infrared (15 to 1000 µm) wavelength range. This property might make it possible to use them in applications such as enhanced (Raman) spectroscopy of molecules.

Visualizing the near field

All such applications depend on the nanostructured electromagnetic field that exists at the surfaces of metamaterials or nanoparticles. Visualizing this so-called near field has, however, proved difficult. Pioneering techniques like electron energy loss spectroscopy (EELS), which works by measuring the energy electrons lose when they encounter these surface fields, can only produce 2D outlines. Other techniques use sophisticated reconstruction algorithms in combination with EELS to generate 3D images of the field, but these were previously restricted to visible wavelengths.

In the new work, Mathieu Kociak and colleagues from the CNRS/Université Paris-Saclay, together with Gerald Kothleitner of Graz University of Technology, combined computer models with a technique called tomographic EELS spectral-imaging to image the 3D field surrounding a nanocrystal of magnesium oxide (MgO). To do this, they used a new-generation scanning transmission electron microscope (STEM) developed for electron and photon spectromicroscopy that can probe the optical properties of matter with ultrahigh energy and spatial resolution. The instrument (a modified NION Hermes 200 called a “Chromatem”) filters a 60-keV electron beam with a monochromator to produce a beam with an energy resolution of between 7 and 10 meV.
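To see why meV-scale energy resolution matters at these wavelengths, it helps to convert the mid-infrared band into photon energies via E = hc/λ. This is a quick illustrative conversion, not a calculation from the paper:

```python
HC_EV_UM = 1.23984  # h*c expressed in eV*micron

def photon_energy_mev(wavelength_um: float) -> float:
    """Photon energy in meV for a wavelength given in microns."""
    return 1000.0 * HC_EV_UM / wavelength_um

# Mid-infrared SPhP band: roughly 3 to 8 microns
print(photon_energy_mev(3.0))  # ~413 meV
print(photon_energy_mev(8.0))  # ~155 meV
```

Excitations in this band sit at a few hundred meV, so a monochromated beam with 7 to 10 meV resolution can cleanly separate them from the elastic signal and from each other.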

Tilting technique

By scanning this electron beam across their sample, Kociak, Kothleitner and colleagues collected high-angle annular dark field images that revealed the shape of the MgO nanocube. They then tilted the sample at various angles, imaged the cube in different orientations and recorded an EELS spectrum at each scan position. Finally, they used image reconstruction techniques to generate 3D images of the field surrounding the crystal.

The new approach, which they describe in Science, will eventually make it possible to target specific points on the crystal and measure localized heat transfer between them. Since many nano-objects absorb infrared light during heat transfer, the technique should also provide 3D images of such transfers. “This is one avenue of exploration for optimizing heat dissipation in the increasingly small components employed in nanoelectronics,” the researchers say.

The team now plans to apply its technique to study more complicated nanostructures. However, Kociak tells Physics World that “some theoretical aspects still need to be better understood” before this is possible.

Shoot right through: FLASH protons could eliminate Bragg peak constraints

Shoot-through FLASH plan

FLASH radiotherapy, in which therapeutic radiation is delivered at a very high dose rate, shows promise for sparing normal tissues while maintaining the tumour kill seen in conventional radiotherapy. Most studies to date have been performed using electron beams, but FLASH irradiation could also be delivered using protons.

Proton therapy delivers a low integral dose to normal tissue and spares tissue behind the tumour, courtesy of the Bragg peak – the depth at which the proton beam deposits the majority of its dose. But this precision targeting brings its own problems: margins are needed to account for inexact Bragg peak positioning; robust planning is required to mitigate organ motion and anatomical changes; and uncertainties surround the distribution of linear energy transfer (LET) and relative biological effectiveness (RBE) near the Bragg peak.

So could combining protons with FLASH delivery eliminate these shortcomings? According to Frank Verhaegen from Maastro Clinic, it may actually enable a completely new way of delivering protons, one that doesn’t rely on the Bragg peak at all. “A while ago, I had the idea that if we could do FLASH protons, we could get rid of the ‘tyranny’ of the Bragg peak,” says Verhaegen.

The idea is that instead of positioning Bragg peaks inside the target, the proton beams have sufficient energy to travel straight through and exit the patient. This approach effectively positions the Bragg peaks distally in air, a tactic that’s already common in preclinical research where precision targeting is difficult inside small animals.

Instead of relying on the Bragg peak for tissue sparing, this shoot-through technique exploits the protective effect of FLASH irradiation on healthy tissues. “With FLASH, normal tissue is spared not because of the low dose, but exactly because it is normal tissue, which responds to ultrahigh dose rates in an entirely different fashion,” Verhaegen explains. “Then you can just shoot protons through the patient, and even use them behind the patient to perform proton portal imaging.”

Verhaegen and colleagues have now examined the use of shoot-through FLASH proton therapy in an illustrative brain tumour case, reporting their findings in Physics in Medicine & Biology.

Proof-of-principle

The researchers created a proof-of-concept proton plan for a patient with a neurological tumour close to several organs-at-risk (OAR) with strict dose constraints. The plan used four proton beams aimed at four fictitious targets placed outside the patient, with the beams optimized to deliver a roughly uniform dose inside the target. The team then compared this fictitious shoot-through FLASH plan with a conventional clinical four-beam proton plan.

Plan comparisons

For the shoot-through plan, the researchers assumed a hypothetical FLASH protective factor for normal tissues of 2. They note that the sparing effect observed to date in electron-based FLASH studies lies between 1.4 and 1.8. Higher protective factors have been reported for FLASH proton beams, though it is unknown if or why FLASH protons would exhibit greater protection.

Preliminary dose calculations showed that the shoot-through plans delivered an acceptable dose to the target. In most cases, the OAR dose constraints were met or almost met. And for some OARs, the FLASH effect offers potential to lower the effective dose below the planning constraints. The shoot-through beams increased the integral dose to the brain, but incorporating the FLASH protective factor of 2 reduced the effective dose to near that of the non-FLASH clinical plan.
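The role of the protective factor can be sketched as a simple scaling of normal-tissue dose. The factor of 2 is the study's hypothetical assumption; the dose numbers below are invented for illustration and do not come from the paper.

```python
def effective_dose(physical_dose_gy: float, flash_factor: float) -> float:
    """Effective normal-tissue dose under a hypothetical FLASH protective
    factor: healthy tissue responds as if it received the physical dose
    divided by the factor. The tumour dose is unchanged."""
    return physical_dose_gy / flash_factor

# Invented example: an organ-at-risk with a 30 Gy constraint that
# physically receives 45 Gy would fall within constraint if a FLASH
# protective factor of 2 applies.
oar_physical = 45.0
print(effective_dose(oar_physical, 2.0))  # 22.5 Gy, under the 30 Gy limit
```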

The team points out that the treatment planning system used in this illustration was not optimized for shoot-through FLASH, and that an algorithm developed specifically for this novel modality should meet OAR constraints more easily.

Looking ahead

Clinical implementation of shoot-through FLASH proton therapy could provide a range of benefits, including the ability to perform dosimetry via proton portal imaging. Up to now, such verification of proton therapy has been elusive, although there is a great need for it. The shoot-through approach would also mostly remove the need for treatment margins and eliminate the problem of LET and RBE uncertainties.

Importantly, accelerators that can deliver shoot-through treatments for tumours in the head, neck and thorax, using 230–250 MeV proton beams, already exist. Abdominal treatments would need higher proton energies (300–350 MeV), though this would not require new accelerator technology, only an upscaling. Accelerators could also be considerably simpler than current implementations, for example, by removing the need for beam energy modulators.

The team notes that laser-based proton accelerators could offer a big advantage for FLASH because of their ultrahigh dose rate, but may still require significant development to reach high enough energies for shoot-through treatments.

Ultimately, the clinical impact of shoot-through FLASH proton therapy will depend heavily upon the FLASH protective factor, which remains an unknown quantity. “The protective factor will possibly depend on a lot of parameters, some of which may be unknown at this moment since we don’t have an explanation yet for the FLASH effect,” Verhaegen noted.

Following this proof-of-principle study, the team is now studying further neurological cases, including different sized tumours located close to sensitive structures, Verhaegen tells Physics World.

Quantum birds inspire new metrology for biosciences, particle physicist searches for the very small

Perhaps one of the most exciting discoveries in biophysics in the past decade or so is that some creatures use quantum effects to sense the Earth’s magnetic field. In this episode of the Physics World Weekly podcast, Alex Jones of the UK’s National Physical Laboratory explains how this quantum navigation system is inspiring the development of new metrology technologies for the biosciences.

Also this week, the nuclear physicist Prajwal Mohan Murthy of the University of Chicago and Argonne National Laboratory is in conversation with the Physics World student contributor Shi En Kim. They chat about Murthy’s quest to measure two very small quantities – the mass of the neutrino and the electric dipole moment of nuclei and particles – and why making very precise measurements of these values could provide a glimpse of physics beyond the Standard Model.

New candidates for Kitaev spin liquids found

Two-dimensional materials known as rare-earth chalcohalides may be ideal candidates for creating so-called “Kitaev spin liquids” – exotic substances that could be used to build a fault-tolerant topological quantum computer. Experiments by researchers at the Chinese Academy of Sciences in Beijing and Lanzhou University found that the materials, which have the chemical formula REChX (where RE is a rare-earth metal; Ch is oxygen, sulphur, selenium or tellurium; and X is a halogen such as fluorine or iodine), could also be a platform for studying the fundamental physics of quantum spin liquids more generally.

Quantum spin liquids (QSLs) are solid magnetic materials that cannot arrange their magnetic moments (or spins) into a regular and stable pattern. This “frustrated” behaviour is very different from that of ordinary ferromagnets or antiferromagnets, which have spins that point in the same or alternating directions, respectively. Instead, the spins in QSLs constantly change direction as if they were in a fluid, even at ultracold temperatures.

Challenging to make

Kitaev spin liquids (KSLs) are a subtype of QSL that is known to be especially challenging to make in the laboratory. This is because, according to theory, they require a perfect (exactly solvable) two-dimensional honeycomb-shaped lattice in which to form. The spins in KSLs are also coupled via unusual exchange interactions. Such interactions, which are responsible for the magnetic properties of everyday materials such as iron, occur between pairs of identical particles (such as electrons) and prevent the spins on neighbouring particles from pointing in the same direction. KSLs are thus said to suffer from “exchange-coupling” frustration, rather than simple geometric frustration as in ordinary QSLs.

Another intriguing feature of KSLs is that they contain elementary excitations known as non-Abelian anyons, which are a prerequisite for fault-tolerant topological quantum computation. This type of computation makes use of quantum bits (qubits) defined in terms of fundamental shapes that cannot be easily deformed. These topologically protected qubits are not readily perturbed by their environment, so the information they contain remains intact (or “coherent”) for longer.

The rare-earth chalcohalides family

According to Qingming Zhang and colleagues, rare-earth chalcohalides could come into their own for making KSLs. In their new study, which is published in Chinese Physics Letters, they studied YbOCl crystals as well as polycrystals of SmSI, ErOF, HoOF and DyOF. They characterized the structures using X-ray diffraction and also measured their magnetic susceptibility, magnetization and heat capacity down to 1.8 K.

The researchers found that all the REChX compounds they studied are truly two-dimensional, with their 2D structure being held in shape by weak van der Waals forces between the material layers. They also have an undistorted honeycomb spin lattice. This lattice arrangement, combined with the fact that the 4f electrons in the rare-earth magnetic ions are coupled via strong spin-orbit interactions (the interactions between the intrinsic spin on an electron and the magnetic field induced by the electron’s movement), provides the anisotropic exchange coupling necessary for KSLs to form.

According to Zhang, the discovery of this new family of KSL candidates will give a substantial push to research on QSL physics. He and his colleagues say they are now trying to grow larger REChX single crystals in which they can study these materials’ spin ground state – that is, the one that remains coherent over macroscopic length scales. To this end, they also envisage extending their measurements down to millikelvin temperatures.

Muon mania: are we finally on the brink of new physics?

The global particle physics community has been energised by two recent results that offer tantalising glimpses of new physics beyond the Standard Model of particle physics.

Researchers at CERN’s LHCb experiment have observed something unusual in the way that B mesons decay into leptons – the class of fundamental particle incorporating electrons, muons, taus and their corresponding neutrinos. Meanwhile, researchers at Fermilab may have glimpsed an unknown force at work in the way muons “wobble” in the presence of a magnetic field inside their Muon g-2 experiment.

In this episode of the Physics World Stories podcast, Andrew Glester dissects these new results with the aid of particle physicists who discuss what this means for the field. Joining Glester in this episode are:

Patrick Koppenburg, leader of LHCb’s user analysis software
Jessica Esquivel, a physicist and data analyst at Fermilab
Mark Lancaster and Rebecca Chislett, UK physicists working on the Muon g-2 experiment.

Real-world tests of hybrid cars show higher-than-expected emissions

Hybrid cars consume more fossil fuels and emit more carbon dioxide in the real world than they do in lab tests – partly because drivers are not using the cars’ electric side as much as they could, researchers in Germany have concluded. To address this, the researchers suggest that authorities should implement policies that incentivize and facilitate more frequent charging.

Plug-in hybrid electric vehicles (PHEVs) combine an internal combustion engine with an electric motor. Their emissions are therefore lower than those of conventional vehicles, especially if the electricity used to charge the motor comes from a non-fossil fuel source. On the road, however, their actual fuel consumption depends on a combination of driving behaviour and how much of their mileage occurs in electric mode – the so-called utility factor.

To date, the question of exactly how environmentally friendly hybrid cars are has been clouded by a paucity of real-world data, with the accuracy of lab tests often the subject of controversy. In March 2021, for example, the UK consumer watchdog Which? concluded (based on test results from 22 popular hybrid cars) that PHEVs could use up to four times more fuel than industry standard consumption tests indicated. The worst offender, they reported, was the BMW X5, which was 72 per cent less efficient than claimed, equating to an added cost to the driver of around £669 per year.

Real-world data

In their new study, Patrick Plötz from the Fraunhofer Institute for Systems and Innovation Research and colleagues analysed data on the real-world fuel consumption, annual distance travelled and utility factor of more than 100 000 PHEVs driven in Canada, China, Germany, the Netherlands, Norway and the US. These data were collected from past studies, online databases like Spiritmonitor.de and Voltstats.net that let drivers submit their own data, and firms that own hybrid cars – with the dataset including both private cars and company cars assigned to individual drivers.

The team found that, overall, the real-world carbon dioxide emissions of hybrid cars averaged between 50 and 300 g CO2/km — two to four times higher than emissions seen in test cycles. Similarly, actual fuel consumption was two to four times higher than test cycle consumption, “mainly due to low charging frequency”, Plötz explains. Utility factors used in official estimates (the Worldwide harmonized Light-duty vehicles Test Procedure and its regional predecessor, the New European Driving Cycle), are, he says, “based on driving data of conventional vehicles with additional assumptions and no empirical PHEV data…The assumptions on charging and on daily kilometres travelled with PHEV were too optimistic.”
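How strongly emissions depend on the utility factor can be sketched as a simple mixture of electric-mode and engine-mode driving. The numbers below are illustrative, not the study's data; real figures also depend on the carbon intensity of the charging electricity, which is set to zero here for simplicity.

```python
def phev_emissions(utility_factor: float, engine_gco2_km: float,
                   electric_gco2_km: float = 0.0) -> float:
    """Distance-weighted CO2 per km for a PHEV: a fraction
    `utility_factor` of mileage is driven electrically, the rest on
    the combustion engine."""
    return (utility_factor * electric_gco2_km
            + (1.0 - utility_factor) * engine_gco2_km)

engine = 180.0  # illustrative engine-mode emissions, g CO2/km
optimistic = phev_emissions(0.70, engine)  # test-cycle-style assumption
realistic = phev_emissions(0.25, engine)   # infrequent real-world charging
print(optimistic, realistic, realistic / optimistic)
```

Dropping the assumed electric-driving share from 70% to 25% multiplies per-kilometre emissions by about 2.5 in this toy example, the same order as the two-to-four-fold gap the study reports.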

The problem is compounded for company cars, the team note, because refuelling them is often free for drivers. Similar support is seldom available for charging costs, which instead come out of the driver’s pocket. The researchers also found significant regional variations in real-world utility factors. In Norway, for example, the high price of fuel and low electricity costs encourage more frequent charging than in other countries analysed. The US also had higher utility factors, which Plötz attributes to having “many drivers in our sample that are likely very environmentally oriented” and therefore motivated to charge more frequently.

Charging up

In the study, which is published in Environmental Research Letters, the researchers propose various ways to encourage drivers to charge their hybrid cars more frequently. “Private drivers need easy to install and use home charging infrastructure and purchase incentives should depend on the actual electric driving share,” Plötz suggests. “Company car drivers need clear financial incentives to charge at home, for example via low electricity prices and no free fuel cards.”

Ashley Fly, a vehicle electrification expert at Loughborough University, UK, who was not involved in the study, thinks there is also room for improvement in official tests used to certify emissions. Such tests are “designed around replicating how people drive vehicles on the road and not around how they fuel or charge these vehicles”, he notes. Drivers need to be better informed as to how their emissions vary with how much they charge their vehicle, he says, adding that one solution would be to publish emissions figures during the “charge sustain” mode of the official test. These would show the expected emissions and fuel economy if the vehicle was not plugged in to charge.

The new results should not, however, be viewed as a reason to reject hybrid vehicles, says Sam Akehurst, an automotive researcher at the University of Bath, UK, who was also not involved in the study. “In an ideal use cycle, a PHEV, correctly charged, can deliver zero tailpipe emissions, whilst still giving flexibility to deliver long distance travel without range anxiety or embedded cost/weight/CO2 of larger battery packs,” he says.

Machine learning could help slow epidemic spread

Machine learning could help stop a future pandemic in its tracks by indicating which individuals should be tested for the disease. That is the finding of physicists at the University of Gothenburg, Sweden and CNR-IPCF, Italy, whose neural-network-derived method proved far more effective than standard contact-tracing strategies at containing a simulated outbreak. Though the model has yet to be tested under real-world conditions, lead author Laura Natali says it could be especially useful in the early stages of an epidemic, when tests are scarce and little is known about how a new disease spreads.

In their study, Natali and colleagues began by dividing a population of 100 000 simulated individuals into three groups: those who are susceptible to the disease (S), those who are currently infected (I), and those who have recovered (R). During the simulation, these individuals move randomly around sub-regions of a 320 x 320 lattice of cells. At each time step, individuals within a certain radius of an infected person have a probability β of becoming infected, and a probability γ thereafter of recovering and becoming immune.

To capture the impact of asymptomatic disease carriers – a major feature of the COVID-19 pandemic – the researchers assign each simulated individual a temperature. The temperatures of infected individuals are, on average, higher than those of healthy individuals. However, the “healthy” and “infected” temperature distributions overlap substantially, making it impossible to determine an individual’s status by temperature alone. This means that tests are needed to identify which individuals are infected. The model assumes that these tests are accurate but not widely available, such that the number of individuals who can be tested (and, if infected, isolated) at each time step t is always much less than the total population.
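The dynamics described above follow a standard stochastic spatial SIR update. A minimal sketch is given below with toy parameters and a tiny population; it is not the authors' code, and for brevity it keeps individuals fixed in place rather than moving them randomly between steps as the actual model does.

```python
import random

def sir_step(states, positions, beta, gamma, radius):
    """One synchronous time step of a toy spatial SIR model:
    susceptible ('S') individuals within `radius` of an infected ('I')
    one become infected with probability beta; infected individuals
    recover ('R') with probability gamma."""
    new_states = list(states)
    for i, (s, (x, y)) in enumerate(zip(states, positions)):
        if s == "S":
            near_infected = any(
                states[j] == "I"
                and (x - positions[j][0])**2 + (y - positions[j][1])**2 <= radius**2
                for j in range(len(states)) if j != i
            )
            if near_infected and random.random() < beta:
                new_states[i] = "I"
        elif s == "I" and random.random() < gamma:
            new_states[i] = "R"
    return new_states

random.seed(0)
states = ["I"] + ["S"] * 9  # one initial infection in a population of 10
positions = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in states]
for _ in range(50):
    states = sir_step(states, positions, beta=0.5, gamma=0.1, radius=3.0)
print(states.count("R"), "recovered after 50 steps")
```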

Different strategies, different outcomes

Using this model, the researchers explored four possible scenarios. In the first, the disease spread unchecked through the population, with no containment measures. At t=150, nearly all individuals in this scenario had been infected.

In the second scenario, the researchers focused their limited testing capacity on individuals who had the most contacts (defined as being in the same cell) with others who had previously tested positive, using temperature data to break any ties. From time t=20 onward, all individuals who tested positive under this strategy were “frozen” in place and not allowed to interact with anyone else. This scenario is based on standard contact-tracing methods, and it produced a much lower peak in the infection rate. Still, the disease was not eliminated: at t=150, around 20% of the population remained infected, and thus capable of passing it on to the remaining susceptible individuals.

The third scenario mimics the strict lockdowns that many countries adopted to combat the spread of the SARS-CoV-2 coronavirus, which causes COVID-19. From t=20, all individuals in the “lockdown” scenario were frozen in place. This drastic action – isolating the entire population at once – kept infection rates very low and eliminated the disease entirely at t=120. However, the researchers note that such a comprehensive quarantine would be “unrealistic” in practice.

Enter the machine

In the final scenario, Natali and colleagues explored whether it might be possible to eliminate the disease while isolating only part of the population. To this end, they used a neural network to select which individuals to test. “In general, a neural network receives some inputs, elaborates them through a series of hidden layers of artificial neurons, and returns an output,” they explain. “In our case, the input consists of contact-tracing information for a given individual n for the last 10 time steps.”

Based on the number of known infectious individuals within various distances of individual n, the number of actual contacts between n and these known infectious individuals, and n’s total number of contacts, the neural network outputs a value p: the probability that individual n is infected. If p=0, they are considered healthy. If p>0.995, they are immediately isolated. A value between 0.5 and 0.995 means they are prioritized for testing, beginning with individuals displaying the highest temperature until all available tests are depleted.
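The triage rule applied to the network's output p can be written down directly from the thresholds quoted above. This is a sketch of the decision logic only (the network itself is not reproduced), and the names and temperature values below are invented for illustration.

```python
def triage(people, tests_available):
    """Apply the reported decision rule to (name, p, temperature) tuples:
    p == 0 -> considered healthy; p > 0.995 -> isolate immediately;
    0.5 < p <= 0.995 -> queue for testing, hottest individuals first,
    until the available tests run out."""
    isolate = [name for name, p, temp in people if p > 0.995]
    queue = sorted(
        (person for person in people if 0.5 < person[1] <= 0.995),
        key=lambda person: person[2], reverse=True,
    )
    to_test = [name for name, p, temp in queue[:tests_available]]
    return isolate, to_test

people = [("a", 0.999, 37.0), ("b", 0.8, 38.5), ("c", 0.6, 37.2),
          ("d", 0.0, 36.6), ("e", 0.7, 39.1)]
print(triage(people, 2))  # (['a'], ['e', 'b'])
```

With only two tests available, individual "a" is isolated outright and the two hottest borderline cases, "e" and "b", are tested first.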

To boost the accuracy of the network’s predictions, the researchers “trained” it using data from t=20, when testing begins. Thanks to this training, the network’s predictive power improved over time, with striking results: the infection rate peaked at 5.1% of the population and quickly dropped to zero thereafter, even with no more than 25% of the population isolated. “We show that it is possible to use relatively simple and limited information to make predictions of who would be most beneficial to test,” Natali says. “This allows better use of available testing resources.”

Highly adaptable

The researchers note that their network makes no assumptions about either the disease or the underlying SIR (susceptible, infectious, recovered) model. This, they claim, means that it should adapt its predictions automatically to epidemics with more complex dynamics, such as a disease with an incubation period, delays in the testing process, or different patterns of individual movement.
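For readers unfamiliar with it, the SIR model underlying the simulation tracks the fractions of susceptible, infectious, and recovered individuals. A minimal discrete-time sketch (with illustrative rate parameters, not the paper’s values) looks like this:

```python
def sir_step(s, i, r, beta=0.3, gamma=0.1):
    """One discrete time step of the classic SIR model (population fractions).

    beta is the infection rate and gamma the recovery rate -- both
    illustrative values, not taken from the study.
    """
    new_infections = beta * s * i
    new_recoveries = gamma * i
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

# Iterate from a small initial outbreak until the epidemic burns out
s, i, r = 0.99, 0.01, 0.0
for _ in range(200):
    s, i, r = sir_step(s, i, r)
```

With these parameters the infection peaks and then dies away, leaving most of the population in the recovered compartment; the neural-network strategy described above is layered on top of an agent-based version of this dynamic.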

Natali and colleagues showed that their model was also effective at suppressing an epidemic when individuals can get the disease more than once. “In the case of temporary immunization, the neural-network-informed strategy can prevent a disease outbreak from becoming endemic,” they conclude.

The research is published in Machine Learning: Science and Technology.

Spinning brown dwarfs might have reached their speed limit

A team led from Western University in Canada has found three brown dwarfs that are spinning faster than any seen before – completing a full rotation once per hour. The brown dwarfs were identified by NASA’s Spitzer Space Telescope and later studied with ground-based telescopes, including the Gemini North telescope on the summit of Mauna Kea in Hawaii. The discovery, led by graduate student Megan Tannock, could help pin down the fastest speed at which a brown dwarf can rotate before tearing itself apart.

So why are brown dwarfs so interesting? These substellar objects are more massive than planets but not quite as massive as stars. Like stars, they form from the collapse of a giant molecular cloud. But unlike stars, they do not have enough mass for nuclear fusion of hydrogen into helium to occur. The three newly discovered brown dwarfs have roughly the same diameter as Jupiter, but are between 40 and 70 times more massive.

Bright spots

Like stars and planets, brown dwarfs are spinning from the very start of their lifetimes. As they age, brown dwarfs cool and contract, and their spin rates increase to conserve angular momentum.

The astronomers used the Spitzer Space Telescope’s IRAC instrument to observe each brown dwarf photometrically and measure its rotation period. They did this by tracking the brightness of features such as spots on the surface, which rises and falls as each spot rotates into and out of view. By measuring these repeated patterns of brightness variation, the team determined that the brown dwarfs were spinning with a period of one hour.
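The idea can be illustrated with a toy lightcurve: a synthetic, noiseless sinusoidal signal (a stand-in for the real Spitzer data, which are far noisier and vary at the sub-percent level) whose period is recovered from the strongest peak in a periodogram.

```python
import numpy as np

# Synthetic lightcurve: a single bright spot on an object with a 1-hour
# rotation period modulates the flux sinusoidally (illustrative only).
period_hr = 1.0
t = np.linspace(0, 10, 2000)                      # 10 hours of observation
flux = 1.0 + 0.005 * np.sin(2 * np.pi * t / period_hr)

# Recover the period from the strongest peak in a simple FFT periodogram
freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])    # cycles per hour
power = np.abs(np.fft.rfft(flux - flux.mean()))
recovered = 1.0 / freqs[np.argmax(power)]         # period in hours
```

In practice astronomers use more robust period-finding methods suited to uneven sampling, but the principle – a periodic brightness modulation betraying the spin – is the same.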

The spin rates of approximately 80 brown dwarfs have been measured to date, with periods ranging up to tens of hours. The previous record-holders completed a full rotation every 1.4 hours. With their record-breaking period of just one hour, these three new brown dwarfs are spinning at an astonishing equatorial speed of about 100 km/s.
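That figure follows from simple geometry: material at the equator travels a circumference of roughly πD once per hour. A quick check, assuming Jupiter’s equatorial diameter as the size:

```python
import math

jupiter_diameter_km = 142_984     # Jupiter's equatorial diameter (assumed size)
period_s = 3600                   # one full rotation per hour

# Equatorial speed = circumference / rotation period
equatorial_speed = math.pi * jupiter_diameter_km / period_s   # km/s
# roughly 125 km/s, i.e. of order 100 km/s as quoted
```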

Tannock explains that, in order to confirm the results were “not a half-period caused by a repeated spot pattern”, the astronomers also measured spectral changes caused by the Doppler effect. They compared near-infrared spectral observations from the Gemini North and Magellan Baade ground-based telescopes with the spectra predicted from computer models.

Proposed speed limits

Given the wide range of brown dwarf spin rates, it came as a surprise to the researchers that these three brown dwarfs share such similar rotation periods. Assessing the age of each brown dwarf, using temperatures and surface gravities determined from their spectra, suggested that the three are not the same age. Why, then, do they have similar spin rates?

The answer may lie in the physics of rotation itself. The faster an object spins (and brown dwarfs spin up as they age), the greater the centripetal force needed to keep material at its equator moving in a circle; once that requirement exceeds what the object’s own gravity can supply, the object is ripped apart. Before this occurs, however, the midsection of the object will begin to bulge, a feature called oblateness. The researchers measured this feature to determine how close the brown dwarfs were to the breakup point. Finding that the three brown dwarfs have similar degrees of oblateness, they suggest that all three could be approaching a spin speed limit.

Tannock believes that “there is likely a braking mechanism, and brown dwarfs can’t actually spin so fast that they fly apart”. In other rotating cosmic objects, such as low-mass stars, magnetic fields provide substantial braking. “Brown dwarfs have strong magnetic fields in their interiors, and they are also fully convective on the inside,” says Tannock, noting that brown dwarfs could therefore experience a similar braking effect to low-mass stars.

Some scientists have suggested that a breakup would occur at a period of 20 minutes, with current models indicating that brown dwarfs could potentially spin 50–80% faster than these three. However, periods between 20 minutes and one hour are yet to be observed.
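The roughly 20-minute figure can be sanity-checked with a back-of-the-envelope estimate: breakup occurs roughly when the centripetal acceleration required at the equator equals the surface gravity, giving a critical period P = 2π√(R³/GM). Using a Jupiter radius and the quoted 40–70 Jupiter-mass range (illustrative inputs, not the actual models):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_JUP = 1.898e27     # Jupiter mass, kg
R = 7.149e7          # Jupiter's equatorial radius, m (assumed size)

def breakup_period_minutes(mass_mjup):
    """Critical rotation period at which equatorial centripetal
    acceleration equals surface gravity, for a rigid sphere."""
    M = mass_mjup * M_JUP
    return 2 * math.pi * math.sqrt(R**3 / (G * M)) / 60
```

For 40–70 Jupiter masses this crude estimate gives roughly 20–30 minutes, in line with the proposed breakup period (real models account for the oblate shape and give somewhat different numbers).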

As such, Tannock suggests that “the minimum rotation period is around one hour”, and that further measurements could help determine whether there is a limit to the rotation speed. The future James Webb Space Telescope could possibly measure the low-amplitude variations, while theoretical work could help further understand the physics of brown dwarfs’ internal structures. “We are looking forward to seeing what people come up with,” say the study authors.

Copyright © 2026 by IOP Publishing Ltd and individual contributors