
Rare elements could be forged by neutron stars eaten by black holes

Precious elements may come from spinning neutron stars that have swallowed a tiny black hole and imploded. If true, this would dramatically change our understanding not only of how rare elements like gold are made, but also of the nature of some dark matter.

The elements in question include all atoms heavier than bismuth, as well as some neutron-rich isotopes heavier than iron. They are forged in what is called the r-process (meaning ‘rapid’), which requires copious numbers of neutrons, as well as densities ten billion times greater than those found in the Sun’s core, to enable atomic nuclei to capture those neutrons rapidly. The r-process can therefore take place only in the most extreme environments.
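To get a feel for what “rapid” means here, the sketch below compares the mean time between neutron captures with a typical beta-decay lifetime. The cross-section, neutron speed and half-life are illustrative assumptions, not figures from the study.

```python
# Why the r-process needs extreme neutron densities: capture must outpace
# beta decay. All numbers are illustrative assumptions, not study values.

SIGMA_CAPTURE = 1e-25   # neutron-capture cross-section, cm^2 (~0.1 barn, assumed)
V_NEUTRON     = 3e8     # neutron speed at ~10^9 K, cm/s (assumed)
T_BETA        = 0.1     # beta-decay lifetime of a neutron-rich nucleus, s (assumed)

def capture_time(n_neutron):
    """Mean time between neutron captures for one seed nucleus, in seconds."""
    return 1.0 / (n_neutron * SIGMA_CAPTURE * V_NEUTRON)

for n_n in (1e8, 1e14, 1e20):   # neutron number densities, cm^-3
    tau = capture_time(n_n)
    regime = "rapid capture (r-process)" if tau < T_BETA else "beta decay wins"
    print(f"n_n = {n_n:.0e} cm^-3: one capture every {tau:.1e} s -> {regime}")
```

Only at the highest densities does capture outrun decay, letting nuclei climb far up the neutron-rich side of the chart of nuclides before decaying back towards stability.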

In 1957 Margaret Burbidge, Geoffrey Burbidge, William Fowler and Fred Hoyle (known collectively as B2FH) proposed that core-collapse supernovae were the origin of the r-process elements, but in recent years this has fallen into doubt. Binary neutron-star mergers have emerged as a frontrunner, but there’s a problem: with an estimated merger rate of one per 100 000 years in a galaxy like the Milky Way, computer simulations struggle to produce the observed abundance of r-process elements.

If George Fuller of the University of California, San Diego and his colleagues Alex Kusenko and Volodymyr Takhistov of the University of California, Los Angeles are right, then it’s time for a new explanation. They propose that tiny primordial black holes could become lodged inside a neutron star’s core, where the black hole begins to consume matter and grow. As the neutron star’s interior swirls around the black hole that is eating it, the neutron star begins spinning rapidly and ejects up to a tenth of a solar mass of neutron-rich material into space. This dense material decompresses, allowing beta decay to transform some of the neutrons into protons, which is followed by the rapid formation of massive atomic nuclei.

Hypothetical black holes

The hypothesis, presented in Physical Review Letters, hinges on the requirement that “a few per cent or more of the dark matter is comprised of black holes,” Fuller says. First conceived by Stephen Hawking and theorized to have formed in the immediate moments after the Big Bang, primordial black holes have yet to be discovered; each would have a mass similar to that of an asteroid. If they do exist, they would follow the distribution of dark matter, with many of them in the galactic centre. Much like dark matter, they would barely interact with ordinary stars and planets. Only neutron stars would be dense enough to capture them.

Nor should we worry about one hitting Earth. “In the entire history of our planet there is a chance of between 1 in 10 000 and 1 in 100 000 that one of these primordial black holes would pass through Earth,” explains Fuller. “It would just go right through and certainly wouldn’t be stopped by the Earth.”
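Fuller’s quoted odds can be reproduced, at the order-of-magnitude level, with a simple flux estimate. Every input below is an assumption chosen for illustration (the black-hole mass in particular), not a number from the paper.

```python
import math

# Rough rate estimate for a primordial black hole (PBH) passing through Earth.
# All inputs are illustrative assumptions; the quoted odds depend strongly on
# the assumed PBH mass and the fraction of dark matter they make up.

RHO_DM  = 7e-25             # local dark-matter density, g/cm^3 (~0.4 GeV/cm^3)
F_PBH   = 0.01              # fraction of dark matter in PBHs ("a few per cent")
M_PBH   = 1e21              # PBH mass, g (asteroid-like, assumed)
V_REL   = 2.3e7             # typical relative speed, cm/s (~230 km/s)
R_EARTH = 6.4e8             # Earth radius, cm
AGE     = 4.5e9 * 3.15e7    # age of Earth, s

n_pbh  = RHO_DM * F_PBH / M_PBH                  # PBH number density, cm^-3
rate   = n_pbh * V_REL * math.pi * R_EARTH**2    # encounters per second
n_hits = rate * AGE

print(f"Expected PBH passages through Earth over its history: {n_hits:.1e}")
# ~3e-5 for these inputs, i.e. odds of very roughly 1 in 10 000 to 1 in 100 000
```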

Things now hinge on observations. Merging neutron stars produce gravitational waves as they spiral into a collision. The Advanced LIGO gravitational wave detector should be able to detect these final stages of a merger at a rate of at least 40 per year, out to a distance of 650 million light years.

“That’s the supreme court in my view,” says Fuller. “If we see binary neutron star mergers with LIGO, then we’ll get a thumbs-up or a thumbs-down on whether the merger rate is high enough.”

Smoking gun

Meanwhile, we may already have found evidence for neutron star–primordial black hole interactions without realizing it. Mysterious fast radio bursts (FRBs) could originate from the neutron-star implosions. The destruction of neutron stars could also explain why fewer pulsars are found in the galactic centre than expected, while beta decay during the r-process could account for the anomalous 511 keV positron signal that comes from the centre of the Milky Way.

More direct evidence for neutron star implosions could come in the form of kilonovae – bursts of light with a tenth to a hundredth of the brightness of a normal supernova and which are currently thought to be the “smoking gun for a binary neutron star merger and r-process production”, says Fuller. However, if we detect a kilonova within 650 million light years without any accompanying gravitational waves, “that would be suspicious and would look a little bit like the destruction of a neutron star, either by our black hole scenario or by eating some other kind of dark matter and being destabilized”.

There could still be a bump in the road. Earlier this year, Tim Linden of Ohio State University and Joseph Bramante of the Perimeter Institute published a pre-print on arXiv suggesting that low-mass dark matter particles could also accumulate within neutron stars, causing them to implode.

Linden says that he and Bramante calculated the interaction rate between neutron stars and primordial black holes as being “significantly smaller” than that calculated by Fuller’s team. “This is primarily due to different assumptions for the velocity dispersion of neutron stars and primordial black holes very near the galactic centre,” Linden explains. The two groups plan to sit down in the coming months and address these differences.

From hype to hyperloop

In the late summer of 1864, anyone wanting to travel along the east side of Crystal Palace Park in London could buy a train ticket for sixpence – but this was no ordinary railway. Designed by the British engineer Thomas Webster Rammell, the Crystal Palace pneumatic railway consisted of a carriage that fitted snugly inside a tunnel, such that when a huge fan was turned on, the carriage was sucked from one end of the tunnel to the other. Average speeds of around 40 km/h meant that passengers could make the 550 metre trip in a little under a minute – twice as fast as the carriage’s horse-drawn competitors.

Rammell’s pneumatic railway was experimental, and it ran for only two months. A century and a half later, however, the idea of getting from A to B inside depressurized passages is back, thanks to another entrepreneurial visionary: Elon Musk, the South African-born, Canadian-American multibillionaire behind Tesla electric cars and SpaceX rockets. In 2013 Musk published a white paper outlining the concept of a hyperloop: an evacuated steel tube through which passenger “pods” travel cheaply and efficiently over continental distances. Thanks to the minimal air resistance, Musk claimed, the pods could be accelerated to speeds of up to 1220 km/h (760 mph).

The hyperloop sounds almost too good to be true, and many critics have said as much, branding Musk’s idea impractical, unsafe and – for various political and economic reasons – unrealizable. But in the four years since Musk’s white paper, at least three major start-ups have been created, and dozens of academics and industry professionals have climbed on board – figuratively if not yet literally. Their hope is to revolutionize public transport and, in so doing, restructure society for the better.

Simple on paper

Few deny the basic principles behind the hyperloop. At atmospheric pressure, air resistance mounts swiftly with speed, which is why supersonic jets tend to fly at high altitude. To avoid consuming huge amounts of energy, therefore, a near-sonic or supersonic vehicle at ground level needs an evacuated environment in which to travel. A tube is the obvious solution, although one containing a near vacuum would be vulnerable to the tiniest crack or leaky seal. For that reason, Musk proposed a tube containing merely low-pressure air, at about one millibar.
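The payoff of that residual vacuum is easy to quantify: aerodynamic drag scales linearly with air density, so dropping the pressure by a factor of 1000 cuts drag by the same factor. Below is a minimal sketch; the pod’s drag coefficient and frontal area are assumed values, not figures from Musk’s white paper.

```python
# Drag force: F = 0.5 * rho * v^2 * Cd * A. Geometry below is assumed.

RHO_SEA_LEVEL = 1.225                   # air density at 1 atm, kg/m^3
RHO_TUBE      = RHO_SEA_LEVEL / 1000    # ~1 mbar is roughly 1/1000 atm
CD, AREA      = 0.3, 4.0                # drag coefficient, frontal area m^2 (assumed)
V             = 300.0                   # pod speed, m/s (~1080 km/h)

def drag(rho, v, cd=CD, area=AREA):
    """Aerodynamic drag in newtons."""
    return 0.5 * rho * v**2 * cd * area

print(f"Drag at 1 atm:  {drag(RHO_SEA_LEVEL, V)/1000:.0f} kN")
print(f"Drag at 1 mbar: {drag(RHO_TUBE, V)/1000:.2f} kN")   # ~1000x smaller
```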

This residual air brings a problem, however: at high speeds a snugly fitting vehicle would have to push an entire column of air ahead of it – “not good”, in Musk’s words. The entrepreneur therefore proposed mounting a compressor fan in the nose of the hyperloop pod to transfer air backwards. In fact, he said, the air could even be channelled beneath the pod, creating a cushion for the pod to ride on, like an air-hockey puck. Meanwhile, contactless linear induction motors, placed at intervals along the tube, would supply an alternating magnetic field to accelerate the pod.

Musk claimed that he and his SpaceX company were too busy to work on a hyperloop themselves (other projects in the works include a plan to colonize Mars), but he encouraged others to pick up the baton. Within months a German entrepreneur, Dirk Ahlborn, obliged by setting up Hyperloop Transportation Technologies (HTT) in the US; hot on his heels came Shervin Pishevar, an Iranian-American entrepreneur who was reportedly responsible for persuading Musk to release the hyperloop white paper in the first place. Pishevar called his US company Hyperloop Technologies, though it was subsequently rebranded as Hyperloop One.

Both HTT and Hyperloop One claim to have amassed investments of $100m or more. Both, too, have revised various aspects of Musk’s original design, favouring different implementations of magnetic levitation, or “maglev”, over air cushioning. But concrete advances have been slower. HTT has gone quiet on previous claims that it would have a prototype hyperloop running as soon as 2018. Hyperloop One has delivered more visible progress, carrying out a linear-motor propulsion test just north of Las Vegas, US, in May 2016; on the other hand, its “first flight” of a fully functioning hyperloop, scheduled for early 2017, had not yet taken place at press time.

A third start-up, TransPod, entered the scene in 2015. Although this Canada-based firm has had less public exposure than HTT and Hyperloop One, co-founder Ryan Janzen believes it stands a better chance of success because none of its major components are going to come off the shelf; instead they are all being designed specifically to suit the needs of their hyperloop technology, drawing on expertise from across the rail, aerospace and space, and architecture sectors. “I like to say that we’re building a spacecraft that’s shaped like a plane, and operates like a train,” Janzen says.

TransPod hopes to deliver a “commercially viable product” by 2020, and has developed algorithms that can design optimal routes between cities, taking into account geography and existing infrastructure. One of the routes it is considering is the 550 km stretch between Toronto and Montreal, which currently takes one and a half hours by plane or up to six hours by car. A hyperloop, TransPod claims, could cut this journey time to 45 minutes.

A photo of Hyperloop One's test track in Nevada: a long tube in the desert

Like many hyperloop proponents, Janzen believes the infrastructure cost would be roughly similar to that posed by high-speed rail, which is seen as the main competitor. But many independent engineers are sceptical about this, given the cost overruns that often occur with major infrastructure projects, even when the technologies involved are well-established. (In the UK, for example, cost estimates for a proposed north-to-south high-speed railway have spiralled from £30bn to more than £80bn.) And as control-systems engineer Roger Goodall at Loughborough University in the UK explains, cost is not the only potential barrier. Among his concerns are the integrity of evacuated tubes over large distances, especially when tubes have to fork into different routes, and the possibility that passengers would have to stomach accelerations of 0.5 g on banked curves. “I suspect that working, eating and certainly moving around during the journey would not be a possibility,” he says. “Overall, it seems an interesting thought exercise for STEM students, [but] I am astonished by the substantial developments going on in the US.”
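Goodall’s comfort concern follows directly from circular-motion kinematics: the curve radius needed to hold the curving acceleration at 0.5 g grows with the square of the speed. A quick check, using the 0.5 g limit he cites and comparing a high-speed-rail speed with the hyperloop’s headline figure:

```python
# Minimum curve radius for a given curving acceleration: r = v^2 / a.
# 0.5 g is the comfort limit quoted above; speeds are for comparison.

G = 9.81  # m/s^2

for v_kmh in (300, 1220):          # high-speed rail vs hyperloop top speed
    v = v_kmh / 3.6                # convert km/h to m/s
    r = v**2 / (0.5 * G)           # radius that keeps the acceleration at 0.5 g
    print(f"{v_kmh} km/h -> minimum curve radius ~{r/1000:.1f} km")
```

At full hyperloop speed the track could bend no more tightly than a roughly 23 km radius, which is why proposed routes must be almost dead straight.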

Others, though, have been less quick to dismiss the idea. Carl Brockmeyer, head of business development at the Germany-based vacuum technology company Leybold, read about hyperloops after they were first proposed and immediately wanted to get involved. “We’re not the type of people who say, ‘You’re crazy’,” he explains. “We’re the type of people who say, ‘Cool, how can we help?’” Leybold is now working with both Hyperloop One and HTT.

Brockmeyer isn’t fazed by the scale of the vacuum system required. He points out that Leybold helped to deliver the 27 km long vacuum system at the Large Hadron Collider at CERN on the Franco–Swiss border; that system needed pressures in the region of 10⁻¹¹ millibar, some 11 orders of magnitude lower than a hyperloop would require. “I don’t want to say ‘simple’, but let’s say it’s very achievable,” Brockmeyer says, referring to hyperloop’s pressure requirements. “We’ve delivered vacuum systems that are technically far more challenging.”

Old news?

Indeed, perhaps the hyperloop is not as cutting-edge as it appears. In the early 1980s researchers at the Swiss Federal Institute of Technology in Lausanne (EPFL) began investigating the possibility of creating an underground network of tunnels to connect the major cities of Switzerland. Known as Swissmetro, the system would have employed maglev trains travelling through reduced air pressures at speeds of up to 450 km/h.

The initial reception for Swissmetro was positive: a preliminary study was supported by the federal government, a more substantial analysis backed by the state and private sector followed, and by the late 1990s there were grounds for industrial development. But within a few years the government’s interest had waned amid claims that the system was not economically viable. Marcel Jufer, the engineer who led the EPFL group, believes the reason was that the government had already committed to building the Gotthard Base Tunnel, which runs under the Alps between Erstfeld and Bodio and is now, at 57 km, the world’s longest railway tunnel. After Swiss voters approved this north–south tunnel in 1992, there was simply no money left for an east–west Swissmetro, which Jufer says would have cost about the same. Whatever the real reason, in 2009 the Swissmetro company went into liquidation, although the EPFL group went on to discuss similar projects in South Korea and Belgium.

Should Swissmetro be taken as a salutary lesson for eager proponents of hyperloops? Jufer does not think that the new start-ups will necessarily suffer the same fate, but he knows not to underestimate the politico-economic challenges. HTT has bragged that landowners will welcome elevated hyperloop tubes running across their property in return for in-kind benefits such as free electricity, but Jufer believes the tubes would be better off buried underground to avoid any possibility of nimbyism. Though that might sound like a more expensive option, John Miles, an engineer at the University of Cambridge in the UK, points out that hyperloop tunnels would only need to be a fraction of the diameter of railway tunnels and so costs would be dramatically less.

An artistic impression of a station building for a hyperloop facility

Jufer also stresses the influence of vested interests, in the form of existing rail companies. Contrary to the vision of, for example, Hyperloop One, which is exploring routes between major European cities – partly, it seems, as a marketing exercise – Jufer believes a better place to start would be somewhere like Brazil, which does not already have strong rail infrastructure. “It takes a long, long time to overcome political problems,” he says.

Obstacles to overcome

Hyperloops have no shortage of other criticisms. Sceptics have claimed that the systems will be highly susceptible to everything from power outages and acoustic noise to earthquakes and terror attacks. Although some hyperloop proponents have compared the technology to airplane travel, “a plane does not travel at 1000 km/h a few centimetres from a steel wall,” observes Paolo Chiggiato, who leads the vacuum, surfaces and coatings group at CERN. “In case of a lack of electrical power, the vessel would inevitably touch the walls or the rails on which it is normally suspended. If a shock provoked a failure in the vessel tightness, the passengers would be rapidly surrounded in vacuum.” A pressure of 1 mbar, he notes, is equivalent to atmospheric pressure 50 km above the Earth’s surface – more than four times higher than the cruising altitude of a typical jetliner.

There has been backroom controversy, too: last year, Hyperloop One had to settle a lawsuit filed by one of its co-founders and three other employees alleging corporate malpractice. And within the academic community the debate has not always been constructive, as rail engineer John Preston at the University of Southampton in the UK found when he attended a transportation conference in South Korea in June 2017. “There was an interesting clash between the mainstream maglev supporters and the hyperloop ‘interlopers’ on comfort and cost, but with little clarity on either except for a diversion on virtual windows,” he says.

Despite these critiques, though, some experts think it is lazy to dismiss hyperloops out of hand based on the futuristic appearance of the technology. “The natural inclination of everyone is to say it’ll never happen,” says Miles. “And before you get excited about anything you should always do some calculations. But having done those, I found that I became more inclined to believe it could happen, rather than less inclined.” Miles persuaded his former employers, the international consultancy Arup, to begin offering expertise to Hyperloop One on a non-contractual basis.

Miles is well aware of the potential technological problems. “If you put high voltage inside a vacuum tube, you end up with what is effectively a strip light,” he jokes, by way of example. But he points out that hyperloop’s key components – propulsion, levitation, guidance, control and reduced pressure – are all technologies that have been well established in different spheres. The goal now is to get them to work in concert.

The new Tube

The Victorians would not have been daunted by such a challenge. While passers-by marvelled at Rammell’s pneumatic railway, engineers elsewhere in London were toying with the dubious idea of an extensive underground railway, at a time when most of the city’s inhabitants were still travelling by horse and cart. “The ability to effectively introduce the London Underground at the national scale, if you could do it, would quite simply transform the economic outlook for the UK,” says Miles. “Yes, it’s quite a challenge. But having spent a fair bit of time on this now, I’ve yet to see anything that I’d regard as a showstopper.”

New gold standard for determining protein structure

Naoki Kunishima and his team at RIKEN’s SPring-8 Center in Japan have found that a new technique for determining protein structures is better suited to structure-based drug design than conventional X-ray crystallography using synchrotron radiation (SR). The main advantage of this technique, called serial femtosecond crystallography (SFX), is that it can be performed at room temperature without significant noise from radiation damage, whereas SR relies on cryo-cooling proteins to reduce radiation damage to an acceptable level.

SFX works by exploiting ultrashort pulses of high-energy radiation delivered by an X-ray free-electron laser (XFEL), which is similar in size to a synchrotron but has a linear configuration. To date there are two such facilities, one in California and the one in Japan used by Kunishima and his team, while a third will soon be operational in Germany.

SFX has been optimized for protein crystallography over the last five years. It exploits diffraction from protein crystals, in the same way as synchrotron-based crystallography. Although the radiation used is so intense that it destroys the crystal after a few femtoseconds of exposure, the femtosecond speed of the procedure allows the diffraction data to be captured before radiation damage and the explosion of the crystal can add noise to the data. A full dataset is obtained by working through many crystals each at a fixed random angle, instead of rotating one crystal to take multiple images as in synchrotron techniques. A water- or oil-based stream moves the crystals in line for the radiation pulse.
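One consequence of the fixed-random-angle approach is that SFX is crystal-hungry: covering every orientation by chance takes many more shots than there are orientations to cover. The toy Monte Carlo below illustrates the scaling; the 100-bin discretisation of orientation space is an arbitrary assumption, purely for illustration.

```python
import random

# Toy model of serial data collection: each crystal contributes one still at
# a random fixed orientation, and a "complete" dataset needs every
# orientation bin covered at least once. Bin count is arbitrary.

random.seed(1)
N_BINS = 100    # coarse discretisation of orientation space (assumed)

def crystals_needed(n_bins):
    """Number of random stills until all orientation bins are hit."""
    covered, shots = set(), 0
    while len(covered) < n_bins:
        covered.add(random.randrange(n_bins))   # one still, random angle
        shots += 1
    return shots

trials = [crystals_needed(N_BINS) for _ in range(200)]
print(f"Median crystals to cover {N_BINS} bins: {sorted(trials)[100]}")
# Typically ~520 shots for 100 bins (the coupon-collector n*H(n) scaling),
# hinting at why SFX consumes far more crystals than rotation methods.
```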

SFX crystals are more reproducible

In this new study the researchers crystallized thermolysin, a very stable protein that has previously served as a model protein for crystallography, and soaked its ligand into the crystal (Acta Cryst. D73, 702–709). They used SFX to obtain three structures of the complex, and SR for two more. Comparing the structures with each other, and with previously published structures, Kunishima and colleagues found that SFX yields structures that better reflect the physiological state of both the protein and the water surrounding it. The SFX structures were also much more reproducible than the SR versions.

The big advantages of SFX for generating such highly reproducible structures are that it operates at room temperature and causes less radiation damage. In contrast, SR crystallography requires the crystals to be cooled in liquid nitrogen to keep radiation damage within acceptable limits, which in turn requires cryo-protection agents to prevent water crystals from forming inside the protein crystal and destroying it. Kunishima and his team showed that these cryo-protectants affect the conformations of amino acids in the crystal and consequently the observed structure, while cryo-cooling also shrinks the crystal cell dimensions by up to 2.8%.

The devil is in the detail

The authors believe that the more physiological structures obtained by SFX are a better basis for drug design than SR-based structures. Knowing the details of amino-acid conformations – especially those of the ligand and its binding site – is crucial for predicting how drugs can bind to target proteins, and this study shows that these details might be altered by radiation damage or cryo-cooling in conventional SR crystallography. Kunishima and his team conclude that SFX is the better choice for structure-based drug design, as long as plenty of crystals are available to feed into an SFX crystal stream.

The American eclipse: wonder, science and festivities

 

by David Appell in Salem, Oregon, US

The Moon partially blocks the Earth’s view of the Sun at least twice a year, but the 21 August total solar eclipse – the “Great American Eclipse” – is “likely to be the single most viewed natural phenomenon in the history of America”, according to Randall Milstein, an astronomy instructor at Oregon State University. He says a total of 324 million people live within a 9-hour drive of the path of totality.

While the total solar eclipse will span the US – the first to do so since 1918 – the UK will see only a slight partial eclipse, in which the Moon covers a small sliver of the Sun. Starting over Belfast at 7:37 p.m. BST and leaving Plymouth at 8:33 p.m. BST, this partial eclipse will extend to eastern continental Europe. But it will be a 4% blockage at best – so be sure to use eclipse safety glasses!


Documentary explores the history of astronomy in China

By James Dacey

A new documentary explores the development of astronomy in China, taking viewers from the protoscience of ancient China through to the nation’s ambitious space exploration programmes of today. Directed by Beijing-based filmmaker René Seegers, the film has recently been broadcast on Shanghai Television along with screenings at a range of academic institutions, cultural and scholarly societies and embassies throughout China. Now, you can watch the film on the Physics World YouTube channel (with English subtitles).

“The Ancient Chinese believed that Heaven was a power, or a deity, which judged humans. Heaven was responsible for weather and for natural disasters. It was not a realm accessible to humans,” explains Ying Da, the documentary’s presenter. Ying is a media personality who shot to fame in China for directing the family sitcom I Love My Family (1993–1994).

Of course, in recent times Chinese scientists and engineers have taken a much more proactive approach to understanding the cosmos. Since the People’s Republic of China launched its first satellite in 1970 (Dong Fang Hong I), the nation has been ramping up its space programmes. The documentary takes viewers to observatories and the final construction phase of the Five-hundred-meter Aperture Spherical Telescope (FAST), the largest single-dish radio telescope on Earth. It also joins Chinese scientists in Antarctica and explores the leading role China is playing in the construction and operation of the Thirty Meter Telescope in Hawaii.


Coherent neutrino scattering seen with compact detector

Detecting neutrinos is one of the hardest tasks in particle physics, owing to the extremely low rate at which they interact with matter. Huge quantities of matter must be monitored just to catch a precious few events. Now, however, researchers have unveiled a technique for catching neutrinos with much smaller detectors. It could potentially lead to extensions of the Standard Model of particle physics, and it may also have practical applications in nuclear non-proliferation.

Neutrinos were first detected in 1956 through inverse beta decay – an observation that would eventually win the 1995 Nobel Prize in Physics. Neutrinos are detected via the weak interaction, which is mediated by the exchange of charged W and neutral Z bosons. In inverse beta decay, an antineutrino scattering off a proton exchanges a W boson, producing a neutron and a positron. Although this can provide valuable information, it can only detect neutrinos with fairly high energy.

Alternatively, one can detect the recoil of target particles that exchange neutral Z bosons with neutrinos. This weak neutral current was first observed in 1973 at CERN’s Gargamelle detector in Geneva. The following year, theoretical physicist Daniel Freedman of the US National Accelerator Laboratory in Illinois predicted that the interaction probability of a low-energy neutrino with an entire nucleus could be around 100 times greater than with a single nucleon. “A particle with a long wavelength is essentially delocalized over a relatively large distance,” explains Juan Collar of the University of Chicago. “So the Z boson is effectively probing the whole nucleus and interacting with all the nucleons at the same time. To a good approximation, the probability of interaction scales with the square of the number of neutrons in the nucleus.” In principle, this so-called coherent scattering could produce a much stronger signal.
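The N² scaling that Collar describes is easy to tabulate for common target nuclei. The neutron numbers below are standard isotope data; the comparison itself is only the leading-order approximation he mentions.

```python
# Coherent elastic neutrino-nucleus scattering: to a good approximation the
# cross-section per nucleus scales as the square of its neutron number N.

targets = {"Na-23": 12, "Ar-40": 22, "Ge-74": 42, "I-127": 74, "Cs-133": 78}

base = targets["Na-23"] ** 2
for name, N in targets.items():
    print(f"{name}: N^2 = {N**2:5d}  ({N**2 / base:4.1f}x Na-23)")
# The heavy caesium and iodine nuclei in the COHERENT crystal are favoured
# precisely because of this N^2 enhancement.
```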

Grave difficulties

Unfortunately, a heavy, neutron-rich nucleus recoils only very slightly from the impact of a low-energy neutrino. Freedman wrote in his 1974 paper that his proposal of coherent neutrino scattering might “be an act of hubris, because the inevitable constraints of interaction rate, resolution and background pose grave experimental difficulties.” In the intervening years, however, dark-matter researchers have refined the detection of low-energy nuclear recoils in detectors built to search for hypothetical WIMPs (weakly interacting massive particles). “We have profited from all this knowledge,” says Collar.

Collar and colleagues in the COHERENT collaboration, which includes scientists in Russia, Canada, Korea and various parts of the US, performed their experiment at the world’s most intense pulsed neutron source, at Oak Ridge National Laboratory in Tennessee, where vast numbers of neutrinos are also generated. The researchers found a basement corridor, well screened from cosmic rays and shielded by an extra 12 m of concrete and gravel, that filtered out almost all neutrons. “Of course the neutrinos go through it like it’s not there,” says Collar. There they placed a target comprising just 14.6 kg of sodium-doped caesium iodide (traditional neutrino detectors require thousands of tonnes of material) and identified the nuclear recoil from coherent neutrino scattering by measuring the increase in their signal during each pulse. Over 15 months, the researchers acquired clear evidence of coherent neutrino scattering.
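The counting logic of a pulsed-source search can be sketched with invented numbers (the real counts are in the Science paper): the beam-off windows fix the steady background, and the signal is the excess during beam-on windows.

```python
import math

# Beam-on/beam-off counting sketch. Both counts below are hypothetical,
# purely to illustrate the method, not COHERENT's actual data.

beam_on_counts = 547   # counts recorded in beam-on windows (invented)
background_est = 405   # background expected from beam-off windows (invented)

excess = beam_on_counts - background_est
significance = excess / math.sqrt(beam_on_counts + background_est)  # crude
print(f"Excess: {excess} counts, ~{significance:.1f} sigma")
```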

Freedman, now at Stanford University, is impressed: “This process was in the background of people’s thinking,” he says. “If [the researchers] had ruled out a signal with confidence, it would have undermined not just the Standard Model but basic quantum mechanics.”

Long list of questions

The researchers have already provided new constraints to potential interactions between neutrinos and quarks that might result from some extensions to the Standard Model. Future results, says Collar, may help to answer key questions, such as whether the neutrino has an intrinsic magnetic moment and whether there are additional “sterile” neutrinos that do not interact through the Standard Model. “The list [of questions] is actually very long,” he says.

Theoretical particle physicists Joel Walker and James Dent of Sam Houston State University in Texas, who were not involved, are excited. “The neutrino sector is a very strange sector that has surprised the physics community several times already,” says Walker. Dent adds, “It would have been very surprising to people if the Standard Model signal had not been there, but that doesn’t mean it’s the only signal there. Part of our excitement is that now we can start testing this section of the Standard Model for beyond-the-standard-model physics.” Fellow theorist Patrick Huber of Virginia Tech has proposed the technique could detect “breeding blankets” placed around nuclear reactors to produce weapons-grade plutonium. “For many years, people like myself have been saying ‘Let’s assume we could detect coherent neutrino-nucleus scattering’. Now it’s happened, all these what-if scenarios become real!”

The research is published in Science.

Why being average is bad news for ants

For an ant that has fallen into a pit dug in the sand by the larvae of “antlion” insects, the ability to climb up a granular slope is a matter of life or death. Now, a group of scientists in France has discovered why certain medium-sized ants are unlikely to make it out of these conical, centimetre-sized traps alive – no matter how hard they try. The physics of friction, the researchers found, dictates that these ants are heavy enough to deform a pit’s sandy slope but not heavy enough to create stabilising footprints. Instead, the unlucky creatures slide to the bottom of the pit and are eaten alive.

It was a 17th-century French scientist – physicist Guillaume Amontons – who formulated three laws of friction still in use today. The first states that the frictional force experienced by an object resting on a surface is proportional to that object’s gravitational force – and hence its mass. But in the latest research, Jérôme Crassous of the University of Rennes and colleagues have shown that friction’s dependence on mass is much more complicated for tiny objects on granular surfaces, and it is this complexity that determines the fate of ants in antlion pits.

Box of beads

Crassous and co-workers filled a box with glass beads of varying sizes, tipping the box each time so that the beads formed a slope with a range of angles approaching that at which avalanches most readily cascade down a mountainside – the angle that antlions also use in their pits. The researchers then rested small discs of metal covered in cardboard on the granular slope and recorded how easily the discs slid down it, repeating the exercise many times over. This they did for discs of various masses and with differing surface areas in contact with the slope.

The researchers found, as expected, that the probability of a disc sliding to the bottom of the slope increased as the slope’s angle approached the characteristic avalanche angle (the precise value depending on the size of the beads). What was far more remarkable, they report, was how that probability depended on the pressure that a disc exerts on the slope – in other words, on the disc’s mass. Rather than being independent of pressure, the chances of sliding at a given angle were greatest at particular, intermediate pressures.

To understand what was going on, the researchers photographed the tracks that the discs made in the granular material (in their paper, they actually show tracks made in sand). Very small discs did not slide and so left no tracks, whereas intermediate-sized discs created tracks that became more visible as their mass increased. The most massive objects also created tracks, but because they built up ridges of granular material ahead of them they did not slide far.

Model creatures

Crassous and co-workers were able to model this behaviour mathematically. To investigate the onset of sliding at low masses, they assumed that the beads underneath a disc act like damped springs and that the granular surface destabilises once the vertical force acting on them crosses a threshold. For higher masses, they devised an expression for the resistive force imparted on a disc by the accumulated beads, and then used it to work out the pressure at which sliding should cease.
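A toy version of this two-regime picture shows how a window of sliding opens only at intermediate pressures. The threshold and ridge coefficient below are invented for illustration; they are not the paper’s fitted values.

```python
import numpy as np

# Two competing effects, per the description above: below a pressure
# threshold the grains never destabilise; at high pressure a ridge of
# accumulated grains builds a resistive force that halts the slide.
# All parameter values are assumed, in arbitrary units.

P = np.logspace(0, 3, 7)     # pressure exerted by the disc
P_DESTAB = 5.0               # grains destabilise above this pressure (assumed)
RIDGE_COEFF = 4e-3           # strength of ridge build-up (assumed)

driving   = np.where(P > P_DESTAB, P, 0.0)   # slope fails only above threshold
resisting = RIDGE_COEFF * P**2               # ridge force grows faster than load

for p, slides in zip(P, driving > resisting):
    print(f"P = {p:7.1f}: {'slides' if slides else 'stays put'}")
# Only the intermediate pressures slide -- the fate of the 'average' 2 mg ant.
```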

The researchers also carried out a separate experiment in which they used a sensor to measure the frictional force on an object as it moved along a horizontal granular surface. They found, in agreement with their slope experiment, that the lowest coefficient of friction occurred for objects exerting a small, but not very small, pressure. As they point out, previous studies of friction on granular surfaces – which do not report such a dip in the coefficient – were carried out using larger masses.

Ant magic

Applying their work to the natural world, the researchers compared their results to those reported two years ago by a group comprising two of the current team (Antoine Humeau and Jérôme Casas of the CNRS Insect Biology Research Institute in Tours). That earlier research investigated which of various sized ant species – the biggest weighing in at 8 mg – an antlion could most readily capture in traps built from small glass beads. The easiest prey, in fact, were ants weighing about 2 mg. In other words, both studies indicate that it is ants with an intermediate mass that are most vulnerable to antlions.

Daniel Goldman, a biomechanics expert at the Georgia Institute of Technology in the US, praises the “creative” work of the French group, agreeing that it “points to novel granular slope physics which could be relevant to antlion prey capture”. He also believes the research could have practical applications, helping scientists to better tune robots’ motion so that they generate appropriate pressures when ascending sandy slopes on Earth and other planets.

The research is published in Physical Review Letters.

Cosmic-ray detector heads to the International Space Station

The Cosmic Ray Energetics And Mass for the International Space Station launches from NASA’s Kennedy Space Center in Florida (Courtesy: NASA TV)

NASA has launched a space-based probe that will study the origins of highly energetic particles known as cosmic rays. Sent into space by a SpaceX rocket yesterday, the Cosmic Ray Energetics And Mass for the International Space Station (ISS-CREAM) experiment will now be installed on the Japanese Experiment Module, where it will study cosmic rays for three years.

Cosmic rays zoom through space at nearly the speed of light and consist of a range of particles, from protons to heavier atomic nuclei. When cosmic rays enter the Earth’s atmosphere they collide with air molecules, setting off a cascade of secondary particles. While Earth-bound detectors see only these secondary particles, a probe above Earth’s atmosphere can spot the primary particles themselves.

ISS-CREAM is a successor to six similar missions that have flown on long-duration balloons, which began in 2004 with the first flight of the Cosmic Ray Energetics and Mass mission. “The mysterious nature of cosmic rays serves as a reminder of just how little we know about our universe,” says Eun-Suk Seo from the University of Maryland, who is the lead investigator for ISS-CREAM. “This is a very exciting time for us as well as others in the field of high-energy particle astrophysics.”

Plasmonic nanoparticles boost light emission

Plasmonic nanoparticle arrays have the potential to improve the emission efficiency of solid-state lighting. Silver and gold nanoparticles were already known to enhance efficiency at visible wavelengths, but they require a complicated, multi-step fabrication process. Now, researchers at the University of Michigan in the US have embedded plasmonic gallium (Ga) nanoparticle arrays at buried interfaces within semiconducting layers. The nanoparticles can be easily incorporated into targeted areas in a wide range of semiconductor devices, and the resulting structures show improved photoluminescence efficiency at emission wavelengths from the near-infrared to the ultraviolet.

Writing in Journal of Applied Physics, lead authors Myungkoo Kang and Sunyeol Jeon describe how an array of Ga nanoparticles was produced by rastering a Ga+ focused ion beam across a gallium arsenide (GaAs) substrate. The trick is to tilt the beam to an off-normal angle to the substrate, which prompts Ga nanoparticles to self-assemble in close-packed arrays. The diameter of the Ga nanoparticles was controlled by the angle of incidence of the beam on the GaAs surface. Kang and Jeon then grew a layer of GaAs on top of the substrate and the nanoparticles using molecular-beam epitaxy, embedding the Ga nanoparticles within the GaAs material.

To investigate the effect of nanoparticle diameter and overgrown GaAs layer thickness on photoluminescence efficiency, the researchers used a combination of photoluminescence spectroscopy and electromagnetic computational simulations. They found that the optimal combination of nanoparticle diameter and embedment depth led to improved photoluminescence efficiency compared to high-quality GaAs epilayers without embedded nanoparticle arrays.

Polycrystalline or not?

Structural characterization by transmission electron microscopy revealed that the Ga nanoparticles were amorphous, while the overgrown zincblende-structured GaAs layers were polycrystalline. Usually polycrystallinity is undesirable in such devices, so the team is now working on improving the crystallinity of the overgrown material.

As the researchers point out in their paper, however, “polycrystalline III-V compound semiconductors have been proposed for LEDs in large-area displays. Indeed, this new Ga nanoparticle plasmonics approach would enable polycrystalline gain media deposited on large-area substrates to maintain reasonable light-emitting characteristics. Thus, this approach provides an opportunity to enhance the photoluminescence efficiency from a variety of semiconductor heterostructures.”

The work is detailed in Journal of Applied Physics (DOI: 10.1063/1.4990946).

Structure and mechanics determine cell performance

The environment that surrounds cells in a tissue or organ – the extracellular matrix – is arguably just as critical to cell function as the cell itself. Researchers from the University of Illinois at Urbana-Champaign have now devised a lab-based method to alter the stiffness of this surrounding architecture, and have shown that softer environments enhance the functional properties of vascular cells (Biomaterials 140, 45). They also found that applying a mechanical force to the cells – designed to mimic the effect of blood flow over vascular cells – increased functionality in cells with stiffer surrounding environments, which are typically associated with aged or diseased vascular systems.

Deborah Leckband and her team showed that a combination of extracellular stiffness and mechanical force disrupts the vascular system, which then initiates a remodelling of the intracellular architecture. They assessed the functionality of vascular cells by measuring the distance between specialized proteins that sense changes in the physical environment surrounding cells, and then translate them into signals that alter cell function. The researchers found that disturbing the receptors of these so-called gap-junction proteins with a mechanical force had a negative biophysical effect on the cells, confirming that both mechanical and physical properties surrounding a cell affect its functionality.

Stiffness regulation

The team used special biomaterials called hydrogels to alter the stiffness of the extracellular matrix surrounding vascular cells. Hydrogels offer extremely useful properties: they have the structural integrity of a solid but can also attain a water content of more than 99%. By altering the concentration of the primary material within the hydrogel, the researchers were able to vary the stiffness of the surrounding environment between 1.1 kPa and 1 GPa.

In these experiments, the cells were placed on top of a hydrogel, but other researchers have encapsulated cells within a hydrogel to provide a 3D environment. In future, this approach could also be used by the Illinois group, and it can also be applied to many different types of cells, not just to vascular cells.

Vascular translation

With this new research, Leckband and her team have provided a better understanding of how the biophysical properties surrounding vascular cells affect their function. By probing the effect of increased stiffness on vascular cells, they have shown how cell function can be disturbed by the stiffening of the vascular architecture that’s observed with age. The results could also lead to improved strategies for modelling disease, since in vitro models could replicate vascular diseases more effectively by using stiff hydrogels to mimic aged and diseased vascular environments.
