
Blood–brain barrier best breached by small molecules

Focused ultrasound (FUS) can be used to help drugs pass from the bloodstream into the brain, but the technique’s effectiveness depends on the ultrasound pressure and the size of the drug molecules. Michael Valdez and colleagues at the University of Arizona measured how thoroughly differently sized molecules diffused into mouse brains under a range of ultrasound intensities, and found that the largest molecules could not be delivered under any safe FUS regime. The results set a limit on the types of drugs that might one day be used to treat neurological conditions like Alzheimer’s and Parkinson’s disease (Ultrasound Med. Biol. 10.1016/j.ultrasmedbio.2019.08.024).

Usually, the brain is isolated from substances circulating in the bloodstream by the blood–brain barrier (BBB), a semipermeable layer of cells that permits only certain molecules to pass. This restricts the range of drugs that can act in the brain to small, hydrophobic molecules (such as alcohol and caffeine) and other small compounds like psychotropics and some antibiotics. Extending that range would open the door to new therapeutic possibilities, says Theodore Trouard, who led the team. “The ability to temporarily and safely open the BBB to allow drugs into the brain would help address a number of neurological diseases for which there is currently no effective treatment.”

Previous research has shown that such opening can be achieved by focusing an ultrasound beam in the brain while gas microbubbles circulate in the blood. The microbubbles – perfluorocarbon-filled lipid shells about 1 µm across – are inert while they move around the body, but rapidly expand and contract in the local pressure fluctuations caused by the ultrasound field. Mechanical forces exerted by this phenomenon create temporary gaps in the layer of cells that make up the BBB, giving larger molecules a chance to breach the brain’s defences.

More intense ultrasound fields produce a greater effect than weaker fields, and smaller molecules are more likely to diffuse into the brain than larger molecules. To quantify the relationship, Valdez and colleagues injected mice with dextran solutions containing molecules of three different weights: 3, 70 and 500 kilodaltons. They then administered a solution of microbubbles and subjected each mouse to FUS at one of three intensity levels.

When the procedure was complete, the researchers injected a fixing agent to preserve the distribution of the dextran molecules, then removed and sliced the mice’s brains for study. The method echoes past investigations conducted along similar lines, but in the earlier studies, each animal was given just one size of molecule, meaning inherent physiological differences between individuals made the results less certain. This time, Valdez and colleagues labelled each size of dextran molecule with a specific fluorescent marker, so that the distribution of all three could be measured simultaneously in each individual mouse.

Examining the brain slices using fluorescence microscopy, the researchers found that higher ultrasound pressures allowed the dextran to perfuse larger volumes of brain tissue. They noted, however, that the largest (500 kDa) molecules failed to penetrate the brain whatever the ultrasound intensity, and even the intermediate-weight (70 kDa) molecules only spread over small volumes.

Localized “hotspots” on the fluorescence images suggested that, though these dextran molecules had managed to breach the cells of the BBB, their size prevented them from diffusing further into the brain parenchyma. If large molecules like antibodies and other therapeutic proteins are restricted to the parts of the brain immediately next to where they exit the blood vessel, clinicians will need to get around the BBB at multiple sites simultaneously in order to access sufficient volumes of brain tissue.

Along with the three varieties of dextran, Valdez and colleagues also injected each mouse with a gadolinium-based MRI contrast agent. In some trials of the microbubble–FUS technique conducted to date, researchers have used this contrast agent as an easily detectable proxy for the therapeutic compound, confirming with MRI the volume perfused by the drug. Trouard’s team, however, showed that the small molecular weight of the contrast agent means it diffuses into the brain much more readily than larger drug molecules, making it an unreliable indicator of a procedure’s success.

Next, says Trouard, the researchers will investigate whether the BBB can be breached by a specific antibody that targets a protein associated with Parkinson’s disease, and whether it has any effect on the condition’s symptoms in mice.

The relentless march of renewables

In 2013 I wrote a book – Renewables: A Review of Sustainable Energy Supply Options – that was published by IOP Publishing, which also publishes Physics World. The book examined the different types of renewable energy and for each one assessed progress, problems, impacts and opportunities. Yet so much has changed in the past six years that the book is now out of date. So, for the past year I have been working on a second edition – a major update and extension – which I am pleased to say is now available to read.

As with the first edition, the book assesses the current state of play in renewables. It begins by looking at forms of renewable energy – such as hydro projects and wind, wave and tidal-driven devices – in which natural energy flows are tapped and converted into mechanical and then electrical power. For these types, it hasn’t all been plain sailing. Hydro, which has so far seen 1.2 TW of installed capacity, has struggled with negative environmental assessments and, in some cases, unreliable rainfall due to climate change.


Smaller projects, including “run-of-the-river” schemes without reservoirs, are often favoured by environmentalists, but projects with large reservoirs can play an important ‘pumped storage’ role in grid balancing. The same may be true of large tidal barrages and lagoons, although the former are usually opposed by environmentalists as being too invasive. Tidal current turbines have proved to be far more popular and also easier to develop than wave-energy devices, and many projects of all types are under development. We might expect gigawatts soon and eventually terawatts.

However, wind power, both on- and off-shore, has been the big new technology success. As costs fall dramatically and capacity heads for 600 GW, 5-10 TW or more may be possible in the longer term, especially if airborne devices work well. Large floating devices are a breakthrough technology that can operate in deep water far out to sea and, like offshore wind in general, avoid the land-use and visual intrusion issues that have sometimes constrained on-shore wind projects.

Hot topic

The book then looks at systems – like biomass, solar thermal, concentrated solar power (CSP) and geothermal – that use natural sources of heat either directly or to generate electrical energy. Biomass has faced even tougher land-use and eco-impact constraints, and the new book goes through the sometimes rather tortured debates over the impact of forest-derived biomass on carbon balances and carbon sinks, and the impacts of vehicle biofuel production. Views clearly differ on whether biomass can be relied on as a major source of heat, power and transport fuel, but some look to the use of bio-wastes to avoid the land-use problem.


Direct solar heat use has had far fewer problems. It is heading for 500 GWth globally, with heat stores offering a way to use summer solar heat in the winter – but at a price. Focused-solar CSP conversion to power has a large potential but has been less successful so far, with only 5 GW of installed capacity globally. Yet CSP does have a heat-storage option, so it can be run continuously. Other techniques such as solar chimneys, solar ponds and ocean thermal devices are also reviewed. They all hold some promise but are very location specific. Geothermal is also making progress (13 GWe and 28 GWth so far) and the long-term global power potential is large at 2-3 TW or more.

Rapid enrollment

Finally, there are devices, such as solar photovoltaics (PV solar), which convert solar energy directly into electricity. There have been huge breakthroughs in this area. PV solar is now heading for 500 GW globally, with costs falling very rapidly – so much so that multi-TW deployment is planned in the years ahead, maybe as much as 20 TW or more by 2050. Some of this is due to new technology – more efficient high-tech multi-junction cells, some reaching conversion efficiencies of up to 40%.

However, what has really changed is the advent of cheap, easier to mass-produce thin-film or dye-based cells, increasingly using non-toxic materials. They may have lower efficiencies, but they can be rolled out for many new applications – for example, for solar windows. PV is also being used for solar roads, solar carport canopies and now increasingly floating arrays, for example on reservoirs.

This helps to deal with one of the big drawbacks of PV — it takes up space. Unless, of course, you want to actually put PV arrays in space. That deals with the other big drawback of using solar — it gets dark on one side of the planet at night. However, there are cheaper options than launching PV into space and microwaving power back, with new storage options now emerging, along with new long-distance supergrid possibilities.

Balancing power

Renewables now supply over 26% of global electricity, and another key topic that the book discusses is integration, including grid balancing, transmission and storage, followed by a roundup of global progress. Grid balancing and integration is an area that is expanding in importance but is complicated. A simple view is that energy storage will solve everything – especially as it’s getting cheaper. Sadly, it’s more complex than that. Batteries are getting cheaper, but they can only realistically store power for a short time – a few hours or days at most. They can also be used to deal with short-term voltage and frequency perturbations on the grid. But for longer-term variations and long lulls in power availability, you need large bulk storage systems. Pumped hydro can perhaps cope for a few days if it has large reservoirs. For longer than that you need something extra. Options include compressed air and hydrogen gas stored in vast underground salt caverns. Energy top-ups could also be obtained from overseas via supergrid links.

This new supply and storage system would be complemented by a new management system able to shift demand peaks to times when more power is available – for example, through variable pricing so that power costs more at peak times. Optimising it all will be hard, but it should be possible to move towards a balanced, sustainable energy system.

The second edition of the book ends by taking on some anti-renewables contrarian viewpoints. It also explores the scale issue. Some say we should stick to small, local-scale technology. No spoilers here, but the above should indicate that, although local projects can and should play a major role, they may not be enough – larger systems are also needed. It will perhaps also come as no surprise that neither nuclear power nor fossil-fuel carbon capture is seen as playing a major role. Instead, like the first edition, the book tries to present a coherent case for renewables which, six years on, seems stronger than ever.

Heavenly vistas

Humankind has been entranced by the sky for as long as we have walked this Earth. The celestial theatre above our heads, and all its heavenly bodies, are intrinsically linked to our histories and philosophies. Astronomy and the study of the heavens have inspired artists as much as scientists since antiquity, and there exists a large body of work – from paintings and prints to sculptures and etchings – long before photography even entered the scene.

In Cosmos: the Art and Science of the Universe, art historian Roberta Olson and astronomer Jay Pasachoff join forces to put together a treasure-trove of astronomy art, from across time and space. The duo have been working together on the intersection of art and astronomy for over 30 years, and have previously written Fire in the Sky: Comets and Meteors, the Decisive Centuries, in British Art and Science (1998).


Cosmos is a large-format, glossy coffee-table art book, with more than 300 carefully curated illustrations. The works are divided into 10 chapters that, in the main, cover astronomical objects or phenomena such as comets, eclipses and aurorae. The opening chapter, though, covers the practice, and “personification”, of astronomy itself, while the final chapter features a small collection of astrophotography as we know it today.

I found myself flipping through page after page of alluring artworks, stopping at random to marvel at a particular image and then being drawn in to discover more about it, rather than attempting to read the book cover to cover in one go. This is especially true for those who may not be interested in the very detailed art history in the book, which sometimes makes for dry reading – but worry not, for a beautiful image will soon distract you.


This article includes a small selection of my favourite images from the book. I was particularly intrigued by Étienne Léopold Trouvelot’s Solar Protuberances – a stunning chromolithograph (a print-making technique popular for astronomical illustration in the late 1800s) of the Sun. As the authors explain, Trouvelot embedded a grid in his telescope so that he could accurately transfer the images he viewed onto graph paper. In this 1873 image, he used a spectrograph tuned to the red light of hydrogen to study solar prominences – those massive, gaseous loops extending like tendrils from the solar surface. There are a number of Trouvelot’s images in this book – from a total solar eclipse to meteor showers – all of which are scientifically accurate, as are others like astronomer Maria Clara Eimmart’s Phases of Saturn.

Other paintings and sketches are more whimsical flights-of-fancy inspired by the heavens, such as Henri-Edmond Cross’s Landscape with Stars, or the bemusing interpretation of a sword-wielding comet, or battles in the sky. It is clear that Cosmos is a labour of love, and a testament to the authors’ personal interest and knowledge in astronomy and the arts, this most ancient and natural of pairings.

  • 2019 Reaktion Books 304pp £35hb

How physics can improve blood pattern analysis

Police investigators analyse blood stains at crime scenes to build a detailed picture of what has taken place. Drips, smears and spatters are studied to work backwards to reconstruct the locations and actions of people and weapons involved. The procedure of Blood Pattern Analysis (BPA) has played a prominent role in murder trials, including those of sports star O J Simpson (1994–1995, verdict of not guilty) and music producer Phil Spector (2007–2009, retrial verdict of guilty).

But it is a field of forensics that has been called into question. A damning 2009 report from the US National Academy of Sciences (NAS) concluded that BPA lacks scientific rigour and valid accreditation for its practitioners. Since that landmark NAS report, physicists have been helping to develop deeper physical models to underpin BPA. Find out more in this article by physicist and science writer Sidney Perkowitz, originally published in the October 2019 issue of Physics World. Members of the Institute of Physics can enjoy the full issue via the Physics World app.

Jakarta’s rivers send 2000 tonnes of plastic into ocean each year

Jakarta’s rivers discharge more than 2000 tonnes of plastic into the sea each year. That’s about 3% of the Indonesian capital region’s annual “unsoundly disposed” plastic waste, i.e. any that ends up in the natural environment.

To come up with the figure, researchers in the Netherlands measured the amount and type of plastic flowing through the region’s five largest rivers. A better understanding of plastic transport in and around Indonesia’s capital could inform clean-up strategies for Java’s infamously polluted beaches, although the results will not necessarily carry over into other urbanized river systems.

Awareness of the problem of marine plastic pollution is increasing, but a lack of observations means it’s not clear how this material finds its way to the sea. Most studies conducted so far have focused on rivers in Europe and North America even though Southeast Asia is thought to be the region that contributes the most.

Addressing this discrepancy, Tim van Emmerik of Wageningen University in the Netherlands and colleagues looked at the situation in Jakarta. Here pollution from large coastal populations threatens the sensitive marine environment of the Coral Triangle, an area of exceptionally high biodiversity.

Over a two-week period in May 2018, van Emmerik and colleagues counted how much plastic was visible at five locations where Jakarta’s largest rivers run into the Java Sea. They also used nets to trawl for samples at the surface and at a depth of 0.5–1 m. In addition to these five river mouths, the team monitored two locations further from the coast so that they could estimate how much waste originates upstream of the city.

Most of the plastic waste came from Jakarta itself. Although the exact make-up of the material varied from site to site, plastic fragments larger than 1 cm were dominated by bags, films and foils, indicating a domestic origin.

“The plastics found to date are highly representative of typical household waste,” says van Emmerik. “This could mean that individuals dump waste directly into the water system. Or waste bins can leak, plastic can leak on the way from collection point to processing plant, etc.”

Indonesia’s rainy season is in January and February, when river discharge can reach five times the flow rates seen over the observation period in May. To account for this variation, the researchers extrapolated from their measurements assuming a constant ratio of plastic to water.

“For the moment, this is the best working hypothesis available,” says van Emmerik. “Other studies have shown that plastic concentrations can increase, decrease or remain the same with increased river discharge.”
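As a rough illustration of that working hypothesis, here is a minimal Python sketch of a constant-concentration extrapolation, in which a plastic flux measured during a short campaign is scaled to a full year using monthly river discharge. All numbers are hypothetical placeholders, not the study’s measurements, and the function name is invented for this example.

```python
# Sketch of the constant plastic-to-water-ratio extrapolation described above.
# All numbers are hypothetical placeholders, NOT the study's measurements.

SECONDS_PER_DAY = 86_400
DAYS_PER_MONTH = 365.25 / 12

def annual_plastic_load_tonnes(measured_plastic_kg_per_day,
                               measured_discharge_m3_per_s,
                               monthly_discharge_m3_per_s):
    """Scale a short measurement campaign to a full year, assuming the plastic
    concentration (kg of plastic per m^3 of river water) stays constant."""
    concentration = measured_plastic_kg_per_day / (measured_discharge_m3_per_s * SECONDS_PER_DAY)
    monthly_loads = [concentration * q * SECONDS_PER_DAY * DAYS_PER_MONTH
                     for q in monthly_discharge_m3_per_s]
    return sum(monthly_loads) / 1000.0   # kg -> tonnes

# Hypothetical example: a dry-season measurement plus a wetter rainy season (Jan-Dec)
plastic_rate = 2500.0        # kg/day counted at the river mouths (placeholder)
discharge_now = 60.0         # m^3/s during the observation period (placeholder)
monthly_q = [250, 250, 100, 80, 50, 40, 35, 35, 40, 60, 120, 200]   # m^3/s (placeholder)

print(f"Estimated annual plastic load: "
      f"{annual_plastic_load_tonnes(plastic_rate, discharge_now, monthly_q):.0f} tonnes")
```

Under this assumption the annual load simply scales with the total volume of water discharged, which is why the rainy-season months dominate the estimate.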

Based on this assumption, the researchers estimate that plastic entering the sea from Jakarta’s rivers makes up at least 1% of Indonesia’s total emission. This means that the city’s river catchments carry plastic at a rate of more than 20 times the national average on a per-area basis.

Identifying the scale and origin of Jakarta’s waste plastic problem will help remediation efforts on a local scale, but it is not clear how far the results can be generalized.

“So far it seems that every river has its specific plastic-waste DNA,” says van Emmerik. “Understanding plastic transport in detail will help specific cities and catchments in optimizing prevention, mitigation and collection strategies, but more observations are needed before we can generalize plastic sources, sinks, pathways and driving mechanisms.”

Van Emmerik and colleagues reported their findings in Environmental Research Letters (ERL).

US dark-matter detector heads underground

The main component of the LUX-ZEPLIN dark-matter detector has been installed at the Sanford Underground Research Facility in Lead, South Dakota.

The central cryostat for the experiment, which weighs about 2200 kg, was successfully lowered some 1500 m underground last week. Over the coming months, the detector will be wrapped in layers of insulation and then next year be filled with around 10 tonnes of ultra-pure liquid xenon.

LUX-ZEPLIN is expected to begin operation in July 2020 when it will become the largest direct detection dark-matter experiment in the US. It will search for weakly interacting massive particles – a leading dark-matter candidate – with scientists hoping to capture flashes of light that are produced when dark-matter particles interact with the heavy xenon atoms.

LUX-ZEPLIN, funded by the US Department of Energy (DOE), is expected to be around 100 times more sensitive than its predecessor, the Large Underground Xenon experiment.

The DOE’s Lawrence Berkeley National Laboratory is leading the construction of the facility; the project includes around 220 participating scientists from 38 institutions around the world.

 

Atomic spins on a surface make good quantum bits

Individual atoms on a surface can be used as quantum bits (qubits) for quantum computing applications. That is the claim of scientists at IBM Research who have shown that they can control the positions of each qubit with atomic precision by manipulating the atoms in a scanning tunnelling microscope (STM). Controlling the position of these qubits also allows the team to modify interactions between pairs of atoms.

“This work is an important step towards using spins on a surface as qubits for quantum computing,” team member Andreas Heinrich tells Physics World. “The STM allows us to build essentially arbitrary structures of such atoms, which makes it possible for us to control how strongly they will interact with each other.”

Classical computers make use of bits that can have one of two values, “0” or “1”. As well as taking these distinct values, qubits can also exist in quantum states that are superpositions of “0” and “1” at the same time. A quantum computer made from such qubits can solve certain problems faster and more efficiently than conventional classical computers. However, the quantum nature of qubits (their quantum coherence) is extremely fragile and can easily be destroyed by interactions with the surrounding environment.

Rabi oscillations

Now, IBM researchers led by Christopher Lutz have used the magnetic spin of a titanium atom to create a qubit that can point in either an up (0) or down (1) direction. They placed the atom on an ultrathin layer of magnesium oxide to protect the quantum nature of its spin and coaxed it into a chosen quantum superposition state. They did this by applying a time-varying electric field with a frequency in the microwave range to the titanium atom. These microwaves come from the tip of the STM and steer the atom’s magnetic direction.

“When tuned to the right frequency, this field can rotate the spin of individual atoms to any angle, where the rotation angle depends on how long we apply the microwaves,” explains Lutz. “This Rabi oscillation takes only about 20 ns to switch the qubit between 0 and 1 and then back again. This technique is known as electron spin resonance (ESR) and it is widely used for measuring the properties of magnetic materials. Here we have applied it to individual atoms.”

At the end of the process, the atom points either in a 0 or 1 direction or a superposition, depending on how long the researchers apply the microwaves. “The technique can create any superposition state we want and we can control and observe these spin rotations using the STM’s extreme sensitivity,” says Lutz.

This new work builds on a major breakthrough by the same group in 2015 in which it combined ESR with STM and used a voltage between the microscope tip and the sample as the driving field. This voltage oscillated at gigahertz frequencies and drove the spin resonance of individual iron atoms placed on a magnesium oxide film. “We now show that we can coherently drive the spin of titanium atoms using microwave frequencies and perform several coherent (perfectly deterministic) spin rotations of the spins before their quantum coherence is lost,” says Heinrich.

Faster spin rotation

The larger the amplitude of the microwaves, the faster the spin rotates, he adds. “To quickly drive these oscillations from spin-up to spin-down, we thus simply turned on the microwave and maintained it at a high amplitude for 20 ns. The Rabi oscillation is a critical step to creating quantum superpositions and to show that we can use certain quantum systems as qubits.”
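The relationship Lutz describes is the textbook Rabi formula for a resonantly driven two-level spin, sketched below in Python. The 25 MHz Rabi frequency is inferred here from the 20 ns flip time quoted above, assuming resonant driving and negligible decoherence during the pulse; it is an illustrative figure, not one reported by the team.

```python
import numpy as np

# Textbook Rabi oscillation for a resonantly driven two-level spin (illustration only,
# not the IBM team's analysis). After driving for a time t, the probability of finding
# the spin in |1> is P1(t) = sin^2(Omega*t/2), where the Rabi frequency Omega scales
# with the microwave amplitude.

def p_excited(t_seconds, rabi_freq_hz):
    omega = 2 * np.pi * rabi_freq_hz
    return np.sin(omega * t_seconds / 2) ** 2

# A full 0 -> 1 flip (a "pi pulse") in the quoted 20 ns corresponds to Omega*t = pi,
# i.e. a Rabi frequency of 1/(2 * 20 ns) = 25 MHz (assumes resonant driving and
# negligible decoherence during the pulse).
t_flip = 20e-9                      # seconds
rabi_freq = 1 / (2 * t_flip)        # 25 MHz
print(f"Rabi frequency: {rabi_freq / 1e6:.0f} MHz")
print(f"P(|1>) after 20 ns: {p_excited(t_flip, rabi_freq):.2f}")       # ~1.0 (full flip)
print(f"P(|1>) after 10 ns: {p_excited(t_flip / 2, rabi_freq):.2f}")   # 0.50 (equal superposition)
```

Halving the pulse length therefore leaves the spin in an equal superposition of 0 and 1, which is how the team dials in any superposition state it wants.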

The story does not end there though. Since these single-atom qubits are highly sensitive to magnetic fields, they might also be used as quantum sensors to measure the weak magnetism or electric fields of nearby atoms, say the researchers. “Combined with our ability to move atoms with atomic-scale precision with the tip of an STM – a technique that was pioneered at IBM – we can now also probe the magnetic or electric fields of engineered nanostructures or unknown molecules with atomic-scale precision too,” says Lutz.

The team, which includes researchers from the Center for Quantum Nanoscience at the Institute for Basic Science (IBS) and Ewha Womans University, both in Seoul in Korea, and the Clarendon Laboratory at the University of Oxford in the UK, now plans to optimize the local environment of the atomic qubits to improve their quantum coherence time. “For example, we will try different surfaces and types of magnetic atoms,” says Lutz. “We would also like to design and build atomic structures containing more magnetic atoms to explore quantum entanglement for quantum simulations.”

The research is described in Science.

Microswimmers manipulate single particles and cells

The ability to precisely transport and position individual cells and microscopic particles in fluids could provide a powerful tool for a wide range of biomedical applications, including targeted drug delivery, nanomedicine and tissue engineering. With this goal, scientists in the USA and China have created acoustically powered bubble-based microswimmers that can manipulate individual particles and cells in a crowded environment without affecting nearby objects (Sci. Adv. 5 eaax3084).

The microswimmers comprise a 7.5-µm long, half-capsule-shaped polymer structure, with an outer diameter of 5 µm and a shell thickness of 500 nm. The researchers coated the capsules with a 10 nm layer of magnetic nickel, enabling them to be steered with magnets, followed by a 40 nm gold layer. They then modified the inner surface to be hydrophobic, such that when submerged in fluid, an air bubble is spontaneously trapped inside each capsule.

When exposed to a megahertz acoustic field, the encapsulated bubble pulsates and the microswimmers align themselves normal to a nearby boundary and statically hover on it. Applying an external magnetic field initiates translational motion, with both the direction and speed of this motion precisely regulated by the direction of the applied magnetic field.

“These microswimmers provide a new way to manipulate single particles with precise control and in three dimensions, without having to do special sample preparation, labelling or surface modification,” says Joseph Wang from UC San Diego, a senior author along with Thomas Mallouk from the University of Pennsylvania and Wei Wang from Harbin Institute of Technology.

The researchers tested the microswimmers in an acousto-fluidic chamber, with the acoustic field generated by a piezoelectric ceramic transducer and an external magnetic field provided by a cylindrical magnet (with a pulling force of 200 N) held next to the chamber. They showed that placing the magnet by the chamber caused the microswimmers to move in the direction of their closed ends.

At an acoustic pressure of 4 kPa, a level that does not generate obvious acoustic bulk streaming or trap passive particles, the microswimmers could be propelled at speeds as fast as 2.6 mm/s. When the acoustic field was turned off, they stopped moving immediately. The team notes that capsules without a trapped air bubble neither self-rotated nor translated in the acoustic field.

This strong propulsion and precise directional control enabled the microswimmers to selectively pick up individual particles and relocate them to arbitrary positions. For example, the researchers used an acoustic pressure of 300 Pa to propel a microswimmer through a collection of 4-µm diameter silica particles and steer it to push a target particle. The microswimmer separated its target from an adjacent particle, indicating fine control over individual particle manipulation in a crowded group.

At higher acoustic pressures, an attractive force can occur between the microswimmer and a particle. The researchers demonstrated particle transport via this pulling mode at an acoustic pressure of 1 kPa. Using a combination of the push and pull manipulation modes, they moved particles into arbitrary shapes, creating the letters “PSU”, for example.

To validate potential utility in bioanalytical applications, the researchers used a microswimmer to move HeLa cells in culture medium. The HeLa cell, which at 20 µm diameter is significantly larger than the microswimmer, could be transported to contact another cell in the medium without disturbing neighbouring cells. Finally, the team demonstrated that the microswimmers can climb up micro-sized blocks and stairs or swim in free space, allowing them to operate robustly and reliably in complex 3D environments.

The researchers conclude that the bubble-based acoustic microswimmers exhibited controllable 3D motion and precise particle manipulation and patterning in crowded environments – feats that have not been achieved with other microswimmer designs. Future improvements will include making the microswimmers more biocompatible, for example by building them from biodegradable polymers and by replacing nickel with a less toxic magnetic material such as iron oxide.

Anything to declare?

News emerged over the summer that the Israeli SpaceIL Beresheet lander – which crashed onto the lunar surface in April this year – was unknowingly carrying a payload full of reportedly indestructible biological samples. This consignment, known as the Lunar Library, was created by the Arch Mission Foundation – an organization whose goal is to send the sum of human knowledge out into space but also, perhaps more bleakly, to provide a backup plan in case humanity goes under.

The package included a nanodisc containing a 30-million-page archive of human history, as well as a batch of biological specimens, including DNA extracted from human hair and red blood cells. Also on board, embedded within multiple layers of metal and specially slow-cured resin, was one of Earth’s most extreme survivors.

Tardigrades – also known as water bears – are hardy creatures that are ubiquitous in our environment. They are classed as “extremophiles” – organisms that exist in physically or geochemically extreme conditions that are detrimental to most other lifeforms on Earth. Frequenting mosses and lichens, these microscopic animals can survive extreme temperatures, pressures, radiation and even outright desiccation by entering a dormant state.

In 2007 an experiment led by Ingemar Jönsson of Kristianstad University, Sweden, and colleagues in Germany, saw a batch of desiccated tardigrades spend 12 days in orbit, aboard a free-flying capsule known as FOTON-M3. When the retrieved samples were analysed in the laboratory, it was found that the tardigrades survived the space vacuum very well. However, samples that were also exposed to some types of ultraviolet (UV) radiation didn’t have the same success, and samples that were exposed to vacuum and the whole gamut of UV wavelengths from solar light had an extremely poor tolerance. Of the three individual tardigrades that survived this exposure, none lived past a week.

The Beresheet tardigrades were in an inactive state, with a dramatically reduced metabolism. Even if they survived the initial lunar collision, the only way they can be reactivated is with water. Were the smashed-open samples ever exposed to water (perhaps if an icy comet collided with the impact site), the tardigrades almost certainly wouldn’t survive without protection from harmful UV rays. And even if they did, there’s no food up there to sustain them anyway.


So, while there is no threat of the tardigrades on the Moon rising up and attacking Earth, the saga has renewed interest in planetary protection discourse. The Committee on Space Research (COSPAR) has devised several categories for space missions, each increasing in complexity. The Moon is labelled as a “category II” destination (significant interest relative to the process of chemical evolution and the origin of life, but where there is only a remote chance that contamination carried by a spacecraft could compromise future investigations). This is a higher protection level compared to the concern-free base “category I”, but it still doesn’t require extra measures aside from rudimentary paperwork explaining mission details.

Introducing the tardigrade samples was not in direct breach of any of this. However, the story would be very different for other locations where there might be life, or traces of ancient life. From an astrobiology perspective, depositing extremophiles onto the surface of Mars could completely skew our search for extant life. This is the reasoning behind COSPAR making Mars a Category III (and in some cases higher) destination. This category explicitly states that a region with this designation is of great interest for life-detection missions, where any contamination would compromise scientific efforts.

The dried riverbeds we see on Mars today hint at the possibility that life may have once existed on the planet. Once the European Space Agency’s ExoMars 2020 mission is under way, scientists will be able to robotically gather samples, from up to two metres under the surface, thanks to the rover’s large drill attachment. The excavated samples may contain fossils, or nothing at all. The important thing here is to avoid any contamination, and make sure that anything we send to Mars arrives at the destination in as sterile condition as possible.

Imagine if even a few tardigrades managed to stow away inside a shielded craft to Mars and travelled directly to the Martian surface – or, even worse, onto the surface of the drill. We would end up with a scenario of “finding” tardigrades in the soil – a false positive result. This would be catastrophic for the science teams working on the projects, as they could be led to believe that life had arisen independently beyond Earth. Mars is far enough away that it would be virtually impossible for anyone to clean the instruments and perform a re-test. You might wonder why we couldn’t just build an autoclave into the rover for in situ sterilization. Aside from being a huge technical challenge, some of the hardiest extremophile spores can even withstand that. Radioactive sterilization is theoretically possible, although the risk of contaminating Mars with isotopic sources is also something to avoid when visiting our cosmic neighbours.

The most straightforward route is to continue performing cleanroom protocols for outbound spacecraft and do our very best to keep equipment squeaky clean before exposing it to other worlds. This is standard practice for regulation-compliant space agencies, but is it now time to introduce hard legislation for private entities sending biology out into the cosmos?

Dark energy debate reignited by controversial analysis of supernovae data

The mysterious substance known as dark energy, thought to be pushing the universe apart at ever greater speeds, may be nothing more than an artefact of our acceleration through a local patch of the universe. That is the controversial claim of a group of physicists who reckon they have found flaws in the evidence underpinning the Nobel-prize-winning discovery of cosmic acceleration. The dispute centres on exploded stars known as type Ia supernovae, which allow researchers to calculate cosmic distances and rates of expansion.

Type Ia supernovae are known as “standard candles” because they are generated by stars exploding at a very specific mass and therefore with a known absolute brightness. By observing these objects’ apparent brightness, astronomers can work out how far away they are (in space, and therefore in time), and by combining that information with the red shift in their emitted light they can then calculate how fast the universe was expanding at that point in time.
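To make the standard-candle logic concrete, here is a minimal Python sketch of the textbook distance-modulus relation. The absolute magnitude of roughly -19.3 for a type Ia supernova and the example apparent magnitude are assumed values chosen for illustration, not figures from the article.

```python
# Standard-candle arithmetic using the textbook distance modulus (not from the article).
# A type Ia supernova has a roughly fixed absolute magnitude, taken here as M ~ -19.3
# (an assumed typical value).

M_SN_IA = -19.3

def luminosity_distance_mpc(apparent_mag, absolute_mag=M_SN_IA):
    """Distance modulus m - M = 5*log10(d_L / 10 pc), solved for d_L in megaparsecs."""
    d_parsec = 10 ** ((apparent_mag - absolute_mag + 5) / 5)
    return d_parsec / 1e6

# Hypothetical example: a supernova observed with apparent magnitude m = 24
print(f"d_L ~ {luminosity_distance_mpc(24.0):.0f} Mpc")   # ~4600 Mpc
```

A supernova that appears dimmer than expected for its redshift therefore sits at a larger luminosity distance than a decelerating universe would predict – the signature that both teams interpreted as acceleration in 1998.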

In 1998 two research groups – one led by Saul Perlmutter and the other by Brian Schmidt – announced that they had found evidence from some 50 distant supernovae that the expansion of the universe was not slowing down, as expected, but was in fact speeding up. They came to that conclusion after finding that the light from those supernovae was dimmer than anticipated. This extraordinary discovery implied that the universe was being pushed apart by (a still unknown) entity known as dark energy, and earned Perlmutter, Schmidt and Schmidt’s colleague Adam Riess the 2011 Nobel Prize for Physics.

Cosmic corroboration

This cosmic acceleration is now a central element of the Standard Model of cosmology and has been corroborated by other types of observational evidence, including data from the cosmic microwave background (CMB) – the cold and faint radiation generated shortly after the Big Bang.

However, in 2015 Oxford University physicist Subir Sarkar and two colleagues at the Niels Bohr Institute in Copenhagen uploaded a paper to the arXiv server claiming that the evidence for cosmic acceleration was not as watertight as generally supposed. Carrying out statistical tests on a sample of 740 type Ia supernovae, they investigated the empirical procedure used to adjust absolute brightness to account for variations in emission between supernovae as well as absorption of their light by intervening dust. They claimed to have found “only marginal” evidence for cosmic acceleration, calculating a statistical significance for such acceleration of less than 3σ. Normally, 5σ is seen as the gold standard for a discovery.
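For readers unfamiliar with the “sigma” shorthand, the short sketch below (standard statistics using scipy, not part of the paper’s analysis) converts those significance levels into one-sided tail probabilities.

```python
from scipy.stats import norm

# Converting "sigma" significance levels into one-sided tail probabilities
# (standard statistics, included only to put the 3-sigma and 5-sigma figures in context).
for sigma in (3, 5):
    p = norm.sf(sigma)   # survival function: chance of a >= sigma upward fluctuation
    print(f"{sigma} sigma  ->  p ~ {p:.1e}")

# Output:
# 3 sigma  ->  p ~ 1.3e-03   (roughly 1 in 700)
# 5 sigma  ->  p ~ 2.9e-07   (roughly 1 in 3.5 million)
```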

Their paper was published 16 months later in Scientific Reports and came in for much criticism from other scientists. Experts took aim at the statistical analysis itself, questioning Sarkar and colleagues’ assumptions about the properties of the supernovae involved, and criticized the group for considering the supernovae data in isolation. When combined with other data, including those from the CMB and the distribution of galaxies in the universe (known as baryon acoustic oscillations), critics argued that there could be no doubt about the reality of cosmic acceleration.

Upping the ante

But Sarkar and colleagues at the Niels Bohr Institute and the Paris Institute of Astrophysics have now upped the ante by writing a second paper, accepted for publication in Astronomy and Astrophysics and uploaded to arXiv. In it they further downgrade the significance of the supernovae evidence, arguing in fact that cosmic acceleration probably does not exist – that what the Nobel prize-winning teams saw was simply the result of local motion in our particular corner of the universe.

The researchers came to this conclusion after scrutinizing publicly available supernova data for a “monopole”, which would cause all points in the universe to accelerate away from an observer in a particular rest frame (that of the CMB). But they also looked for a “dipole”, which would mean some points moving away from an observer while others move towards them. This is in fact how we see the CMB. Thanks to our motion through space as part of a group of galaxies travelling at over 600 km/s relative to the cosmic expansion – known as the “bulk flow” – measurements of temperature in opposite halves of the microwave sky differ by about 1 part in 1000.
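That part-in-a-thousand figure follows from the leading-order Doppler formula for the temperature dipole, ΔT/T ≈ v/c, sketched below. The 370 km/s used here is the standard value for the Sun’s motion relative to the CMB frame – an assumed reference number, not one quoted in the article.

```python
# Leading-order Doppler dipole from an observer's motion: dT/T ~ v/c
# (standard reference values below, not figures taken from the article).
c_km_s = 299_792.458     # speed of light
T_cmb_kelvin = 2.725     # mean CMB temperature

v_sun_km_s = 370.0       # Sun's speed relative to the CMB rest frame (assumed standard value)
dT_over_T = v_sun_km_s / c_km_s
print(f"dT/T ~ {dT_over_T:.1e}")                          # ~1.2e-3, i.e. about 1 part in 1000
print(f"dT ~ {dT_over_T * T_cmb_kelvin * 1e3:.1f} mK")    # ~3.4 mK across the sky
```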

It was when considering the effect of this bulk flow that Sarkar and colleagues noticed what seemed to be an anomaly within the supernovae red-shift data. These data had been adjusted to convert red-shifts from the Earth’s rest frame to what was thought to be the CMB frame. However, that conversion, says Sarkar, assumed that all local motion gets washed out when moving to scales above about 500 million light-years. In fact, he argues, numerous observations of galaxy motion show that such movement persists up to at least 1 billion light-years.

Large dipole

By re-converting the red-shift data back to their raw “heliocentric” form as best they could, and plugging the data into a model, Sarkar and colleagues found that the monopole component – the universal acceleration – yielded just a 1.4σ signal, while the dipole – presumably a local motion – was present at 3.9σ. What’s more, they found that this dipole lines up with the one in the CMB.

“If you look at supernovae in only a small part of the sky, it would look like you had cosmic acceleration,” says Sarkar. “But we are saying that it is just a local effect, that we are non-Copernican observers. It has nothing to do with the overall dynamics of the universe and therefore nothing to do with dark energy.”

According to Riess, however, the supernovae data used by Sarkar’s group are out of date. He says that he and some colleagues, including D’Arcy Kenworthy of Johns Hopkins University, plugged data from a sample of about 1300 supernovae with lower systematic uncertainties into the model used in the latest work. The results, he says, were unambiguous, with the existence of a dipole rejected at more than 4σ and cosmic acceleration confirmed at over 6σ.

More importantly, says Riess, the objections against Sarkar and colleagues’ original statistical analysis still stand, as do the criticisms of neglecting other data. “The evidence for cosmic acceleration and dark energy are much broader than only the supernovae Ia sample, and any scientific case against cosmic acceleration needs to take those into account,” he says.

Even here, however, Sarkar insists the evidence is lacking. He claims that the data on baryon acoustic oscillations are too sparse to choose between models with and without cosmic acceleration, while dark energy would have been too weak to leave a significant imprint in the early universe. “The CMB does not directly measure dark energy,” he says. “That is a widely propagated myth.”
