To survive, polar bears need to gain weight, not lose it. With a longer summer and less sea ice, it’s a lot harder to do that.
Polar bears may be having a harder time than anybody thought. Biologists who monitored the hunting habits of the Arctic’s iconic predator found that the bears have a faster metabolism than previously assumed – that is, they need high-energy food more often – and are likely to lose weight just when they should be fattening up for the winter.
Ursus maritimus is famous for going without food for long periods and then making up for it when the going is good. And for a polar bear, the going only gets good when there is a lot of sea ice and rich pickings among the seal population.
But US Geological Survey scientists who fitted monitors and video cameras to nine female polar bears for periods of 8 to 11 days and then tracked them on the ice of the Beaufort Sea, north of Alaska, report that five of their “volunteers” had lost weight.
Four of them had lost 10% of their body mass: that is, they could not catch seals often enough to put on weight. One had even lost muscle tissue.
The study confirms that the bears are vulnerable to climate change. The Arctic’s sea-ice minimum is shrinking at a rate of 14% a decade, and polar bears have been feeling the loss.
But because the bear is a sit-and-wait predator – hunting ringed seals and bearded seals for preference as they haul out onto the ice – biologists had assumed that a resting bear would have a low metabolic rate. Not so, according to a new study in the journal Science.
The bears are active about one third of the time and use energy swimming and walking. The tests and observations were made during the period from April to July when bears catch most of their prey to store up the body fat they need.
In fact the instrument readings and tests of urine and blood samples told the scientists that the metabolic rate of a bear was more than 50% above previous calculations.
So a female bear out on the ice in the polar spring would need to eat one adult ringed seal, or three subadult seals, or 19 newborn seal pups, every 10 to 12 days just to maintain her bodyweight.
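Taken at face value, those intake figures fix the energy ratios between prey classes. A minimal sketch of the arithmetic (the normalized seal energy value is a placeholder for illustration, not a figure from the study):

```python
# Energy equivalences implied by the quoted intake figures:
# 1 adult ringed seal ≈ 3 subadults ≈ 19 newborn pups per 10-12 days.
E_ADULT = 1.0              # normalized energy content of one adult seal
E_SUBADULT = E_ADULT / 3   # three subadults match one adult
E_PUP = E_ADULT / 19       # 19 newborn pups match one adult

DAYS = 11                  # midpoint of the quoted 10-12 day window
daily_need = E_ADULT / DAYS        # maintenance energy per day

# The same maintenance diet expressed in newborn pups per day:
pups_per_day = daily_need / E_PUP  # roughly 1.7 pups every day
```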
Fall explained
But to succeed and breed in the winter, a female would ideally need to consume so much seal blubber that her fat levels matched her lean body mass. In April on the Beaufort Sea between 2014 and 2016, the bears in the test study had no great luck.
“We found that polar bears actually have much higher energy demands than predicted. They need to be catching a lot of seals,” said Anthony Pagano, a doctoral researcher from the University of California Santa Cruz, and a wildlife biologist with the US Geological Survey, who led the research.
The polar bear population in the Beaufort Sea has fallen by about 40% in the last decade. Now, biologists are beginning to see why.
“We now have the technology to learn how they are moving on the ice, their activity patterns and their energy needs, so we can better understand the implications of these changes we are seeing on the sea ice,” he said. – Climate News Network
Proton therapy is a limited resource that’s available to relatively few cancer patients. But as more cancer centres install single-room proton systems into their radiotherapy clinics, combined proton–photon therapy could provide a practical alternative. The question then becomes: “how can a given number of proton therapy slots be used optimally in a proton–photon treatment?”
In previous studies of combined treatments, intensity-modulated radiotherapy and proton therapy (IMRT and IMPT) plans were optimized separately and then simply combined. Now, a research team headed by Jan Unkelbach from University Hospital Zürich has demonstrated that simultaneous optimization of the two modalities can capitalize better on proton therapy’s ability to reduce normal tissue dose (Radiother. Oncol. 10.1016/j.radonc.2017.12.031).
“In this work, we specifically look at treatment sites where a certain number of fractions is needed to protect dose-limiting normal tissues within or near the target volume,” Unkelbach explained. “These can only be protected through fractionation, and hence both proton and photon fractions should deliver the same dose to these tissues.”
However, parts of the gross tumour volume (GTV) may be eligible for hypofractionation. In this case, most of the dose to the GTV can be delivered with protons. As a result, healthy tissues away from the target volume benefit from an overall reduction in the photon dose bath.
Case study
Unkelbach and co-authors demonstrated this concept for a sacral chordoma patient in whom the GTV abuts the rectum, bowel and bladder. Chordomas require high radiation doses to achieve local control, and are treated with simple proton–photon combinations at some institutions (such as MGH in Boston). For this patient, most of the GTV could be hypofractionated with protons; protecting the rectum, bowel and bladder, however, required fractionation.
To account for fractionation effects, the researchers simultaneously optimized IMPT and IMRT plans based on their cumulative biologically effective dose (BED). While traditional plan optimization evaluates objective and constraint functions for physical dose, in this approach, these same functions are instead evaluated for cumulative BED.
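Under the standard linear-quadratic model, the cumulative BED of a mixed schedule simply adds across the two modalities. A sketch of that bookkeeping (the α/β = 10 Gy tumour value below is a textbook assumption, not taken from the paper):

```python
def bed(n, d, alpha_beta):
    """Biologically effective dose of n identical fractions of size d (Gy),
    using the linear-quadratic model: BED = n * d * (1 + d / (alpha/beta))."""
    return n * d * (1.0 + d / alpha_beta)

def combined_bed(n_p, d_p, n_x, d_x, alpha_beta):
    """Cumulative BED of a mixed schedule: proton fractions (n_p of d_p Gy)
    plus photon fractions (n_x of d_x Gy) are additive."""
    return bed(n_p, d_p, alpha_beta) + bed(n_x, d_x, alpha_beta)

# Example: the GTV schedule of optimized combination 1 reported below,
# 10 proton fractions of 3.6 Gy plus 20 photon fractions of 1.7 Gy.
print(combined_bed(10, 3.6, 20, 1.7, alpha_beta=10.0))  # ≈ 88.7 Gy
```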
To quantify the benefit of optimized proton–photon treatments, the researchers initially optimized a 30-fraction IMRT plan and a 30-fraction IMPT plan, based on the same set of objectives. They used these single-modality plans to generate a reference plan representing a simple proportional combination, in which the two modalities deliver the same target dose.
The team also created two optimized proton–photon plans: combination 1, which used the same objectives as the single-modality plans; and combination 2, which emphasized integral dose reduction by increasing the weighting of objectives that minimize dose to organs at risk (OARs) and the remaining healthy tissue.
Dose comparisons
Optimized combination 1, with 10 IMPT and 20 IMRT fractions, generated a conformal treatment plan that delivered the prescribed BED to the target volume. The proton and photon fractions in this plan delivered similar doses to the bowel and rectum overlapping the target, thus protecting these normal tissues through fractionation.
In the GTV, on the other hand, photon fractions delivered a mean dose of 1.7 Gy while proton fractions delivered 3.6 Gy, thereby lowering the integral dose to the gastrointestinal tract compared with the reference plan.
The mean BED4 (assuming an α/β ratio of 4 for healthy tissues) to the bowel was 13.12 Gy for a 30-fraction IMRT plan and 4.85 Gy for the IMPT plan. The reference plan, with 10 proton and 20 IMRT fractions, yielded a BED4 of 10.41 Gy, corresponding (as expected) to approximately one third of the dose reduction possible using protons alone. The optimized combination 1 plan delivered a mean BED4 of 8.63 Gy, corresponding to 54% of the reduction possible with IMPT.
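The quoted percentages follow directly from those four BED4 values; a quick check, expressing each plan’s bowel sparing as a fraction of the sparing an all-proton plan would achieve:

```python
imrt, impt = 13.12, 4.85       # mean bowel BED4 (Gy), 30-fraction plans
reference, combo1 = 10.41, 8.63

max_reduction = imrt - impt    # sparing achievable with protons alone

ref_frac = (imrt - reference) / max_reduction   # ≈ 0.33 (one third)
c1_frac = (imrt - combo1) / max_reduction       # ≈ 0.54 (54%)
print(round(ref_frac, 2), round(c1_frac, 2))
```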
As optimized combination 1 uses the same objective function as the single-modality plans, improvements over the reference plan are distributed over multiple objectives. By shifting most of the benefit to reducing integral dose, optimized combination 2 achieved 78% BED reduction compared with the reference plan, without degrading target coverage or conformity.
Dose-volume histograms showed that the five plans (IMRT, IMPT, reference and optimized combinations 1 and 2) delivered similar target coverage and were similar in the high-dose region of the bowel and rectum, which overlap with the planning target volume (PTV). The main difference was observed in the low-dose region of the bowel, where the optimized combinations improved upon the reference plan.
Finally, the researchers varied the number of proton fractions in a combined 30-fraction treatment from one to 20. The mean BED reduction in the bowel increased with increasing number of proton fractions. With five IMPT fractions, optimized combinations 1 and 2 achieved BED reductions of 36% and 66%, respectively, levels that would require 11 and 20 IMPT fractions in a simple proportional combination of plans.
Clinical options
The authors note that optimized proton–photon therapy could benefit many other clinical indications, such as spinal metastasis with epidural involvement, for example, or large liver tumours abutting the bowel, stomach or duodenum.
“As a next step towards implementation, we will look at the robustness of combined proton–photon treatments with respect to range and setup errors, and apply robust plan optimization techniques to mitigate such uncertainties,” Unkelbach told medicalphysicsweb.
Optogenetics, in which neurons are engineered to be activated or inhibited by light, provides valuable insights into underlying mechanisms of brain function and holds promise for the treatment of neurological disorders. However, the blue-green wavelengths required to turn neurons on or off scatter strongly and cannot penetrate deep into the brain, necessitating the use of fibre probes to deliver the light.
To avoid this invasive approach, a research team headed up at the RIKEN Brain Science Institute is investigating the use of upconversion nanoparticles (UCNPs) to enable delivery of laser light from outside the skull. These UCNPs absorb near-infrared laser light, which can penetrate deeper into brain tissue, and emit the blue-green photons required for neural stimulation (Science 359 679).
“Optogenetics has been a revolutionary tool for controlling neurons in the lab, and hopefully someday in the clinic,” said research group leader Thomas McHugh. “Unfortunately, delivering light within brain tissue requires invasive optical fibres. Nanoparticles effectively extend the reach of our lasers, enabling the ‘remote’ delivery of light and potentially leading to non-invasive therapies.”
In tests in mice, the researchers demonstrated that the UCNPs could serve as optogenetic actuators of transcranial NIR light to turn on neurons in various brain areas. Electron microscopy showed that UCNPs injected into mouse brains remained localized in the injection area.
Non-invasive activation of neurons in the mouse brain
“The nanoparticles appear to be quite stable and biocompatible, making them viable for long-term use,” said McHugh. “Plus, the low dispersion means we can target neurons very specifically.”
In addition to activating neurons, the UCNPs could also be used for inhibition, for example to silence seizures. The researchers injected nanoparticles tuned to emit green light into the hippocampus and energized them with laser pulses at the surface of the skull. Hyperexcitable neurons were effectively silenced in these mice.
In another brain area, the medial septum, nanoparticle-emitted light contributed to synchronizing neurons in an important brain wave called the theta cycle. And in mice with learned fear memories, the freezing behaviour associated with these experiences was evoked by blue light-emitting UCNPs, also in the hippocampus.
These neural activation, inhibition and memory recall effects were only observed in mice that received nanoparticle-mediated optogenetic stimulation, not in control animals that received laser light without the UCNP injection.
The nanoparticles are compatible with various light-activated channels currently used in optogenetics and can be employed for neural activation or inhibition in many deep-brain structures. The authors note that the technique could, one day, complement or extend current approaches to deep-brain stimulation and therapies for neurological disorders in humans.
Why are halide perovskites so efficient at converting sunlight into energy? New single-particle measurements have revealed that low-energy states lying below the bandgap of these materials may be the reason. The work could help more accurately design improved perovskite materials for solar cell applications.
Halide perovskites have the chemical formula ABX3 (where A is typically methylammonium, formamidinium or caesium, B is lead or tin, and X is iodine, bromine or chlorine). They are one of the most promising thin-film solar-cell materials available today because they can absorb light over a broad range of solar-spectrum wavelengths. They also have a low exciton (electron-hole) binding energy and high charge-carrier mobility. Indeed, the power-conversion efficiency (PCE) of solar cells made from perovskites has soared from just 3% to more than 22% in the last five years. This means that their PCE is now comparable to that of silicon-based solar cells.
Despite such impressive progress, researchers are still unsure as to why these materials are so efficient at converting sunlight into energy. One theory suggests that energetically stabilized states involving polarons (quasiparticles in which electrons or holes are coupled to lattice phonons) are responsible. However, direct evidence of such particles – or indeed of other low-energy states, which should lie below the bandgap of the material and show up in photoluminescence measurements – was lacking until now.
Free from traps and defects
A team led by Elad Harel of Northwestern University in the US has used a technique called single-particle transient absorption microscopy on the halide perovskite MAPbI3 (where MA is methylammonium) to identify such sub-bandgap states directly for the first time. These states become rapidly populated by charge carriers when the material is excited with light. The researchers also suspect that they are free from traps and defects, which can adversely affect the power-conversion efficiency of these thin film materials.
The technique used by Harel’s team is very different to the traditional spectroscopic methods used to study perovskites and related photovoltaic materials, which average out the photoluminescence spectra of thousands of single particles. “These techniques only measure the average physical properties of a material whereas a single-particle approach, like the one we used in our work, provides not only the average properties but how these properties are distributed as well,” explains Harel.
A distorted view of reality
“Imagine that we were only able to measure the average temperature of our planet, rather than the spatial distribution of temperatures,” he says. “Such measurements would lead us to think that every location on Earth was at 14°C, which is clearly a distorted view of reality – it is below freezing in Chicago right now, but over 30°C in Sudan.
“It is the same for heterogeneous materials like perovskite thin films: the carrier properties may vary across even a few microns in space and single-particle measurements allow us to see features that are normally hidden in the ensemble. Such measurements were key to identifying the low-energy states that we observed.”
“The work proves that average properties do not give us the full picture of what is happening at the nanoscale,” he tells nanotechweb.org. “Such an incomplete picture means that we may put forth physical mechanisms that are simply wrong and then use these incorrect mechanisms to try and fabricate improved materials. This not only leads us down the wrong path but also wastes tremendous resources – time, money and energy.”
Towards a holistic picture of carrier transport
The researchers say that they are now busy studying a wider range of 2D and 3D perovskites with their colleagues in Mercouri Kanatzidis’ lab at Northwestern. “We are now able to correlate the transient absorption microscopy results with high-resolution scanning measurements to find out how specific structures in these materials affect charge-carrier behaviour,” says Harel.
“We need a holistic picture of carrier transport so that we can produce a correct and consistent mechanistic picture of perovskites. This, we believe, will ultimately translate into improved solar cell materials down the road.”
In most Hollywood movies, interplanetary travel seems fairly straightforward: hop on a spaceship, blast off, fly through space (with or without hibernation that may or may not go awry), land on foreign soil. But throw in the known and unknown hazards of deep space physics, multiplied by the limitations of the human body, and the adventure becomes decidedly more complicated.
Yet something about going to Mars has captivated the space-curious for generations: from scientists who want to build the spaceships and go, to politicians who can approve the spending. “Mankind is drawn to the heavens for the same reason we were once drawn into unknown lands and across the open sea,” said US President George W Bush in 2004, when he proposed spending $12bn to get to the Moon by 2020 as a stepping stone to Mars. Not to be outdone, President Barack Obama announced in 2016 that he wanted to get people to Mars by 2030, and more recently, President Donald Trump signed a bill authorizing $19.5bn to go towards NASA’s quest to have humans visit Mars. (“You could send Congress to space,” one senator quipped at the signing.)
Getting people to Mars is more than just political hyperbole, however, and that vision is slowly shifting from science fiction into science fact. This summer, the European Space Agency (ESA) plans to pack and ship a 4 m-tall and 5 m-diameter cylindrical space vehicle from Bremen, Germany, where it’s been under construction for the last four years. It will be sent to NASA’s Kennedy Space Center in Florida, where it will become an integral part of the most ambitious plan for space exploration ever hatched by humankind.
Orion’s quest to Mars
The huge cylinder is the service module for the Orion spacecraft and, once out of Earth’s atmosphere, it will extend solar panels stretching 19 m across to power Orion during its deep space travels. Inside the module, arranged like a dense 3D puzzle, are the wires, cables, devices and materials needed to support human beings as they voyage into deep space, including fuel, air, and water. Sitting atop the service module and measuring 3.3 m in height, will be the crew module, the astronauts’ home for their long journey.
The inside of Orion’s service module is a complex maze of technology. (Courtesy: ESA)
For Orion, the ultimate goal is Mars. Despite being our nearest planetary neighbour, the red planet’s orbit keeps it an average of 225 million km away. (In theory, it could get as close as 55 million km, but that’s never happened.) Such a physical distance is staggering, even for physicists who think they’re familiar with the size of the solar system. “If I ball my hands together to make a globe and say that’s the planet Earth, then the space station is in orbit about half a centimetre above my hands, so it’s about the length of the hairs on the back of your hand,” says David Parker, director of ESA’s Human and Robotic Exploration programme, which has almost finished building Orion’s service module. “That’s where we are with the space station. The Moon is about 5 m away on that scale. But Mars – depending on where it is in its orbit relative to Earth – is somewhere between half a kilometre and three kilometres away.”
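Parker’s hand-sized scale model is easy to check. Assuming a roughly 15 cm “globe” of balled hands (the model size is the only assumed number here; the distances are standard values):

```python
EARTH_DIAMETER_KM = 12_742
MODEL_M = 0.15                       # assumed size of the balled-hands globe
scale = MODEL_M / EARTH_DIAMETER_KM  # model metres per real kilometre

iss_m = 400 * scale          # ISS altitude: about half a centimetre
moon_m = 384_400 * scale     # Moon: about 4.5 m away
mars_near_m = 55e6 * scale   # Mars at closest approach: ~650 m
mars_avg_m = 225e6 * scale   # Mars at its average distance: ~2.6 km
```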
The journey will be arduous, not only given the travel time but also in terms of developing new technologies on a budget and through different administrations. Each step is more complicated and often riskier than the one before it. NASA’s current blueprint for a Martian journey, which the agency unveiled in 2015, goes something like this: explorers leave Earth perched on a powerful rocket and potentially rendezvous with a crewed space station near the Moon. (NASA is studying a concept for a station called the Deep Space Gateway – a kind of launchpad for missions to the Moon, asteroids and Mars.) After a few days preparing at the gateway, the crew would board a spacecraft that shuttles people to deep space and back.
For six months or more, the crew will travel – sealed in a box the size of a small motor home, and with a view of Earth receding to a tiny pinprick of light – until they reach Mars and begin to orbit the red planet. Then, they’ll move to yet another vehicle, parked in Mars’ orbit, and plunge through the planet’s thin atmosphere, landing near pre-built habitations, possibly 3D printed out of ice or regolith. They’ll stay for 18 months, conducting geological research and maintaining the habitat, then head home.
NASA wants to send colonists to Mars in the 2030s, and so too does Mars One – a non-profit, international effort launched in Europe. Meanwhile, SpaceX, an American company, wants to one-up both and get there a decade sooner. Later this year, the private company plans to test-launch its Falcon Heavy rocket – the most powerful rocket in operation today. It is designed to lift into orbit a mass equal to a jetliner packed with people and food, twice the payload of the closest operational vehicle, the Delta IV Heavy. For its inaugural launch, the Falcon Heavy will shuttle a red Tesla Roadster owned by Elon Musk – the founder of SpaceX – into an orbit around the Sun that will take it near Mars. By 2020 the company wants to fling rockets to the red planet to collect samples and bring them home; after that, it wants to send people. In September 2017 Musk, who trained as a physicist, discussed a new vehicle under development with codename BFR – “Big Falcon Rocket” (though some suggest the “F” has a different meaning). The BFR sounds like it should be in a Star Wars film – it’s a self-contained, super-efficient vehicle that looks like a modified space shuttle and would travel from planet to planet nonstop.
Getting to Mars means not only using physics and engineering to conquer the technical challenges of leaving the planet but also finding efficient strategies for long-distance propulsion, designing space suits and ships that protect bodies from cosmic rays and other radiation, and landing safely on a planet with a wispy atmosphere. After all that, there’s the additional problem of getting back.
But for the scientists and engineers involved, it’s a quest worth completing. “I can’t think of anything more exciting than going out there and being among the stars,” Musk said at the International Astronautical Congress (IAC) in Adelaide, Australia, last year.
Learning from mistakes
Our current knowledge about Mars and how to get there is the result of hundreds of years of accumulated curiosity. In the 17th century Johannes Kepler mapped its elliptical orbit and Galileo Galilei was the first to study the planet through a telescope. Later, in the 19th century, astronomers studied its geological features, dust clouds, and polar ice caps. Some reported the planet had green oceans and reddish land masses, spurring speculation about alien life.
NASA’s Mariner 4 spacecraft, which flew past the planet in July 1965, was the first messenger from Earth to reach Mars. Since then, space agencies around the world – including ESA and agencies in India and the former Soviet Union – have launched a total of 44 probes, orbiters and landers to explore the planet. But it’s not easy. Of those 44, more than half (23) have broken apart, exploded, crashed or otherwise failed. Currently, two rovers remain active on the planet’s surface, and six orbiters watch from overhead, while both ESA and NASA plan to launch Mars-bound spacecraft this year. This array of missions has given physicists and engineers a good handle on the basic mechanics of getting from here to there, including setting the initial launch conditions and following the most energy-efficient path – the Hohmann transfer orbit (figure 1). The launch window is also critical: leave too early or too late, and a spaceship risks whizzing out into deep space without intersecting Mars or being captured by Mars’ gravity.
The journey between two orbits: Decades before the first probe was launched into space, Walther Hohmann calculated the most energy-efficient path to transfer between two circular orbits of different size on the same plane – the Hohmann transfer orbit. For a spacecraft to begin orbiting Mars using as little energy as possible, it must not only follow this trajectory but also be launched at the correct time so Mars’ gravity will be there to catch it. This ideal launch time occurs about every 25 months.
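Both numbers behind the Hohmann transfer fall out of two standard formulas: half the orbital period of the transfer ellipse gives the trip time, and the synodic period of Earth and Mars gives the launch-window spacing. A sketch using mean orbital radii:

```python
import math

MU_SUN = 1.327e20         # Sun's gravitational parameter (m^3/s^2)
R_EARTH_ORBIT = 1.496e11  # mean orbital radius of Earth (m)
R_MARS_ORBIT = 2.279e11   # mean orbital radius of Mars (m)

# Transfer time: half a period of an ellipse whose semi-major axis
# spans the two orbits (Kepler's third law).
a = (R_EARTH_ORBIT + R_MARS_ORBIT) / 2
transfer_days = math.pi * math.sqrt(a**3 / MU_SUN) / 86_400  # ~259 days

# Launch windows repeat with the Earth-Mars synodic period.
T_MARS_YEARS = 1.881
synodic_months = 12 / (1 - 1 / T_MARS_YEARS)  # ~25.6 months
```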
Physically transporting anything to Mars – or anywhere in space, for that matter – requires finding a balance between minimizing mass and not leaving any essentials behind. The heavier the ship, the more fuel it will need; the more fuel it carries, the heavier the ship. NASA’s Space Launch System, a rocket system undergoing testing at the Marshall Space Flight Center in Huntsville, Alabama, is designed to be powerful enough to carry crewed missions to deep space destinations such as the Moon and Mars.
For human crews, there’s also the issue of sustaining life: the average person in a developed country uses about 100–150 litres of water every day. On the International Space Station (ISS), daily water use hovers around 11 litres per day, thanks to recycling strategies that recover water from the breath and urine of both human and animal crew members.
But the ISS also has reserve water tanks and orbits only a few hours away from Earth. A trip to Mars will take three years, which means “You have to scale up the amount you need to take to go and come back again,” says Parker. However, simply increasing the payload of an existing rocket isn’t feasible, he explains. Instead, scientists need to develop new materials and new ways to pack for such a long trip.
Shielding the humans
The biggest threat to human life on a Mars-bound mission may be the one we can’t see – radiation. “It’s certainly one of the biggest technical challenges,” says Parker. “Although the space-station astronauts are exposed to more radiation than we are on the ground, they’re substantially protected by the Van Allen belts.” These giant rings of charged particles, which originate from the interaction of Earth’s magnetic field with the solar wind, shield Earth from high-energy particles.
Going to the Moon or to Mars, however, means leaving the security of the magnetosphere behind. In deep space, radiation from sources such as solar activity and galactic cosmic rays includes high-energy protons, photons and electrons. These particles can damage shields and penetrate many materials, including human skin. The harmful effects of space radiation accumulate over time, which means Mars-bound explorers on a three-year mission will be exposed to a radiation dose more than 100 times greater than the average dose received by Apollo astronauts, who were outside the magnetosphere for only a few days.
As Mars’ thin atmosphere doesn’t offer much protection, previous and ongoing missions to Mars have been vital in the design of materials that will keep Martian explorers safe not only during the journey, but while they’re there. In a study published in 2014 (Science 343 1244797), scientists used data collected by the Radiation Assessment Detector (RAD) on NASA’s Curiosity rover – the first radiation analysis done on another planet – to estimate that 500 days on Mars would expose people to more than 120 mSv of radiation from the Sun and cosmic rays. That’s more than 100 times the average annual radiation exposure on Earth (figure 2). When they include radiation measurements collected during the travel to and from Mars, the researchers estimate total exposure could be as much as 1 Sv. Previous studies estimate that that much radiation would raise a 40-year-old man’s risk of dying from cancer to at least 4%, and a 40-year-old woman’s risk to 5%.
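Those headline figures translate into daily dose rates. A rough breakdown using only the numbers quoted above (the ~360-day total cruise time is an assumption for illustration, not a figure from the study):

```python
surface_msv, surface_days = 120, 500   # dose over a 500-day surface stay
mission_total_msv = 1000               # upper estimate for the whole trip

surface_rate = surface_msv / surface_days      # ~0.24 mSv/day on Mars

transit_msv = mission_total_msv - surface_msv  # remainder accrued in cruise
transit_days = 360                             # assumed ~6 months each way
transit_rate = transit_msv / transit_days      # ~2.4 mSv/day in deep space
```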
Comparing doses: Astronauts travelling to Mars will no longer be protected from high-energy radiation by the Van Allen belts that surround Earth. They will therefore require impressive shielding technologies or risk increasing the likelihood of cancer. Note that the graph is on a log scale.
Before RAD, physicists had to rely on computer models to predict the radiation environment. “But as a good physicist, you need to do the experiments,” says Don Hassler of Southwest Research Institute in Boulder, Colorado. “You can’t believe your models until you’ve validated them.” As it turned out, existing models didn’t match the details of RAD’s data and had to be improved. Hassler, who led the 2014 study, says RAD’s data revealed unexpected variations in the radiation on the Martian surface, not only seasonally but also day to day. “The radiation environment is not a showstopper,” he says. “It’s like the weather. It needs to be managed.”
Real-world data will also inform the construction of a radiation-proof habitat on Mars. It would be cost-prohibitive to actually transport building materials to Mars, so a better tack would be to transport the tools instead. Indeed, NASA has sponsored a $2.5m challenge that invites people to submit designs for 3D-printed structures that could be forged from materials native to the planet, such as ice or regolith. The New York City-based architecture firm that won the first phase of the contest submitted a design of a house made of ice.
Soft landing
If you think getting off the ground and travelling across the solar system is a huge challenge, the last stage – the descent – may be the trickiest. Because of its smaller mass, gravity on Mars is only about 38% as strong as on Earth. That seems like it would work in favour of a gentle descent, but there’s a catch – the atmosphere of Mars is 100 times thinner than Earth’s, and it’s composed mainly of carbon dioxide.
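The 38% figure is just Newtonian gravity applied with Mars’ mass and radius; a quick check using standard values:

```python
G = 6.674e-11          # gravitational constant (m^3 kg^-1 s^-2)
M_MARS = 6.417e23      # mass of Mars (kg)
R_MARS = 3.3895e6      # mean radius of Mars (m)

g_mars = G * M_MARS / R_MARS**2   # surface gravity, ~3.7 m/s^2
ratio = g_mars / 9.81             # ~0.38 of Earth's surface gravity
```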
Houses of ice: Martian explorers could find themselves living in 3D-printed ice habitats. (Courtesy: NASA/Team Space Exploration Architecture and Clouds Architecture Office)
“It’s too thick to ignore and too thin to slow the craft down,” says anthropologist Jack Stuster, principal scientist at Anacapa Sciences in Santa Barbara, California, who has done research with NASA on astronaut behaviour.
Indeed, engineers monitoring NASA’s Curiosity rover, which arrived at the top of Mars’ atmosphere in August 2012 after an eight-month journey, called its descent to the surface “seven minutes of terror”. The rover, with its heat shield and sky crane, was too heavy for airbags, so its fall was slowed by a combination of parachutes and retrorockets. Signals between the rover and mission control took 14 minutes to travel one way, which meant that by the time NASA received word that Curiosity had started descending, it had already succeeded or crashed.
Fortunately, it succeeded where others had not. Most notably, in September 1999 the Mars Climate Orbiter burned up in the Martian atmosphere because engineers had failed to convert between imperial and metric units.
A spacecraft carrying people will be much heavier than Curiosity, though, and with more valuable cargo. Engineers have proposed a variety of solutions to slow the descent, from heat shields and retro boosters to sky cranes. Parachutes aren’t a likely solution, as they’d have to be so big or numerous that they’d add too much risk and weight to the mission. “You’d need a parachute the size of the Rose Bowl,” says Stuster, referring to the 300 m-diameter stadium in Pasadena, California.
Meanwhile, SpaceX’s Falcon 9 rockets have successfully used retrorockets to land on their launch pads multiple times, and this is the same approach the company plans to use when it goes to Mars. “In order to land on Mars…you really have to get propulsive landing perfect,” Musk said. “That’s what we’ve been practising with Falcon 9. It’s quite mesmerizing.” And where the Falcon 9 rockets have been landing using only one engine, the BFR – the new vehicle under development – will have many.
Martian prep: At NASA’s Johnson Space Center, spacesuits are tested in an 11-foot thermal vacuum chamber to simulate conditions similar to those in a spacecraft. (Courtesy: NASA/Rad Sinyak)
The next chapter
NASA plans a series of missions that will precede and inform the first trip to Mars. In late 2019 or early 2020, Orion will go on a test flight, during which it will orbit the Moon for a few days and release some small CubeSats before returning to Earth. After that will come a series of missions: a year-long, crewed mission to deep space; a return to the Moon; an unmanned trip to an asteroid; and finally a crewed mission to Mars’ orbit.
Beyond the physics and technical challenges lies yet another hurdle: the human element. How do you choose who should go to Mars? The mission requires a lot of time living cheek-by-jowl with a handful of other people. Stuster, who in 1996 wrote a book about the psychology of astronauts called Bold Endeavors, has worked with NASA on refining the selection process. In addition, both NASA and ESA have sponsored Earth-bound experiments designed to simulate living on Mars, in which volunteers may be sequestered away in a habitat on the side of a Hawaiian volcano (a programme called HI-SEAS), in an uncharted Italian cave (CAVES), in Antarctica, or elsewhere.
It’s tempting to view the road to Mars as one long list of difficult, fascinating questions, but Hassler thinks we’ll see the answers ticked off quickly. “I’m a firm believer that we’re going to go there,” he says, “and I think it’s going to happen sooner rather than later. It’s a matter of will, and there are more and more people out there who are determined to do it.” So maybe one day soon, films about going to Mars will be real-life documentaries – and not simply works of science fiction.
Making money from Mars
Bold ambitions – these inflatable habitats for astronauts are being developed by the US firm Bigelow Aerospace. Courtesy: NASA/Bill Ingalls
The quest to send humans to Mars might seem like a dream, but that vision is slowly turning from science fiction to science fact. Indeed, several companies are getting in on the act, working with space agencies to design devices to make deep-space exploration feasible for humans. One firm is StemRad, which makes wearable shields to protect workers from gamma radiation released in accidents at nuclear-power plants here on Earth. Based in Tel Aviv, Israel, it has joined forces with US defence and aerospace giant Lockheed Martin to adapt its wearable technology for space.
The resulting vest, dubbed “AstroRad”, is designed to protect humans from ionizing radiation, which will be one of the biggest risks on any Martian journey. To test the technology, the company will fit the AstroRad vest to a human mannequin on Orion, which is set to blast off in December 2019 on a test trip beyond the Moon and back. Orion forms part of NASA’s Exploration Mission 1 (EM-1), which will also include the Space Launch System (SLS) rocket that will hurl the vehicle into space.
Eventually Orion will transport real people, though the only other passenger on the test flight will be a second mannequin, travelling without an AstroRad vest, to act as a control for the experiment. Both dummies, provided by the German space agency, will be fitted with sensors to measure how well the vest protects against space radiation. The work is vital for future Mars trips because human cells are highly sensitive to radiation-induced mutations during cell division, when DNA unzips to undergo replication.
The biggest test for the vest will come as Orion flies through the Van Allen belts, which surround the Earth and are rich in energetic particles. StemRad’s vest, which can be customized to fit individual astronauts, is designed to protect the cells that proliferate most rapidly, including those in bone marrow. “When you protect those parts of the body, you can reduce the likelihood of cancer dramatically,” says the firm’s chief executive Oren Milstein.
Another company involved in preparations for EM-1 is Thales Alenia Space – Europe’s biggest manufacturer of satellites. It has built many components for the European Space Module (ESM), which will provide the propulsion system for Orion and, in future missions, water, electricity and other necessities to Orion astronauts. The firm has been contracted to design and build a layered shield, reinforced with Kevlar, that will protect parts of the ESM from harm due to space debris and micrometeorites. At the speeds Orion will be travelling, even a metal flake as small as a fingernail could cause serious damage without shielding.
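The danger posed by such tiny debris comes from kinetic energy scaling with the square of speed. A rough illustration, assuming a hypothetical 1 g metal flake and a 10 km/s relative speed (order-of-magnitude values chosen for this sketch, not figures from the mission):

```python
# Kinetic energy of a small debris flake at orbital speeds.
# The mass (1 g) and relative speed (10 km/s) are assumed,
# order-of-magnitude values, not figures from the article.

def kinetic_energy_j(mass_kg: float, speed_m_s: float) -> float:
    """Classical kinetic energy in joules."""
    return 0.5 * mass_kg * speed_m_s**2

flake = kinetic_energy_j(0.001, 10_000)   # 1 g flake at 10 km/s
car = kinetic_energy_j(1000, 10)          # 1 tonne car at 36 km/h

print(f"flake: {flake/1000:.0f} kJ, car at 36 km/h: {car/1000:.0f} kJ")
```

Under these assumptions the fingernail-sized flake carries as much kinetic energy as a one-tonne car moving at city-street speed, concentrated into a far smaller impact area.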
Also cashing in on a potential trip to Mars is Lockheed Martin, which has designed or built a number of spacecraft for NASA. It announced in February that it had welded together the first two pieces of the Orion capsule to be used on the successor mission to EM-1. Known as Exploration Mission 2, it will carry real humans into deep space, with a test mission slated for launch in 2022. Meanwhile, Orbital ATK, an aerospace and defence company headquartered in Dulles, Virginia, has contributed to the design and build of many components of the SLS.
As for BWX Technologies, based in Lynchburg, Virginia, it announced last autumn that it had won an $18.8m contract from NASA to design a nuclear-propulsion reactor as part of an engine for a manned spacecraft to Mars. Using nuclear power instead of chemical fuels could cut the mass of the propulsion system and potentially shorten the travel time. That in turn – says BWX – could reduce radiation exposure.
NASA enlists many of its commercial allies through its NextSTEP programme, which seeks to create public-private partnerships to accelerate the pace of development. Through this project, the Sierra Nevada Corporation is currently designing components of a human habitat that could be used for long missions. It is also working on a spacecraft called Dream Chaser to carry cargo to the International Space Station.
The company was in addition selected last year – along with rival US firms Boeing, Lockheed Martin, Orbital ATK and SSL – to carry out studies on power and propulsion systems for a possible “deep space gateway”. Envisioned as a kind of “spaceport” near the Moon, it would be an important stop en route to Mars, with the power and propulsion systems providing electricity to keep the gateway running and to let it move to different orbits.
Another area of commercial interest lies in developing space-based habitats that could both sustain life and protect it from radiation. Several firms are involved here, including Bigelow Aerospace, based in Las Vegas, which has designed human habitats that can be inflated in space. Looking like giant marshmallows, these habitats could work in a variety of space settings, and the firm says it wants to launch an inflatable pod to orbit the Moon by late 2021 as a deep-space hotel called “Aurora Station”.
And then there’s SpaceX – the company that’s paving its own path to the red planet. Founded by the former physicist Elon Musk, its mission is to get people to Mars in the next decade. In February this year SpaceX launched its Falcon Heavy rocket towards Mars’ orbit, carrying Musk’s very own cherry-red Tesla Roadster. Some may view Musk as a publicity-seeker, but SpaceX has helped to highlight the real, commercial opportunities that Mars offers for businesses.
Permafrost in the northern hemisphere is storing around 1656 gigagrams of mercury, according to scientists who analysed permafrost cores from Alaska. That’s nearly double the amount of mercury stored in the rest of the planet’s soils, atmosphere and oceans.
“This implies permafrost regions contain roughly 10 times the total human mercury emissions over the last 30 years,” said Kevin Schaefer of the US National Snow and Ice Data Center (NSIDC). “Previous studies assumed little or no mercury in permafrost regions, but we find the opposite is true. This completely changes our view of how mercury moves through the land and ocean.”
Around 863 gigagrams of the mercury total are in the surface layer of soil that freezes and thaws each year, the team estimates, with 793 gigagrams frozen in permafrost.
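The two pools quoted above should sum to the headline figure; a quick arithmetic check:

```python
# Sanity-check the mercury budget quoted in the study: the active-layer
# and permafrost pools should sum to the 1656 Gg total.
active_layer_gg = 863   # Gg Hg in seasonally thawing surface soil
permafrost_gg = 793     # Gg Hg locked in permanently frozen ground

total_gg = active_layer_gg + permafrost_gg
print(total_gg)  # 1656
print(f"{permafrost_gg / total_gg:.0%} of the total is permanently frozen")
```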
To come up with the results, the researchers drilled 13 permafrost soil cores at various sites in Alaska between 2004 and 2012, measuring the total amounts of mercury and carbon in each core. They selected sites with a diverse array of soil characteristics to represent permafrost found around the entire Northern Hemisphere.
Permafrost underlies approximately 22.79 million square km, about 24% of the Northern Hemisphere land surface surrounding the Arctic Ocean. Climate models predict a 30 to 90% reduction in permafrost by 2100, depending on greenhouse-gas emissions.
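Applying the projected 30 to 90% loss to the quoted area gives the range of permafrost that would remain by 2100:

```python
# Projected permafrost area in 2100, using only the figures quoted
# above: a 30-90% reduction applied to today's ~22.79 million km^2.
current_mkm2 = 22.79
low_loss, high_loss = 0.30, 0.90

remaining = [current_mkm2 * (1 - f) for f in (low_loss, high_loss)]
print(f"remaining by 2100: {remaining[1]:.1f}-{remaining[0]:.1f} million km^2")
```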
“Permafrost contains a huge amount of mercury,” said Schaefer. “We need to know how much mercury will get released from thawing permafrost, when it will get released, and where.”
Mercury, which occurs naturally in the Earth’s crust, typically enters the atmosphere through volcanic eruptions. If deposited on land it may bind with organic matter in plants. When the plants die and soil microbes decompose them, the mercury returns to the atmosphere or enters the water system. If the dead organic matter becomes frozen into permafrost, however, the mercury remains trapped.
Researchers at Duke University have developed a novel way to improve the efficacy of radiation therapy, using a phototherapeutic agent activated by Cherenkov light produced by the therapeutic photon beam. The technique – radiotherapy enhanced with Cherenkov photo-activation (RECA) – shows promise for increasing both local control and tumour immunogenicity (Int. J. Radiat. Oncol. Biol. Phys. 100 794).
RECA works by using the clinical megavoltage (MV) photon beam to induce Cherenkov light throughout the irradiated tissue, while also delivering radiation dose to the tumour. This Cherenkov light – produced when high-speed secondary electrons travel faster than the phase velocity of light within the tissue – simultaneously activates light-sensitive psoralen within the target.
Psoralen is a drug that has anti-cancer effects when photo-activated by UV radiation. Cherenkov light generated from a MV treatment beam is particularly suited for psoralen activation as its emission is most intense in the short-UV region, with a peak around 300–320 nm that corresponds to the wavelengths where most psoralen photo-activation occurs.
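The threshold for Cherenkov emission follows from requiring the electron to outrun the phase velocity of light in the medium, i.e. β > 1/n. A minimal sketch, assuming a refractive index of about 1.4 for soft tissue (an illustrative value, not a figure from the paper):

```python
import math

# Cherenkov emission requires a charged particle moving faster than the
# local phase velocity of light (beta > 1/n).  This computes the
# threshold kinetic energy for electrons; the soft-tissue refractive
# index n ~ 1.4 is an assumed illustrative value.

ELECTRON_REST_MEV = 0.511  # electron rest-mass energy in MeV

def cherenkov_threshold_mev(n: float) -> float:
    """Minimum electron kinetic energy (MeV) for Cherenkov emission."""
    beta_threshold = 1.0 / n
    gamma = 1.0 / math.sqrt(1.0 - beta_threshold**2)
    return (gamma - 1.0) * ELECTRON_REST_MEV

print(f"threshold: {cherenkov_threshold_mev(1.4)*1000:.0f} keV")
```

The resulting threshold (a few hundred keV) is comfortably exceeded by the secondary electrons set in motion by a megavoltage treatment beam, which is why the beam generates Cherenkov light throughout the irradiated tissue.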
“Radiation therapy is often considered a local therapy, effective at controlling local disease; but relapse may occur at distant sites,” explained lead author Mark Oldham. “RECA is an exciting approach because it has twofold potential: to increase the local efficacy of radiotherapy and amplify any systemic immunogenic response, which could help control distant disease as well. This would be an important therapeutic advance.”
Cell studies
First author Paul Yoon and colleagues investigated the basic mechanisms and effects of RECA in vitro using two murine cell lines: B16 melanoma and 4T1 breast cancer cells. They examined the cytotoxicity of RECA versus radiotherapy alone, as well as the expression of MHC I, which could indicate potential for immune response.
They placed well-plates of cultured cells on a solid water slab and irradiated from below using a beam from a TrueBeam linac. While all wells received the same radiation dose, a thin light block stopped the Cherenkov light from reaching half of the wells on each plate.
The researchers first irradiated arrays of both cell lines, incubated with varying concentrations of the psoralen derivative TMP, with 2 Gy of 6 MV radiation. They then performed luminescence assays to quantify the number of viable and metabolically active cells.
Cells exposed to the full RECA treatment (including Cherenkov light) showed lower viability than cells that received radiation alone. As psoralen exposure increased from 0 to 100 μM TMP, the team observed a maximum differential at around 50 μM, where RECA increased cytotoxicity by 20% and 9.5% for 4T1 and B16 cells, respectively, compared with radiation and psoralen alone. They note that at low TMP concentrations (below 10 μM) cell viability was relatively constant, indicating that Cherenkov light alone does not enhance cytotoxicity.
The researchers next performed flow cytometry to determine the effect of RECA on MHC I expression in B16 cells. They investigated pairs of wells treated with 3 or 6 Gy, with or without psoralen, and 0 Gy controls. One of each pair of wells received Cherenkov light and the other did not.
The flow cytometry data revealed a substantial increase in MHC I expression when Cherenkov light was present (about 450% and 250% at 3 and 6 Gy, respectively), compared with cells that received radiation and psoralen alone. Increasing MHC I expression has potential to increase tumour immunogenicity and may improve visibility to the immune system.
Finally, the team performed clonogenic survival assays on 4T1 cells 1–2 weeks after irradiation with or without exposure to Cherenkov light. At doses of 6 and 12 Gy, the assays revealed decreases in tumour cell viability of 7% and 36%, respectively, in RECA treated cells compared with identically treated cells with Cherenkov light blocked.
Boosting the light
To investigate whether the clinical radiation beam can be optimized to generate more Cherenkov light for the same radiation dose, the researchers irradiated a phantom with 6, 10 and 15 MV photons. The relative Cherenkov light intensity increased with increasing beam energy. The researchers also showed that adding a low-Z filter (for example, a block of polyurethane) to a flattening-filter-free 10 MV beam increased the relative Cherenkov light intensity per unit dose by 13% compared with the unfiltered beam.
The authors concluded that this work demonstrates how RECA, which is compatible with current standard-of-care radiation treatments, can increase cytotoxicity and potential immunogenicity over radiotherapy alone. Further work is required, however, to determine how these in vitro indications translate into an in vivo setting. As such, the team plans next to move on to in vivo small-animal studies.
Single photons have been emitted into topological edge states by physicists in the US. The research provides a direct link between quantum optics and topological photonics, and could prove important for a wide range of applications, including quantum communication and quantum computing.
Topological insulators are electrical insulators in the bulk, but electrons at the edge of such a material can travel without scattering in one direction that depends on their spin polarization. Similar topological behaviour has also been seen in a wide variety of other systems such as mechanical vibrations and light. The possibility of moving single photons around without scattering has attracted particular interest for potential applications in quantum-information processing. However, quantum optician Mohammad Hafezi of the Joint Quantum Institute at the University of Maryland, College Park explains that although topological photonics is an inherently quantum phenomenon, it has only ever been demonstrated with classical light.
Honeycomb symmetry
Now, Maryland researchers led by Hafezi and Edo Waks have created a heterostructure containing two adjoining periodic optical nanostructures called photonic crystals, each comprising a distorted gallium-arsenide honeycomb lattice. A perfectly regular lattice transmits photons of any frequency: “You can think of it basically as graphene,” says Hafezi. “Any system that has this honeycomb symmetry has this Dirac cone that does not have any band gap.” Distorting the lattice structure, however, opens up an optical band gap. In the researchers’ heterostructure, photons at wavelengths around 950 nm would not propagate.
In one photonic crystal the researchers moved the triangular holes of each hexagon closer to the hexagon’s centre. In the other they moved the holes further apart. The optical band gaps created by these two distortions are such that the energy band that sits above the energy gap in one photonic crystal sits underneath it in the other, and vice versa. At the edge where the two photonic crystals meet, therefore, the two bands have to touch and cross over. This produces an edge state with an energy that lies in the middle of each photonic crystal’s band gap. Photons with this energy can therefore travel between the two photonic crystals but never scatter into the bulk. Symmetry considerations mean that photons with one circular polarization travel in one direction, whereas photons with the opposite circular polarization travel the opposite way.
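The band-inversion picture has a well-known one-dimensional analogue: joining two oppositely dimerized Su-Schrieffer-Heeger (SSH) chains produces a state in the middle of the bulk band gap, localized at the join. This toy model (with illustrative parameters, not the authors’ 2D photonic system) can be checked numerically:

```python
import numpy as np

# Minimal 1D analogue of band inversion: two Su-Schrieffer-Heeger (SSH)
# chains with opposite dimerization, joined at an interface, host a
# mid-gap state localized at the join -- a toy version of the edge state
# formed where the two photonic crystals meet.  Parameters are
# illustrative only.

N = 100                      # total sites; interface at the middle
t_strong, t_weak = 1.0, 0.4  # alternating hopping amplitudes

# Build the hopping pattern, flipping strong/weak halfway along.
hops = []
for i in range(N - 1):
    left_half = i < N // 2
    even_bond = i % 2 == 0
    if left_half:
        hops.append(t_strong if even_bond else t_weak)
    else:
        hops.append(t_weak if even_bond else t_strong)

# Tight-binding Hamiltonian (nearest-neighbour, open boundaries).
H = np.zeros((N, N))
for i, t in enumerate(hops):
    H[i, i + 1] = H[i + 1, i] = t

energies = np.linalg.eigvalsh(H)
midgap = energies[np.abs(energies) < 0.1]
print(f"bulk gap ~ {2 * (t_strong - t_weak):.2f}, "
      f"states found near zero energy: {len(midgap)}")
```

The bulk bands of each half are gapped, yet a state appears near zero energy, pinned to the domain wall where the two patterns meet, just as the photonic edge state appears at the boundary between the two distorted lattices.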
Embedded quantum dots
The researchers embedded indium-arsenide quantum dots inside the heterostructure. At first they used a relatively high-power laser, exciting the quantum dots to emit broadband light. When they focused the laser on one side of the edge, they found that light in the bulk band gap travelled straight through without scattering, whereas light at other frequencies scattered into the bulk.
For their next experiment the researchers turned down their laser to 10 nW and focused it on individual quantum dots, exciting them to emit photons one at a time. To work out if photons with opposite polarizations would travel in opposite directions, the researchers applied an external magnetic field to separate the energies of the two polarization states. They found that, as predicted, photons detected at one end of the edge state had a higher energy than those detected at the other end. Finally, the researchers introduced a 60° bend into the edge: photons in the edge state tracked this perfectly without reflecting or scattering into the bulk. “Now our photons can, in principle, strongly interact with each other,” says Hafezi. “So now all these ideas of many-body physics and quantum-information processing can be considered in the context of topological photonics.”
“It’s definitely exciting,” says Peter Lodahl of the Niels Bohr Institute at the University of Copenhagen in Denmark. “It’s a major experimental step forward to demonstrate true topological phenomena in a quantum regime.” His own group first demonstrated that the direction of photons from single quantum emitters in waveguides can depend on their spin. “It’s too early to tell to what extent this topological addition gives you practical advantages,” he says, “but it’s a really exciting thing to investigate further.”
‘Big step forward’
“This [work] is a big step forward in the implementation of new optical properties in materials,” agrees Alberto Amo of the University of Lille. “The next step would be to go further with these ideas of quantum optics in topologically protected circuits and connect two or more single-photon emitters such that they can start to interact. When you have that, you can implement quantum gates and other quantum optics protocols.”
Some types of molecules could form in powerful quasar winds that would ordinarily blast molecules to bits, according to astrophysicists in the US. Their findings, based on computer simulations of the outflows, show that feedback in active galaxies is more complex than had been thought.
Quasars are powered by spiralling discs of gas around a supermassive black hole. The gas in the disc is heated to extreme temperatures and drives powerful radiation-driven winds that move out into the host galaxy at hundreds of kilometres per second, sweeping up molecular gas that could otherwise be used for star formation.
Under normal circumstances, this molecular gas should be blasted into its atomic constituents by the outflow radiation. However, observations of quasars have revealed unexpected molecular gas, including carbon monoxide, hydroxyl and warm molecular hydrogen, in the outflows. Now Alexander Richings and Claude-André Faucher-Giguère, of Northwestern University, US, think they know how that molecular gas got there.
Chemical simulation
Richings wrote computer code that can model the chemical processes that occur within interstellar gas when it is accelerated by powerful quasar winds. By running simulations based on the code, he and Faucher-Giguère could confirm that any cold molecular gas swept up by the outflows would be heated and ultimately destroyed via photodissociation.
However, they also found that this was not the end of the story. Depending on several factors, including the luminosity of the quasar, the metallicity of the gas swept up in the outflows and the density of the gas, the outflows can cool and allow molecular gas to form once again.
The simulations showed that, at a distance of several hundred parsecs from the quasars (one parsec equals 3.26 light-years), the temperature in the outflows drops below a thousand degrees, allowing warm molecular hydrogen to form. When in an excited state, molecular hydrogen emits in the infrared, thereby removing thermal energy from the outflows and exacerbating their cooling, ultimately permitting other molecules and possibly even stars to form.
Feedback in doubt?
The outflows from quasars are thought to shut down star formation in galaxies by removing their star-forming material in a process called feedback. Does the realization that outflows can create molecular gas mean that the theory of feedback is in doubt?
Not at all, says Richings. When molecular gas forms in the outflow, “it is still moving outwards at a high velocity, which means that it isn’t gravitationally bound to the galaxy itself”, he says. The gas, and any stars that form, will continue to fly out into intergalactic space, leaving behind a dead, sterile galaxy.
In 2017 Helen Russell, from the University of Cambridge, led a team of astronomers that discovered cold molecular gas in bubbles blown by an active galaxy at the heart of the Phoenix Cluster, 5.7bn light-years away. Although in the Phoenix Cluster’s case the bubbles are blown by narrow relativistic jets rather than radiative outflows, “molecular formation as considered by Richings is likely important for both radiative and jet-driven molecular winds,” says Russell.
Falling back in
While the radiative outflows remove gas from a galaxy entirely, the jets observed by Russell in the Phoenix Cluster move gas at lower velocities, potentially allowing that gas, and any molecules that may form in it, to fall back onto the galaxy and start forming stars again. “So, in objects like Phoenix, we see a more complex interplay of processes,” she says.
For molecules to form, Richings’ simulations indicate that the abundance of elements heavier than hydrogen and helium in the outflows must be at least that of the Sun. In the early universe, when black holes were more likely to be active as a result of frequent collisions between galaxies, gas was not as chemically evolved as it is in large galaxies today, which would have hampered the ability of outflows to form molecular gas.
Blowing Fermi bubbles
Fortunately, we have a much more recent example of an outburst from an active supermassive black hole on our doorstep. The Fermi bubbles, discovered in 2010 by astronomers using NASA’s Fermi gamma-ray space telescope, are two large, radiation-filled cavities extending above and below the plane of our Milky Way galaxy, centred on the supermassive black hole at our galaxy’s heart. It is thought that perhaps eight or nine million years ago the black hole was briefly active, producing outflows powerful enough to blow out the Fermi bubbles.
“If molecular gas did form in the Fermi bubbles, it might be possible to observe it directly, for example from carbon monoxide emission,” says Richings.
However, it is not yet certain that there will be any molecular gas to find. Richings simulated the outflows for a duration of a million years and out to a distance of just 3260 light-years from the quasar; the Fermi bubbles are much larger and older.
“It would be interesting to repeat one of our simulations and follow the outflow for longer to see if we should expect to find molecular gas in the Fermi bubbles today,” he says.
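The simulated extent converts neatly using the parsec definition quoted earlier (1 pc = 3.26 light-years):

```python
# Unit check on the simulation extent: 3260 light-years is one
# kiloparsec, using the conversion quoted in the article.
LY_PER_PC = 3.26

def ly_to_pc(ly: float) -> float:
    """Convert light-years to parsecs."""
    return ly / LY_PER_PC

print(f"{ly_to_pc(3260):.0f} pc")  # 1 kpc
```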
Magnetic skyrmions are quasiparticle magnetic spin configurations with a whirling vortex-like structure that could be used as storage bits in next-generation “racetrack” memories and logic devices thanks to their small size and their robustness to external perturbations. Researchers at CNRS, Thales and the Université Paris-Saclay in France have now succeeded in nucleating these particles using current pulses and electrically detecting their presence by measuring their Hall resistivity at room temperature. The work brings real-world devices based on skyrmions a step closer to reality.
Skyrmions, small magnetic vortices that can be thought of as 2D knots in which the magnetic moments rotate through 360° within a plane, were first discovered in manganese-silicon and cobalt-iron-silicon crystals, but they can exist in many materials, notably magnetic thin films and multilayers. They could form the basis of future magnetic data-storage technologies because they can be made much smaller than the magnetic domains used in modern disk drives to store information. More importantly, they are mobile.
Being able to nucleate a specific number of skyrmions in a defined area means that the researchers can directly detect their presence by measuring their transverse resistivity using a Hall electrode placed in the area in which the skyrmions have been nucleated.
“One of the most important advances in our work is that we have been able to make electrical measurements on our sample while imaging it magnetically at the same time using magnetic force microscopy,” says Vincent Cros. “This is extremely significant and has never been done before and has allowed us to directly correlate how the electrical signal varies with nucleation/annihilation of individual skyrmions in the track.”
Towards real-world applications
This result demonstrates that electrical detection of a single skyrmion is feasible and represents an important step towards real-world applications based on these quasiparticles, Cros tells nanotechweb.org. This detection would correspond to the reading procedure in a skyrmion-based logic device.
The team, whose work was supported by the European Union grant MAGicSky No. FET-Open-665095, will now be working on further reducing the diameter of the skyrmions to 10 nm or less, and further increasing their spin-torque-induced velocity. “As for the electrical detection, we plan to use other physical effects such as the tunnel magnetoresistance in magnetic multilayers, to increase the amplitude of the electrical signal associated with individual skyrmions,” reveals Cros.