Quantum sensors based on microscopic flaws in the crystalline structure of diamond can work at pressures as high as 140 gigapascals, according to research by physicists at the Chinese Academy of Sciences in Beijing. The finding sets a record for the operating pressure of quantum sensors based on so-called nitrogen vacancy (NV) centres, and their newfound durability could benefit studies in condensed-matter physics and geophysics.
NV centres occur when two neighbouring carbon atoms in diamond are replaced by a nitrogen atom and an empty lattice site. They act like tiny quantum magnets with different spins, and when excited with laser pulses, the fluorescent signal they emit can be used to monitor slight changes in the magnetic properties of a nearby sample of material. This is because the intensity of the emitted NV centre signal changes with the local magnetic field.
The problem is that such sensors are fragile and tend not to work in harsh conditions. This makes it difficult to use them for studying the Earth’s interior, where gigapascal (GPa) pressures prevail, or investigating materials like hydride superconductors, which are fabricated at very high pressures.
Optically detected magnetic resonance
In the new work, a team led by Gang-Qin Liu of the Beijing National Laboratory for Condensed Matter Physics and Institute of Physics, Chinese Academy of Sciences, began by creating a microscopic high-pressure chamber known as a diamond anvil cell in which to place their sensors, which consisted of microdiamonds that contain an ensemble of NV centres. Sensors of this type work thanks to a technique called optically detected magnetic resonance (ODMR) in which the sample is first excited using a laser (in this case with a wavelength of 532 nm) and then manipulated via microwave pulses. The researchers applied the microwave pulses using a thin platinum wire, which is robust to high pressures. The final step is to measure the emitted fluorescence.
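To make the scheme concrete, below is a minimal sketch (in Python) of what an ensemble ODMR spectrum looks like: the fluorescence dips at two microwave frequencies set by the NV zero-field splitting D and the Zeeman shift from the local magnetic field. The numbers are textbook values and rough estimates rather than the team’s parameters – D ≈ 2.87 GHz at ambient pressure, a pressure shift of roughly 15 MHz/GPa (a low-pressure figure; as noted below, the rate slows at higher pressures) and a gyromagnetic ratio of about 28 MHz/mT.

```python
# Illustrative sketch of an NV-ensemble ODMR spectrum (not the team's analysis code).
# Assumed numbers: D ~ 2.87 GHz at ambient pressure, a pressure shift of ~15 MHz/GPa
# and an NV gyromagnetic ratio of ~28 MHz/mT.
import numpy as np

D0_GHZ = 2.87              # zero-field splitting at ambient pressure
DD_DP_GHZ_PER_GPA = 0.015  # assumed low-pressure shift of D with pressure
GAMMA_GHZ_PER_MT = 0.028   # NV gyromagnetic ratio (28 MHz/mT)

def odmr_spectrum(freq_ghz, pressure_gpa, field_mt, contrast=0.02, width_ghz=0.01):
    """Normalized fluorescence vs microwave frequency: two Lorentzian dips,
    split by the magnetic field and shifted upwards by pressure."""
    d = D0_GHZ + DD_DP_GHZ_PER_GPA * pressure_gpa
    signal = np.ones_like(freq_ghz)
    for f_res in (d - GAMMA_GHZ_PER_MT * field_mt, d + GAMMA_GHZ_PER_MT * field_mt):
        signal -= contrast / (1.0 + ((freq_ghz - f_res) / width_ghz) ** 2)
    return signal

freqs = np.linspace(2.5, 5.5, 3000)   # GHz
fluorescence = odmr_spectrum(freqs, pressure_gpa=100.0, field_mt=10.0)
d_100 = D0_GHZ + DD_DP_GHZ_PER_GPA * 100.0
print(f"D at 100 GPa ~ {d_100:.2f} GHz; dips near {d_100 - 0.28:.2f} and {d_100 + 0.28:.2f} GHz")
```

The magnetometry then amounts to tracking how the dip positions move with the local field; any spread of D across the ensemble – from a pressure gradient, for example – broadens the dips and washes out their contrast.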
“In our experiment, we first measured the photoluminescence of the NV centres under different pressures,” explains Liu. “We observed fluorescence at nearly 100 GPa, an unexpected result that led us to perform subsequent ODMR measurements.”
A large ensemble of NV centres in one spot
While the result was something of a surprise, Liu notes that the diamond lattice is very stable and undergoes no phase transition, even at pressures of 100 GPa (1 Mbar, or nearly a million times Earth’s atmospheric pressure at sea level). And while such high pressures do modify the energy levels and optical properties of NV centres, the modification rate slows down at higher pressures, allowing the fluorescence to persist. Even so, he tells Physics World it was “no easy task” to obtain ODMR spectra at Mbar pressures.
“There are many technical challenges we have to overcome,” he says. “One in particular is that high pressures decrease the NV fluorescence signal and bring extra background fluorescence.”
The researchers overcame these problems by using a large ensemble of NV centres (~5 × 10⁵ in a single microdiamond) and optimizing the light-collection efficiency of their experimental system. But their worries did not end there. They also needed to avoid a large pressure gradient over the sensor, as any inhomogeneity in the pressure distribution would have broadened the ODMR spectra and degraded the signal contrast.
“To meet this challenge, we chose potassium bromide (KBr) as the pressure medium and confined the detection volume to about 1 μm³,” says Liu. “We were able to obtain ODMR of NV centres at nearly 140 GPa using this approach.”
The maximum pressure may be even higher, he adds, since the pressure-induced modifications of the energy levels in NV centres turned out to be smaller than expected. “The key challenge to achieve this goal is to produce high pressures with small or no pressure gradient,” Liu says. “This might be possible using noble gas as the pressure-transmitting medium.”
According to Liu and colleagues, these experiments show that NV centres could be used as in situ quantum sensors for studying the magnetic properties of materials at Mbar pressures. One example might be to probe the Meissner effect (magnetic field exclusion) in LaH₁₀, a high-temperature superconductor that can only be synthesized at pressures above 160 GPa.
The researchers now plan to optimize their sensors and determine their high-pressure limit. They also hope to improve their magnetic sensitivity (by optimizing the fluorescence collection efficiency) and develop multi-modal sensing schemes – for example, measuring temperature and magnetic field simultaneously.
Ince talks about his circular journey with science: from enjoying it as a child, to feeling disengaged as a young adult, to now building his entire creative output around his fascination with the natural world. In an entertaining conversation, Ince talks about the importance of critical thinking and how he longs for a society that celebrates the beauty of uncertainty.
Also in the episode, Physics World editors discuss the following books, reviewed in the latest issue of the magazine:
Hypoxia imaging White light, prompt and delayed fluorescence images of a pancreatic tumour on a mouse. The ratio of delayed to prompt fluorescence shows improved contrast, enabling accurate distinction between hypoxic and healthy tissue. (Courtesy: CC BY 4.0/J. Biomed. Opt. 10.1117/1.JBO.27.10.106005)
Surgical resection of cancerous tissue is a common treatment used to reduce the likelihood of cancer spreading to healthy tissues. However, the efficacy of such surgery strongly depends on the surgeon’s ability to distinguish between cancerous and healthy tissue.
It is known that the metabolic activities of cancerous and healthy tissue differ significantly: cancerous tissues often have chaotic blood flow combined with low levels of oxygen, or hypoxia. With hypoxic regions common in cancerous tissue, accurate identification of hypoxia could help to differentiate cancerous from healthy tissue during surgery.
When fluorescent probes are excited by light, they subsequently return to the ground state by emitting light at a lower energy. Immediately upon illumination, probes emit a short optical pulse known as prompt fluorescence. Some probes can also produce a delayed fluorescence signal some time after illumination.
Although both prompt and delayed fluorescence signals decay over time, the prompt fluorescence signal decays rapidly in comparison with the prolonged decay of delayed fluorescence. The delayed fluorescence signal decay can be observed and analysed further to better understand the metabolic activity of nearby tissue.
Real-time oxygenation assessment
First author Arthur Petusseau and colleagues utilized an optical imaging system to monitor light emitted by the endogenous molecular probe protoporphyrin IX (PpIX) in a mouse model of pancreatic cancer where hypoxic regions are present.
Prize winning work Arthur Petusseau won second place at the 2022 Collegiate Inventors Competition for his hypoxia imaging research. (Courtesy: Arthur Petusseau)
The researchers administered the PpIX as either a topical ointment or via injection to the animal’s side flank and generated fluorescence using a 635 nm modulated laser diode as the excitation source. They found that the ratio of delayed to prompt fluorescence was inversely proportional to the local oxygen partial pressure in the tissue.
The weak intensity of the delayed fluorescence signal makes it technically challenging to detect. To overcome this, the researchers utilized a time-gated imaging system that enables sequential monitoring of the fluorescence signal within small time windows only. This enabled them to reduce the detection of background noise and accurately monitor changes in the delayed fluorescence signal.
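As a rough illustration of why the delayed-to-prompt ratio reports on oxygen, the toy model below assumes that the delayed (triplet-mediated) emission is quenched by oxygen in Stern–Volmer fashion and integrates each decay only over its detection gate. Every number here – the lifetimes, the quenching constant, the relative amplitude and the gate times – is a placeholder chosen for illustration, not a value from the study.

```python
# Toy model of time-gated prompt/delayed fluorescence (illustration only).
import numpy as np

TAU_PROMPT_S = 10e-9       # assumed prompt-fluorescence lifetime (ns scale)
TAU_DELAYED_0_S = 1e-3     # assumed delayed-fluorescence lifetime at zero oxygen
K_Q_PER_MMHG_S = 800.0     # assumed oxygen-quenching rate (Stern-Volmer form)
DELAYED_AMPLITUDE = 1e-4   # assumed relative strength of the delayed emission

def gated_counts(amplitude, tau_s, gate_open_s, gate_close_s):
    """Photons collected within a detection gate from an exponential decay."""
    return amplitude * tau_s * (np.exp(-gate_open_s / tau_s) - np.exp(-gate_close_s / tau_s))

def delayed_to_prompt_ratio(po2_mmhg):
    # Oxygen shortens and weakens the delayed emission: 1/tau = 1/tau0 + k * pO2
    tau_delayed = 1.0 / (1.0 / TAU_DELAYED_0_S + K_Q_PER_MMHG_S * po2_mmhg)
    prompt = gated_counts(1.0, TAU_PROMPT_S, 0.0, 100e-9)                 # gate around the pulse
    delayed = gated_counts(DELAYED_AMPLITUDE, tau_delayed, 10e-6, 1e-3)   # gate after prompt dies away
    return delayed / prompt

for po2 in (1, 10, 60):    # roughly: hypoxic tumour vs well-oxygenated tissue
    print(f"pO2 = {po2:3d} mmHg  ->  delayed/prompt ratio = {delayed_to_prompt_ratio(po2):.2f}")
```

Because oxygen quenches only the delayed component, the gated ratio falls as the tissue becomes better oxygenated – the inverse relationship the researchers exploit.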
Further analysis showed that the delayed fluorescence signal acquired from cancerous hypoxic cells was five-fold greater than that obtained from healthy, well-oxygenated tissue. The team also found that the delayed fluorescence signal could be amplified further by tissue palpation (applying pressure to the skin during physical examination), which enhances the transient hypoxia and enables temporal contrast between the two signals.
“Because most tumours have micro regional hypoxia present, imaging hypoxia signals from PpIX delayed fluorescence allows for excellent contrast between normal tissue and tumours,” says Petusseau.
The researchers conclude that monitoring delayed fluorescence arising due to the unique emissions of the PpIX fluorescent probe in the presence of hypoxia has several benefits in distinguishing between healthy and cancerous tissue during surgery. “Acquiring both prompt and delayed fluorescence in a rapid sequential cycle allowed for imaging oxygen levels in a way that was independent of the PpIX concentration,” they say.
“The simple technology required, and the fast frame rate capability coupled with the low toxicity of PpIX make this mechanism for contrast translatable to humans. It could easily be used in the future as an intrinsic contrast mechanism for oncologic surgical guidance,” claims Petusseau.
As science and technology continue to break new ground, there is growing demand for cleaner and more controllable vacuum environments that can be tailored to the needs of each application. Whether for precise experiments in quantum physics or the mass manufacture of computer chips, scientists and engineers are looking for high-performance equipment that can achieve ultrahigh vacuum (UHV) or extreme high vacuum (XHV) conditions while also working within the constraints of their application.
While vacuum systems made from stainless steel continue to be the technology of choice for most processes, specialized applications that require UHV or XHV conditions can benefit from the properties offered by alternative materials such as aluminium and titanium. In research centres with particle accelerators, for example, aluminium has become popular for beamline systems because it dissipates radiation more efficiently than stainless steel. It also retains less residual magnetism, minimizing any possible influence on the strong magnetic fields used to steer the beam.
“More scientists and engineers are seeing the benefits of using aluminium and titanium for their UHV or XHV processes,” comments Tom Bogdan, vice-president for business development at ANCORP, a US-based manufacturer of vacuum chambers, valves and components. “Large-scale scientific facilities and the R&D community offer rich environments for these advanced technologies, while the commercial sector is also starting to use aluminium to improve the process conditions for high-precision manufacturing.”
Bimetal bond: The explosion bonding process forms a wave-shaped join between two metals that is capable of withstanding XHV conditions. (Courtesy: LOS Vacuum Products)
ANCORP designs and manufactures its own line of vacuum equipment, and has also established a dedicated facility for constructing customized chambers from stainless steel. Now the company has formed a partnership with LOS Vacuum Products, which specializes in fabricating vacuum hardware from aluminium and titanium, to enable its customers to exploit these high-performance materials in their UHV and XHV processes. “This is a great partnership between two companies who are focused on delivering high-performance vacuum solutions for their customers,” comments Bogdan. “LOS Vacuum will benefit from our ability to make connections with the global market, while we gain from adding their unique technology to our product portfolio.”
LOS Vacuum Products was set up in 2013 to design and build bespoke vacuum chambers for UHV and XHV applications. “Aluminium and titanium are becoming more popular to meet the growing requirements for cleaner and more precise technology development,” says Eric Jones, the company’s founder and owner. While the initial demand originated mostly from the research community, Jones reports growing interest from equipment manufacturers targeting the semiconductor sector, as well as emerging markets for medical systems and solar-cell production. “As those technologies grow the vacuum environment becomes critically important,” he says.
One key advantage of aluminium is that it is quicker and easier to machine than stainless steel, and so offers more flexibility for incorporating bespoke features into the design. Its superior thermal conductivity also allows an aluminium chamber to heat up faster and more uniformly, which speeds up the bake-out process needed to achieve UHV or XHV conditions. “Stainless steel needs to be much hotter to desorb the gas molecules and contaminants from the surface of the vacuum chamber, and that requires more energy over a longer period of time,” explains Jones. “Aluminium reduces both the cost-of-ownership and the environmental impact, which combined with its enhanced manufacturability makes it an attractive option for the semiconductor sector.”
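A back-of-the-envelope comparison illustrates the point. Using handbook-typical material properties (assumptions for illustration, not figures supplied by either company), the characteristic time for heat to soak through a chamber wall scales as L²/α, where α = k/(ρc) is the thermal diffusivity:

```python
# Rough comparison of chamber-wall heat-soak times during bake-out, using
# handbook-typical properties (illustrative assumptions, not vendor data).
WALL_THICKNESS_M = 0.02  # assumed 2 cm wall

materials = {
    #              k (W/m/K)  rho (kg/m^3)  c (J/kg/K)
    "aluminium":    (237.0,      2700.0,       900.0),
    "stainless":     (16.0,      8000.0,       500.0),
}

for name, (k, rho, c) in materials.items():
    alpha = k / (rho * c)                  # thermal diffusivity, m^2/s
    tau = WALL_THICKNESS_M ** 2 / alpha    # characteristic heat-soak time, s
    print(f"{name:9s}: diffusivity = {alpha:.1e} m^2/s, ~{tau:.0f} s to heat through")
```

Real bake-outs are governed as much by outgassing kinetics as by conduction, but the order-of-magnitude gap in diffusivity is why an aluminium chamber reaches a uniform bake temperature so much faster.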
Bespoke fabrication: This bimetal transition fitting, which combines stainless steel with aluminium, has a diameter exceeding 60 cm. A range of bespoke components can be made to meet the needs of different applications. (Courtesy: LOS Vacuum Products)
Meanwhile, vacuum chambers made from titanium offer a better option for experiments in quantum physics: their extra strength and weight provide more stability for processes that involve the generation of harmonics, and they are favoured for applications where it is essential to eliminate any magnetic signals. Titanium also acts as a getter that absorbs hydrogen – a common contaminant when using stainless steel in UHV or XHV environments – which enables titanium vacuum systems to support XHV conditions down to around 10⁻¹³ Torr.
Whether using aluminium or titanium, the best process conditions are achieved by using the same metal in the fixtures and fittings used to interface with the vacuum system. That includes the conflat flanges that are widely used to ensure a leak-tight seal in UHV and XHV environments, which work by pressing two hard metal faces machined with knife edges into a softer metal gasket. This causes the softer metal to flow and fill any microscopic imperfections on the hard metal faces, creating a seal that can withstand extreme temperatures and pressures down to the XHV regime.
ANCORP already manufactures conflat flanges made entirely of stainless steel, while LOS Vacuum produces all-titanium versions as well as several models that combine an aluminium body with faces made of stainless steel or titanium. “The specialist components fabricated by LOS Vacuum allow us to offer a unique solution for customers who have adopted aluminium or titanium for their vacuum systems,” comments Bogdan. “We have seen customers who have resorted to using double O-rings for their enclosures, but a metal-to-metal seal reduces outgassing and results in a better process.”
The bimetal components are fabricated using a technique called explosion bonding, a solid-state welding process that produces a strong mechanical bond just a few microns thick. An explosive charge forces the metals together at extremely high pressures, causing the atomic layers close to the two surfaces to become a plasma. As the metals collide, a jet of plasma is propelled along the surfaces that scrubs them clean of any impurities, while the fluid-like behaviour of the metals creates a wave-shaped join that is strong enough to withstand UHV and XHV conditions.
ANCORP now supplies a standard line of bimetal flanges and fittings produced by LOS Vacuum, while the explosion bonding process also enables bespoke components to be fabricated from any two dissimilar metals. Two of the standard configurations combine an aluminium base with faces made from different grades of stainless steel, while another joins a titanium-faced flange to an aluminium body. This second version has the advantage of eliminating any trace magnetism and avoiding the safety hazards caused by background radiation, and can even be more cost effective than a flange faced with stainless steel. “The raw material may be more expensive but a bimetal flange made from stainless steel requires more steps to fabricate,” comments Jones. “For titanium the bonding process is less complex and less expensive.”
Jones and his team have also been able to eliminate one of the interlayer materials that are usually needed to transition from one metal to another. Their process removes the need for copper, which semiconductor manufacturers are particularly keen to avoid. “That’s now part of the standard product line,” says Jones. “It cuts the cost of materials and manufacture, and reduces the possibility of a leak pathway through the flange.”
Added options: Products available from ANCORP through the partnership with LOS Vacuum include all-titanium vacuum chambers, bimetal face-gland fittings, and bimetal flanges. (Courtesy: LOS Vacuum Products)
As part of the partnership ANCORP will also extend its existing custom fabrication capabilities to design and supply bespoke vacuum chambers made from titanium or aluminium. During the initial design phase the company works closely with its customers to understand their specific requirements and to recommend the best technology for their application. “If a customer has a particularly unusual or demanding process that would benefit from using aluminium or titanium, we’ll get Eric and his team involved to provide some specialist expertise,” says Bogdan. “As well as tailoring the design to the application, we need to ensure that the team at LOS Vacuum can manufacture the solution to the required parameters.”
Bogdan is confident that adding these specialized capabilities to ANCORP’s technology offering will help to open up new markets in the R&D sector as well as in semiconductor manufacture. “These low-outgassing solutions can deliver a real process advantage in some applications,” he says. “We want to make these options available to more customers in the international scientific community, as well as in the commercial sector.”
For Jones, meanwhile, the partnership with ANCORP offers a way to expose the company’s specialist fabrication techniques to a much larger customer base. “We’re still a small company that has focused on delivering unique solutions for specific projects, and we don’t have a lot of power for making connections with new customers,” he comments. “The partnership with ANCORP will enable us to bring our product range and technical expertise to the global market.”
“It’s probably fair to say that no single geological phenomenon has resonated in so many communities, with so many people, and in so many different ways,” says the British Geological Survey on the topic of the Anthropocene – the proposed geological epoch in which human activity is considered to be the dominant influence on the Earth’s climate, environment and ecology. “The discovery of plate tectonics was the last time geology made a significant change to our understanding of how the Earth works. The Anthropocene is no less significant.”
Significant though the concept may be, it is still rather nebulous. Indeed, the term “the Anthropocene” has permeated through to other disciplines; graced the May 2011 cover of the Economist magazine; and even inspired the work of numerous artists. But members of the earth sciences community (specifically stratigraphers, who study the way sedimentary rocks are layered) are still debating exactly when this new geological period dominated by human activity is supposed to have begun.
Many of these changes will persist for millennia or longer, and are altering the trajectory of the Earth system, some with permanent effect
International Commission on Stratigraphy's Anthropocene Working Group
The Subcommission on Quaternary Stratigraphy (SQS) – a constituent body of the International Commission on Stratigraphy (ICS), the largest scientific organization in the International Union of Geological Sciences – has been developing a formal definition of the Anthropocene. According to the SQS, various phenomena are associated with this new epoch, including increased erosion and sediment movement driven by urbanization and agriculture. Unprecedented new materials, such as concrete, fly ash (a coal combustion product) and plastics, are now part of the sedimentary record. The impact also extends to ecosystem changes, such as rampant deforestation and the biodiversity losses that have led some to assert that we are in the middle of a sixth great mass extinction.
“Many of these changes will persist for millennia or longer, and are altering the trajectory of the Earth system, some with permanent effect,” the SQS’ Anthropocene Working Group (AWG) wrote in its 2019 report on the definition of this epoch, published on the SQS website. “They are being reflected in a distinctive body of geological strata now accumulating, with potential to be preserved into the far future.”
The Anthropocene begins, etymologically speaking
The roots of the Anthropocene concept can be traced back to at least as early as 1873, and the writings of the Italian geologist Antonio Stoppani. A former Catholic priest, Stoppani regarded “the creation of man” as “the introduction of a new element into nature, of a force wholly unknown to earlier periods”. Given this, he argued, an “Anthropozoic era” had begun “with the first trace of man” and would end only when Earth had “escape[d] the hands of man…thoroughly and deeply carved by his prints”.
Four years later, American geologist Joseph Le Conte of the University of California Berkeley took this a step further, arguing that humankind had become the “chief agent of change”. Based on the notion that previous ages were “reigns of brute force and animal ferocity”, while the age of man was “characterized by the reign of mind”, he proposed the “Psychozoic era”. In a similar fashion, the Russian geochemist Vladimir Vernadsky suggested in 1926 that the biosphere had become the “Noosphere”, with nous being the Greek for “mind” and “reason”.
1 Pathway to the past An illustration of the “geological time spiral” – a representation of the eons (outermost ring), periods (middle ring) and epochs (innermost ring) of planetary history. The formal geological time scale (GTS) is a representation of time based on the rock record of Earth. It is a system of chronological dating that uses chronostratigraphy (the process of relating strata to time) and geochronology (a branch of geology that aims to determine the age of rocks). It is used to describe the timing and relationships of events in geological history, from the longest scale of “eon” – several hundred million years. The other scales are “era” – tens to hundreds of millions of years; “period” – millions to tens of millions of years; “epoch” – hundreds of thousands of years to tens of millions of years; “subepoch” – thousands of years to millions of years; and “age” – thousands of years to millions of years. The current epoch, the Holocene, began 11,700 years ago, in 9700 BCE, with the dawn of agriculture. (Courtesy: Shutterstock/Nicolas Primola)
Around the same time, in 1922, fellow Russian Alexei Petrovich Pavlov proposed that the present geological era be dubbed the “Anthropogene”. The prefix “anthropo-”, meaning “human”, referred to how the era would be defined based on the emergence of the genus Homo – making it roughly equivalent to the “Quaternary”, the period covering the last 2.6 million years – while “-gene” is the established suffix for a geological period. Despite being adopted in Soviet circles, and being formally approved by the Soviet Union’s Interdepartmental Stratigraphic Committee in 1963, the term never caught on elsewhere.
The first use of “Anthropocene”, meanwhile, is credited to the American limnologist Eugene Stoermer, who used the term informally in the late 1980s to refer to the impact and evidence of human activity on the planet. It would not be until the year 2000, however, that the term truly achieved scientific popularity, after being re-coined by the Nobel-prize-winning atmospheric chemist Paul Crutzen during a meeting of the International Geosphere–Biosphere Programme (IGBP) in Cuernavaca, Mexico.
According to an account by meeting organizer and chemist Will Steffen – amid a series of presentations describing various significant and geologically recent changes in Earth’s atmosphere, oceans and sediments – Crutzen became agitated by repeated references to the Holocene, the geological epoch covering the last 11,700 years. Bringing proceedings to a screeching halt, Crutzen remonstrated that they were no longer in the Holocene and on the spot, independently of Stoermer, cooked up “Anthropocene” to describe the current epoch.
A few months later, Crutzen – now in collaboration with Stoermer – wrote a short article in the IGBP newsletter expanding on the concept (Global Change Newsletter May 2000 p17). Detailing many of the ways in which humans have affected the Earth, from the transformation of natural landscapes to the hole in the ozone layer, the pair wrote that “considering these and many other major and still growing impacts of human activities on Earth and atmosphere, it seems to us more than appropriate to emphasize the central role of mankind in geology and ecology by proposing to use the term ‘Anthropocene’ for the current geological epoch”.
The Anthropocene begins, geologically speaking
The key difference between Crutzen’s conception of the Anthropocene and its predecessors is that while the latter imagined humanity’s influence as an incremental one, the former saw something more abrupt.
As science historian Jacques Grinevald, formerly of the University of Geneva, and his colleague Clive Hamilton, a public ethics expert from Australia’s Charles Sturt University, put it in a 2015 paper (The Anthropocene Review 2 59), “The Anthropocene represents, according to those who initially put it forward, a dangerous shift, and a radical rupture in Earth history. This means that the Holocene can be no guide to the Anthropocene geologically or intellectually.” This is, of course, rather in keeping with the way in which previously established divisions of geological time are defined.
Physical record The “golden spike” (bronze disc in the lower section of the rock) of the Global Boundary Stratotype Section and Point (GSSP) for the Ediacaran period, located in the Flinders Ranges in South Australia. (CC BY-SA 3.0/Peter Neaum)
Geologists mostly use two approaches to divide Earth’s history. The first is geochronological, based on a combination of absolute and relative dating. Absolute dating involves analysing radioactive isotopes. Relative dating uses tools such as records of palaeomagnetism and stable isotope ratios. Geologists then divide time into spans of varying size, with eons being the largest, then eras, periods, epochs and finally “ages” as the smallest. Each span has an equivalent unit in the stratigraphic record – ages of time, for example, match up to successions of rock strata called “stages”.
The lower boundary of each stage in the rock record across the globe is established with respect to a single example in the field. Established by the ICS, these paragons are formally (and verbosely) known as the Global Boundary Stratotype Sections and Points (GSSP). However, geologists commonly refer to them as “golden spikes”, after the metal pins driven into the rocks to mark the precise location of the boundary. Most, but not all, of these boundaries are defined by changes in the fossil record.
So, for example, the start of the Sinemurian stage 199.3 million years ago is marked by the first appearance of two species of ammonite in the rock record, with the golden spike located in a rock outcrop near East Quantoxhead in Somerset, UK. The divide between the Maastrichtian and Danian stages 66 million years ago is marked by a section in El Kef, Tunisia, of the iridium layer created by the asteroid impact thought to have triggered the mass extinction that killed the dinosaurs.
Spiking up debate
So where in the geological record should the Anthropocene begin? This is the thorny question that members of the AWG have been grappling with since 2008. Indeed, as AWG convener and palaeobiologist Jan Zalasiewicz once joked, “There have been suggestions that our own work is proceeding on a geological timescale!”
In Crutzen’s original conception, he proposed that the Anthropocene might start in 1784, during the Industrial Revolution. Writing in a 2002 paper (Nature 415 23), he justified this as “when analyses of air trapped in polar ice showed the beginning of growing global concentrations of carbon dioxide and methane” – a date, he noted, that coincides with the refinement of the steam engine by the Scottish engineer James Watt.
However, various alternative proposals have been made, the earliest being the advent of agriculture some 12,000 years ago. Other suggestions include 1492, when the “Columbian exchange” – the transatlantic voyage of the Italian explorer Christopher Columbus, and the European colonization and global trade that followed – saw the widespread transfer of animals across the continents. Another is 1610, which saw a drop in carbon-dioxide levels associated with the arrival of imperial colonists in the Americas (European diseases having devastated local populations, leading to forest growth across abandoned settlements).
The AWG, meanwhile, has zeroed in on the “Great Acceleration” of the mid-20th century, when various socioeconomic and Earth system trends began to increase dramatically as a result of “population growth, industrialization and globalization”. According to Zalasiewicz and colleagues, the acceleration has left an array of proxy signals in the geological record, “the sharpest and most globally synchronous of [which] is made by the artificial radionuclides spread worldwide by the thermonuclear bomb tests from the early 1950s”.
2 Time question Interactions between humans and the environment, and their impact on global processes such as climate, are spread over different times and places, making it difficult to determine the onset of the Anthropocene in the global stratigraphical record. This illustration depicts the complexities involved in coming up with a formal definition of the Anthropocene as an epoch. It shows both a geological timeline (top) and a historical timeline (bottom), with colour densities broadly indicating the intensity of change. The Anthropocene Working Group (the “AWG view”) has proposed the mid-20th-century “Great Acceleration” as a start date for the Anthropocene epoch. But geologist Philip Gibbard and colleagues suggest that it may be more practical to categorize the Anthropocene as a geological event, rather than an epoch. A geological event is defined as a temporally and spatially heterogeneous, diachronous happening in Earth history that affects the Earth system and the formation of geological strata. (Adapted from Journal of Quaternary Science 37 395. Used with permission of John Wiley & Sons)
In 2016 the AWG presented the preliminary summary of its analysis at the 35th International Geological Congress, where a majority voted to consider the Anthropocene as “stratigraphically real”. It recommended a move to formalize the epoch with a suitable GSSP located in the mid-20th century.
At present, 12 candidate sites for the Anthropocene Golden Spike are being considered. They include coral reefs off the coast of Australia and the Gulf of Mexico; a peatland in Poland; an ice core on the Antarctic Peninsula; and a series of anthropogenic sediments in Vienna. Even once this list has been whittled down to one, the process of ratification is elaborate, and involves the proposal being approved at various stages before final evaluation by the International Union of Geological Sciences.
A turn of events?
As Zalasiewicz himself has noted, there is no guarantee that the working group’s recommendation for an Anthropocene epoch will ultimately be accepted. In fact, there are some scientists who believe that there may be better ways to conceptualize it.
Most of humanity’s impact on the environment has been diachronous, occurring at different times in different places
The leader of this charge, ironically enough, is Quaternary geologist Philip Gibbard of the University of Cambridge, UK, who was responsible for creating the AWG in the first place, joking that “I’ve only myself to blame!” In a series of papers – the latest published in the Journal of Quaternary Science (JQS 37 1188) – Gibbard and his colleagues argue that while the definition of an epoch calls for a boundary that occurred at a fixed point in time across the globe, most of humanity’s impact on the environment has been diachronous, occurring at different times in different places.
Instead, they propose, it might be better to sidestep the problem of setting a single start date for the Anthropocene and instead classify it less formally as an “event”. It would run in parallel to the geological timescale and encapsulate all of humanity’s diverse effects and their complex spatial and temporal variations. In this way, they argue, the Anthropocene would be seen more like other so-called diachronous transformations in Earth’s history that occurred on different geological timescales, from place to place.
These include the Great Oxidation Event of 2.4–2.0 billion years ago, which saw oxygen accumulate in the atmosphere after the evolution of photosynthesis; or the explosion of marine life over some 25 million years during the Great Ordovician Biodiversification Event.
Epoch, event or episode?
To proponents of the epoch approach, Gibbard and colleagues’ proposal is perhaps too radical a departure from the established protocol they are following. Some have also suggested that “episode” would be a more appropriate name than “event” for such a long-lasting phenomenon – although this does not seem to negate the concept.
AWG member and stratigrapher Martin Head of Canada’s Brock University, for example, said that the group “is defining the base of the Anthropocene as a formal epoch using geological event stratigraphy. This is standard procedure when defining units of the International Geological Time Scale”. Event stratigraphy is the study of the traces of (geologically) short-lived events – lasting up to thousands of years – in the sedimentary record.
Technofossils Human-made artefacts accumulate in deposits such as this mud flat at East Tilbury on the River Thames estuary. This waste is collectively known as “technofossils” because it forms part of the stratigraphic record. (CC BY-SA 4.0/Strat188)
In contrast, Head says, the Anthropocene “event” proposal is “neither strictly geological nor an event”, at least in normal Quaternary usage. “We consider it an interdisciplinary concept that combines elements of geology and the social sciences. It could coexist with a formal Anthropocene epoch if found useful, but conceptually cannot replace it and would need to be renamed to avoid confusion.”
Zalasiewicz agrees, adding “The ‘Anthropocene event’ concept is very different to that of the Anthropocene as a formal epoch of the Geological Time Scale…these are very different concepts, and it invites confusion to apply the same term of ‘Anthropocene’ to both of them. Under a different name, the ‘event’ idea could be potentially complementary to a precisely defined Anthropocene epoch, rather than in conflict with it or as an alternative to it.”
A challenge to this rationale, Gibbard and his colleagues note, is that the term Anthropocene has already taken on a life of its own. “Most of the world is not a geologist though, right? I think, in a lot of ways, the cat’s out of the bag,” says palaeoecologist Jacquelyn Gill of the University of Maine.
She adds, “What an event framework allows us to do is to characterize the Anthropocene in the way that it’s already being used by the broader public, by journalists, by historians, environmentalists, etc – all of whom sort of use the word as almost a metaphor for the suite of activities that humans do that leave a marked impact on the planet.” The issue with setting up the Anthropocene as a time period with a particular global start date, Gill argues, is that it risks trivializing the impacts that humanity had on the Earth before that point.
Not all scholars outside of geology, however, appear to share these concerns. Historian Julia Adeney Thomas of the University of Notre Dame, who has written extensively on the Anthropocene, tells Physics World that the idea of a 50,000-year event doesn’t resonate with the way the concept is seen in history, the social sciences or allied disciplines. In fact, she explains that “what works best for us is a definition of the Anthropocene shared by everyone, and rooted in biostratigraphy and Earth system science.” While we human beings have always had an impact on our environment, she adds, it is only in the last 70 years or so that we have truly transformed the way the Earth system functions.
Ultimately, defining the Anthropocene is less important than engaging with the issues it invokes. “It’s a great conceptual framework, and people are going to continue to use it, regardless of whether we say it’s 1950 or 1750, or whatever it ends up being,” notes Gill. “The reason we name and delineate things in nature is so that we have a common language to ask questions. We’re already doing that about the Anthropocene,” she says. “As it stands right now, the decision about when it starts has no bearing on the questioning that we’re doing. I just hope that this definition doesn’t limit that questioning in the future.”
A new ultrathin photovoltaic cell could be used as a power source for satellites in regions of space that experience high levels of radiation. Developed by researchers from the University of Cambridge in the UK, the device uses a thin layer of gallium arsenide (GaAs) to absorb light and is more robust to proton radiation than thicker devices studied previously.
Cosmic radiation is ionizing radiation made up of a mixture of heavy ions and cosmic rays (high-energy protons, electrons and atomic nuclei). The Earth’s magnetic field protects us from 99.9% of this radiation, and the remaining 0.1% is significantly attenuated by our atmosphere. Spacecraft receive no such protection, however, and radiation can damage or even destroy their onboard electronics.
In solar cells, radiation damage introduces defects into the photovoltaic materials that form the cell’s light-collecting layer. These defects trap the photoactivated charge carriers responsible for generating a flow of electric current across the material, reducing the current and ultimately lowering the power output of the cell.
The further the charged particles must travel through the solar cell, the more likely they are to encounter a defect and be trapped. Hence, reducing this travel distance means that a smaller fraction of the particles will become trapped by defects.
One way to do this is to make the solar cells thinner. In the new work, researchers led by Armin Barthel did exactly that, fabricating their cells from a stack of semiconducting materials with a GaAs light-absorbing layer just 80 nm thick.
To test whether this strategy worked, the researchers imitated the effects of cosmic radiation by bombarding the new cell with protons generated at the Dalton Cumbrian Nuclear Facility in the UK. They then measured the cell’s performance using a combination of time-resolved cathodoluminescence, which measures the extent of radiation damage, and a device known as a compact solar simulator that determines how well the bombarded devices convert sunlight to power.
Barthel and colleagues found that the lifetimes of charge carriers in their device decreased from around 198 picoseconds (1 ps = 10⁻¹² s) before irradiation to around 6.2 ps afterwards. However, the output current remained constant up to a certain threshold of proton fluence, beyond which it dropped sharply. The team says this drop correlates with the point at which the carrier lifetime, calculated from cathodoluminescence, becomes comparable to the time it takes for carriers to cross the ultrathin device.
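A back-of-envelope check (not taken from the paper) makes that comparison concrete. Assuming an ambipolar diffusion coefficient of about 10 cm²/s – a typical value for GaAs, not a measured one – the diffusion length √(Dτ) falls to roughly the 80 nm layer thickness once the lifetime drops to a few picoseconds:

```python
# Back-of-envelope check (assumed D, not from the paper): compare the carrier
# diffusion length sqrt(D * tau) with the 80 nm absorber thickness.
import math

D_CM2_PER_S = 10.0   # assumed ambipolar diffusion coefficient, typical for GaAs
LAYER_NM = 80.0      # absorber thickness quoted in the article

for label, tau_ps in (("pre-irradiation", 198.0), ("post-irradiation", 6.2)):
    l_diff_nm = math.sqrt(D_CM2_PER_S * 1e-4 * tau_ps * 1e-12) * 1e9   # cm^2/s -> m^2/s, m -> nm
    comparison = ">" if l_diff_nm > LAYER_NM else "<="
    print(f"{label:16s}: tau = {tau_ps:6.1f} ps, diffusion length ~ {l_diff_nm:4.0f} nm "
          f"({comparison} {LAYER_NM:.0f} nm layer)")
```

On these assumed numbers, even the heavily irradiated ultrathin cell can still sweep most carriers out before they are trapped, whereas a micrometre-scale absorber would be expected to lose its current at much longer lifetimes.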
Power generation in demanding space environments
“The main potential application of the devices studied in this work is for power generation in demanding space environments,” Barthel says. In a study describing the research, published in the Journal of Applied Physics, the researchers suggest that one such environment might be medium Earth orbits, such as the Molniya orbit, which passes through the centre of Earth’s proton radiation belt and is used for monitoring and communications at high latitudes. As better-protected low Earth orbits become ever more cluttered, such orbits will become increasingly important.
The orbit of Jupiter’s moon Europa, which is of particular scientific interest in the search for extraterrestrial life, is another example. This moon has one of the most severe radiation environments in the solar system and landing a solar-powered spacecraft there will require highly radiation-tolerant cells.
Although the new cells are primarily designed as a power source for satellites, Barthel tells Physics World that he “does not rule out the idea” of using them to generate power in space for use down here on Earth. He and his colleagues now plan to use what they learnt from this study to further optimize their cells. “So far, we have only looked at one thickness for our ultrathin cells and our results will help us figure out if there is a different thickness that gives a better compromise between radiation tolerance and light absorption,” Barthel explains. “We are also interested in looking at stacking multiple ultrathin cells to improve power output and also trying different material combinations.”
This video summarizes the key messages from the 614-page report. The dream is for a new generation of world-class missions, to return complementary data – in the same way we saw with Chandra, Hubble, Spitzer and other great observatories of the recent past. There is also a push for nimbler “probe missions” with more focused science goals, and recommendations to foster greater diversity in the astronomy community and better relations with partner communities.
Find out more about the ASTRO 2020 report in this recent feature article written by science writer Keith Cooper.
An unexpected excess of light from beyond the Milky Way could be evidence of decaying dark matter particles, a trio of US-based astronomers has suggested. Drawing from observations made by NASA’s New Horizons probe, José Luis Bernal, Gabriela Sato-Polito and Marc Kamionkowski at Johns Hopkins University propose that this light could provide evidence for the decay of axions, which are hypothetical particles that could account for some of the dark matter that appears to permeate the universe.
The cosmic optical background (COB) encompasses all light originating from sources beyond the Milky Way and it should contain important information about the structure of the universe. However, astronomers have found it very difficult to disentangle the COB from light originating much closer to home – in particular, sunlight that is scattered by interplanetary dust.
Best known for its flyby of Pluto in 2015, New Horizons has since advanced deep into the outer solar system. The probe is currently in the Kuiper Belt, which contains a huge number of icy objects. Crucially for astronomers interested in the COB, interplanetary dust is far sparser in the Kuiper Belt than it is in regions closer to the Sun. Recently, the COB has been measured using New Horizons’ Long Range Reconnaissance Imager (LORRI) – which previously captured the famous high-resolution images of Pluto’s surface. Away from the influence of scattered sunlight, these new observations provide the most precise measurement of the COB to date.
Nearly twice as bright
A surprising discovery is that the COB is almost twice as bright as would be expected from the latest galaxy surveys. This excess has a statistical significance of 4σ, which means that it is unlikely that the observation is the result of a statistical fluctuation.
Now the Johns Hopkins trio suggests that this excess light could come from the decay of axions. These are hypothetical particles with tiny masses that could account for dark matter – itself a hypothetical substance that is invoked to explain puzzling observations of galaxies and larger-scale structures in the universe.
To explain LORRI’s observations, the astronomers investigated a model that has axions decaying into photons. They calculated the photon energy distribution resulting from these decays, and how they would contribute to the excess light detected by LORRI.
The team’s results suggest that the excess COB photons could have originated from axions with masses in the range 8–20 eV/c², with average decay lifetimes of roughly 10¹⁵ years. Furthermore, if the axion decay produces photons with one specific energy, there should be evidence for this in the spectral distribution of light in the COB.
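A quick kinematic sketch shows where such a signature would sit, assuming the standard two-photon decay channel in which each photon carries half the axion’s rest energy (the scenario that would give the single-energy line mentioned above). The LORRI bandpass is taken here as roughly 400–900 nm, an approximate figure used only for illustration.

```python
# Kinematic sketch (not the authors' model): for a -> two photons, each photon
# carries half the axion rest energy. LORRI band assumed to be roughly 400-900 nm.
HC_EV_NM = 1239.8   # h*c in eV nm

def rest_wavelength_nm(axion_mass_ev):
    """Rest-frame wavelength of each decay photon."""
    return HC_EV_NM / (axion_mass_ev / 2.0)

def redshift_to_reach(target_nm, axion_mass_ev):
    """Redshift at which the decay line would be observed at the target wavelength."""
    return target_nm / rest_wavelength_nm(axion_mass_ev) - 1.0

for m_ev in (8.0, 20.0):
    lam = rest_wavelength_nm(m_ev)
    print(f"m_a = {m_ev:4.1f} eV/c^2: line emitted at {lam:4.0f} nm, "
          f"reaches 600 nm at z ~ {redshift_to_reach(600.0, m_ev):.1f}")
```

Decay photons from axions in this mass range would therefore be emitted in the ultraviolet and redshifted into LORRI’s optical band over cosmic distances, appearing as a distinctive feature in the COB spectrum.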
The trio will now search for this spectral signature in future COB observations, which will improve as New Horizons advances further towards the edge of the solar system.
Lithium-ion batteries are currently pervasive across portable electronics and electric vehicles, and are on the ascent for emergent technology segments including grid storage, long-haul/heavy-duty transportation and electric aviation. This is, however, predicated upon safe operation of lithium-ion batteries under operational extremes, including extreme fast charge and untoward abuse scenarios that may lead to thermal runaway catastrophes. In this regard, it is imperative to understand the mechanistic implications of the underlying thermo-electrochemical interactions at a hierarchy of scales, and the resulting thermal-safety consequences. This presentation will provide an overview of mechanism-driven safety physics and analytics for delineating thermal stability signatures in lithium-ion battery chemistry and beyond.
Partha P Mukherjee is a professor of mechanical engineering and university faculty scholar at Purdue University. His prior appointments include assistant professor and Morris E Foster faculty fellow of mechanical engineering at Texas A&M University (2012–2017), staff scientist at Oak Ridge National Laboratory (2009–2011), director’s research fellow at Los Alamos National Laboratory (2008–2009), and engineer at Fluent India (subsidiary of Fluent Inc., currently Ansys Inc., 1999–2003). He received his PhD in mechanical engineering from The Pennsylvania State University in 2007. His awards include Scialog Fellows’ recognition for advanced energy storage, University Faculty Scholar and Faculty Excellence for Early Career Research awards from Purdue University, The Minerals, Metals & Materials Society Young Leaders Award, to name a few. His research interests are focused on mesoscale physics and stochastics of transport, chemistry and materials interactions, including an emphasis in the broad spectrum of energy storage and conversion.
Click here to learn more about Partha’s research with the Energy and Transport Sciences Laboratory (ETSL), in the Department of Mechanical Engineering at Purdue University.
Why not sign up for our other Battery Series webinars? Look out for more to be added in coming months. Even if you’re not able to join the live event, registering now enables you to access the recording as soon as it’s available.
FLASH radiotherapy – delivery of therapeutic radiation at ultrahigh dose rates – is the subject of much attention from researchers and physicians worldwide. The technique offers potential to spare healthy tissue while still effectively killing cancer cells, but many questions remain as to how the FLASH effect works, how to optimize radiation delivery, and how – and whether – to bring FLASH treatment into the clinic.
Hot on the heels of the FRPT 2022 conference in Barcelona, the Institute of Physics hosted a one-day meeting in London entitled: Ultra-high dose rate: Transforming Radiotherapy in a FLASH? Speakers at the event aimed to answer some of the above questions, and update the audience on the latest FLASH research in the UK.
What do we know?
The first speakers of the day were Bethany Rothwell from the University of Manchester and Mat Lowe from The Christie, who gave an introduction to the concept of FLASH and explained what we currently know, and don’t know, about the technique. “The big question in FLASH is why does the sparing effect happen, what’s the mechanism?” said Rothwell.
Looking at the raft of preclinical studies performed to date – which initially used electron beams, then moved onto protons and photons, and recently even included carbon and helium ions – Rothwell noted that experiments demonstrated different levels of normal tissue sparing, with dose modifying factors ranging between about 1.1 and 1.8, and no tumour modifying effects. Studies also suggest that high doses, of 10 Gy or above, are required to induce FLASH, and that oxygenation plays an important role.
Practical considerations Mat Lowe from The Christie and Bethany Rothwell from the University of Manchester. (Courtesy: Tami Freeman)
Focusing on proton-based FLASH, Lowe considered some of the practical considerations of clinical translation. “We have conditions for FLASH that we need to meet, but also have clinical requirements to meet,” he explained. He described some of the implications of requiring high dose rates and potentially having a dose threshold to meet.
For pencil-beam scanning, for example, a degrader is used to change the energy of the proton beam; but the resulting scattering and required collimation can impact the delivered dose rate. Lowe pointed out that the FAST-01 trial – the world’s first in-human FLASH clinical trial – used protons in transmission mode (where the beam passes through the patient rather than stopping at the Bragg peak). “We’ve given up some of the conformality to maintain a high dose rate,” he explained.
Lowe emphasized that protons are a promising modality for delivering FLASH, as the equipment is already suitable for generating high dose rates. But careful consideration is needed as to whether current planning and delivery approaches are still appropriate. Should FLASH radiotherapy be delivered in fractions, and how many? Could we deliver beams from different directions in each fraction? “We need to build on existing clinical procedures, so we don’t lose existing advantages,” he said. “There’s lots of work to be done.”
Studies with electrons
Kristoffer Petersson told the audience about research underway at the University of Oxford. He also described some of the challenges in bringing FLASH to the clinic – including defining the specific beam parameters needed to induce FLASH and understanding the underlying radiobiological mechanisms – and emphasized the need for more preclinical data.
Towards this goal, the Oxford team is using a dedicated 6 MeV electron linear accelerator, which can deliver electron beams at dose rates from a few Gy/min up to several kGy/s, to perform preclinical FLASH experiments. Petersson described some example studies performed on the system, including whole-abdomen irradiation of mice that confirmed FLASH sparing of normal intestinal tissue. Investigating the impact of various parameters on the treatment outcome revealed that while the pulse structure used to deliver FLASH could have an effect, the most important parameter is the average dose rate.
Looking further ahead, Petersson is considering a different approach. “I think that if FLASH is to have a big impact in the clinic, we need to go to megavoltage photon beams,” he said. The team’s current set-up enables FLASH with megavoltage photons, with FLASH dose rates achieved at depths from 0 to 15 mm. A new triode gun installation will enable higher and more flexible output, he noted.
Response monitoring
Other speakers at the meeting included David Fernandez-Antoran from the University of Cambridge, who described an innovative in vitro 3D culture system for analysing short- and long-term responses to FLASH treatment. Known as epithelioids, these 3D cultures can be created from various cells, including cancerous and normal mouse and human epithelial tissues, and can be maintained for year-long time periods. Fernandez-Antoran is working with the team at Manchester University to test the impact of proton FLASH irradiation on the samples.
Anna Subiel and Russell Thomas from the UK’s National Physical Laboratory told the delegates about NPL’s recent development of the world’s first portable primary standard calorimeter for absolute dosimetry of proton beams. Calorimeters benefit from being independent of dose rate and linear with dose in the ultrahigh dose rate range, making them ideally suited to measure high-dose, short-duration dose deliveries such as FLASH. Indeed, as Subiel explained, the NPL primary standard proton calorimeter was successfully used in the FLASH proton beam at Cincinnati Children’s Hospital prior to the start of the FAST-01 clinical trial.
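The principle is simple enough to write down: absorbed dose is energy per unit mass, so D ≈ cΔT for an absorber of specific heat capacity c, before the heat-loss and impurity corrections a real primary standard applies. The sketch below assumes a graphite absorber with a typical handbook specific heat of about 710 J/(kg·K) – illustrative numbers, not specifications of the NPL instrument.

```python
# Minimal sketch of calorimetric dosimetry: dose D [Gy] ~ c * dT, ignoring the
# correction factors a real primary standard applies. The specific heat is a
# typical handbook value for graphite, not an NPL specification.
SPECIFIC_HEAT_GRAPHITE = 710.0   # J/(kg K), assumed

def temperature_rise_from_dose(dose_gy):
    """Temperature rise (K) produced by a given absorbed dose (Gy = J/kg)."""
    return dose_gy / SPECIFIC_HEAT_GRAPHITE

for dose in (1.0, 10.0, 35.0):   # a conventional fraction vs FLASH-scale doses
    print(f"{dose:5.1f} Gy  ->  temperature rise ~ {1e3 * temperature_rise_from_dose(dose):.1f} mK")
```

Because the reading depends only on the total energy deposited, not on how quickly it arrives, the same relation holds whether the dose is delivered over minutes or in a FLASH burst – which is exactly why calorimeters suit ultrahigh dose-rate dosimetry.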
Elise Konradsson from Lund University in Sweden spoke about the use of FLASH radiotherapy to treat pets with spontaneous cancers. “We wanted to validate FLASH in a clinically relevant set-up, so we started a collaboration to treat veterinary patients,” she explained, noting that dogs can be treated with similar radiation qualities and field sizes as humans. She pointed out the dual benefits of this approach: the patients receive advanced diagnostics and treatment, while the researchers gain useful clinical information.
Closing the translational gap Elise Konradsson from Lund University. (Courtesy: Tami Freeman)
The Lund team is using a modified linac to deliver 10 MeV electron beams at dose rates of more than 400 Gy/s. Konradsson described a dose escalation trial in canine cancer patients, using a single fraction of FLASH, which concluded that the approach was feasible and safe, with response in most patients and a maximum tolerated dose of 35 Gy.
Konradsson also described the use of surface-guided radiotherapy for motion management during FLASH treatment of canine patients. “I really think veterinary patients can help us close the translational gap,” she told the audience.
Into the clinic?
The day concluded with a debate examining whether FLASH is ready for the clinic. The first speaker, Ran Mackay from The Christie, does not think that it is. He told the audience that he had attended FRPT 2022 hoping to understand the mechanisms underlying FLASH – but actually came back with a “top 10” of potential options, ranging from free radical recombination to DNA damage, reactive oxygen species to the effect of local oxygen consumption. “So can you deliver FLASH radiotherapy with all this uncertainty about FLASH mechanisms?” he asked.
While FLASH has been prescribed for patients, including treatment of a single patient with skin cancer and the FAST-01 proton FLASH trial of bone metastases, Mackay noted that “these are fairly safe starting points”.
Mackay argued that currently, it’s not clear how to prescribe a course of effective FLASH radiotherapy and we don’t understand enough about dose rate required to induce FLASH or the key parameters to optimize in a treatment plan. With so many questions remaining, he asked whether we are ready to move to prescriptions that rely on FLASH for normal tissue sparing. “We have to be cautious in how we move forward to broader application of FLASH radiotherapy,” he said.
Another problem is the lack of relevant treatment machines, with no CE-marked clinical device for delivering FLASH. “We can only deliver under an investigational device exemption granted in the US for one manufacturer’s proton machines,” said Mackay. He also pointed out that there is currently no way to verify FLASH delivery in vivo. “In reality, we deliver a high dose rate and hope to induce FLASH,” he explained. “But there’s nothing in FAST-01 to show evidence that we delivered FLASH – we hope that FLASH is being induced, but have no evidence.”
Arguing the opposing case, Sharma suggested that while we may not know the exact mechanisms underlying FLASH, fully understanding them may not be a necessity prior to early clinical implementation. Concerns regarding risks to trial patients will be addressed by regulatory bodies, he said, pointing out that clinical trials have already received regulatory approval and that long-term follow-up is built into these studies. He noted that more than 200 preclinical studies have been published, including peer-reviewed papers in high-impact journals – and none of them suggested that FLASH risks sparing the tumour as well as healthy tissue.
“So is FLASH ready for the clinic? I’d argue it is already in the clinic,” Sharma concluded. “Is it ready for CE or FDA approval? No, it’s not. But it is ready for clinical trials, the first steps have already been taken.”
And the audience agreed with Sharma, a show-of-hands vote concluding that FLASH is indeed ready for the clinic. A fitting end to a highly informative day.