Disabilities are created not only by an individual’s circumstances, but also by systems and social processes designed without them in mind. That is according to a new study that calls for more proactive inclusion efforts within science, technology, engineering, mathematics and medicine (Nature Communications 13 7208).
The study was carried out by a team led by Siobhán Mattison of the University of New Mexico, who says that the COVID-19 response proved that substantial changes can be implemented quickly if prioritized. What’s more, the adoption of remote working made work more accessible for some individuals. Building on these insights, Mattison’s team has drawn up an approach to inclusion comprising three strands: flexibility, accommodations and modifications.
“Flexibility” means recognizing that individuals’ needs vary widely, and therefore allowing people to work in a broad range of ways, for example by having a hybrid workforce.
“Accommodations” refers to adjustments that improve accessibility, such as designing spaces with ramps, while “modifications” acknowledges that job duties should sometimes be altered when flexibility and accommodations are not enough, for example by making summer teaching count towards teaching requirements.
“One of the most important things that institutions can do up front is to set aside funding to support recruitment and retention of scholars with disabilities,” Mattison told Physics World. “Conducting listening sessions is essential to evaluate needs and priorities so that money is spent in ways that are informed by people with first-hand experience of disability.”
Such activities are likely to have significant long-term benefits. After all, many people will experience disability at some point during their life – as the emergence of long COVID has brought into focus.
Co-author Logan Gin from Brown University highlights the advantages of having diverse perspectives in the workforce. “Researchers get to select the questions that are asked and answered, defining what is important for their disciplines,” he explains.
An extremely rare case of a flower that oscillates in colour over time has been discovered by researchers in Japan. The team, led by Nobumitsu Kawakubo at Gifu University, showed that the transformations are tied to cyclic changes in the flowers’ reproductive organs, which act to guide pollinating insects towards them.
Over 450 plant species are known to change the colours of their flowers. Botanists generally believe that these changes are related to the pollination conditions in each flower, signalling to insects where the best nectar can be found.
The vast majority of these colour changes are unidirectional, meaning that once the colour has changed, it cannot change back. Yet in their study, Kawakubo’s team discovered an example of a far rarer bidirectional flower, which oscillates back and forth between two colours. Known as Causonis japonica, this vine is native to tropical regions of Asia and Australia and is often considered a weed.
Fading to pink
At the start of its oscillation cycle, the researchers found that C. japonica’s flower disk is orange in colour, but soon fades to pink. After several hours of daylight, the flowers recover their orange hue, and the cycle repeats.
Through their analysis, the researchers discovered that these changes are strongly tied to levels of carotenoid pigments present in the flowers. Carotenoids are best known for giving carrots their distinctive orange hue, but are also responsible for producing red, orange, and yellow colours in organisms as diverse as pumpkins, lobsters, and bacteria surrounding volcanic hot springs.
In C. japonica, the researchers found that levels of carotenoids peaked at times when its flowers featured pollen-producing male reproductive organs. This roughly coincided with an uptick in nectar secretion in the flower, providing optimal conditions for insect pollination.
As the male organs withered and detached from the flower, its accumulated carotenoid molecules degraded, and the flower turned pink. Yet after a few hours of daylight, the flowers then developed pollen-receiving female reproductive organs. As carotenoids accumulated once again, the flowers turned from pink to orange, and began to secrete more nectar. Eventually, these organs withered away, the flowers turned pink, and the cycle began again.
Having uncovered these oscillating colour changes, Kawakubo and colleagues now aim to learn more about the biological mechanisms involved. While C. japonica has long been seen as a nuisance in Japan, the team hopes that its remarkable behaviour could spark a new appreciation of the plant. A better understanding of how colour changes take place in other flowers may also lead to better techniques for protecting threatened plant species.
Magnetic tunnel junctions (MTJs), which consist of two ferromagnets separated by a non-magnetic barrier material, are found in a host of technologies, including magnetic random-access memories, the read heads of computer hard-disk drives, magnetic sensors, logic devices and electrodes in spintronic devices. They do have a major drawback, though: they do not operate well when miniaturized to below 20 nm. Researchers in China have now pushed past this limit by developing a van der Waals MTJ based on a semiconducting tungsten diselenide (WSe2) spacer layer less than 10 nm thick, sandwiched between two ferromagnetic iron gallium telluride (Fe3GaTe2) electrodes. The new device also has a large tunnel magnetoresistance (TMR) at 300 K, making it suitable for memory applications.
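For context, the TMR ratio quoted in such studies is conventionally defined as TMR = (R_AP − R_P)/R_P, where R_P and R_AP are the junction resistances measured with the magnetizations of the two ferromagnetic electrodes parallel and antiparallel, respectively; the larger the contrast between these two resistance states, the more reliably the junction can be read out as a memory bit.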
Wang, who led the new device’s development together with Haixin Chang of the State Key Laboratory of Material Processing and Die & Mold Technology at Huazhong University of Science and Technology and the Wuhan National High Magnetic Field Center, attributes its large TMR to two features. The first is the intrinsic properties of Fe3GaTe2, which is ferromagnetic above room temperature. “For quite a few years we have investigated the magnetoresistance of a number of van der Waals ferromagnet/semiconductor junctions in which the Curie temperature (the temperature above which a permanent magnet loses its magnetism) of the ferromagnet is far below room temperature,” he notes. “We found that large magnetoresistance and efficient spin injection can only be achieved in the nonlinear transport regime of ferromagnet/semiconductor junctions.”
In contrast to the materials Wang and colleagues investigated previously, Fe3GaTe2 (which the team discovered relatively recently) has a Curie temperature of more than 380 K. Its magnetic anisotropy is also comparable to (or even better than) that of CoFeB, a ferrimagnet widely employed in spintronics. (Unlike ferromagnets, where neighbouring magnetic moments are parallel to each other, in ferrimagnets the moments are anti-parallel but unequal in magnitude, yielding a residual spontaneous magnetism.) Importantly, Fe3GaTe2 and CoFeB both have highly polarized Fermi surfaces (the boundary between occupied and unoccupied electron energy states that defines many of the properties of metals and semiconductors), a property that has allowed CoFeB to serve as a source of highly spin-polarized electrons at room temperature.
A better spacer and device design
The second factor in the new device’s success, Wang says, is the high quality of the WSe2 barrier. “We discovered that using Fe3GaTe2 on its own is not enough and that we could only achieve a small room-temperature magnetoresistance (of around 0.3%) in one type of all-vdW spin-valves using a MoS2 spacer,” he explains. “We realized we needed a much better spacer and device design that allowed for highly efficient electron tunnelling.”
Wang says the team’s work confirms that very large TMRs can be achieved at room temperature in all-vdW heterostructures, which he describes as a crucial step towards 2D spintronics applications. “Beyond that, the highly efficient spin injection into semiconductors could allow us to investigate semiconductor spin physics and develop new concept semiconductor spintronic devices,” he says.
Spurred on by their results, the researchers are now busy adjusting the thickness of the spacer layer in an attempt to further increase the TMR. One promising avenue they are exploring is to use the wide-bandgap semiconductor gallium selenide (GaSe) or the insulator hexagonal boron nitride (hBN) as a spacer material.
Lightning flashes have distinctive zig-zag shapes and physicists have long wondered why. Now, John Lowke and Endre Szili at the University of South Australia have done calculations that could explain this behaviour.
The duo created a model that describes the unusual propagation of “lightning leaders” – channels of ionized air – that connect thunderclouds to the ground. They propose that the zig-zag steps are associated with highly excited, metastable oxygen molecules, which make it far easier for electrical current to flow through the air.
Lightning appears to propagate in a series of steps that involve leaders, which are tens of metres long and originate from thunderclouds. A leader will light up for about 1 µs as current flows, creating a step. Then the channel will darken for tens of microseconds, followed by the formation of the next luminous step at the end of the previous leader – sometimes with branching occurring. This process repeats to create a familiar jagged lightning-bolt shape. A curious aspect of this process is that once a step has lit up and darkened, it does not light up again – despite being part of the conducting column.
This stepping is known to be responsible for the distinctive zig-zag patterns found in lightning streaks, but there are several unanswered questions about the physics behind this phenomenon. In particular, the nature of the dark yet conductive columns connecting leaders to thunderclouds has largely remained a mystery.
Singlet delta oxygen
In their study, Lowke and Szili calculate that the stepping behaviour could be connected to an accumulation of highly excited oxygen molecules called “singlet delta metastable oxygen”. These molecules have a radiative lifetime of roughly one hour, and cause electrons to detach from negative oxygen ions – enhancing the conductivity of the air surrounding them.
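In schematic form, the detachment process often invoked here can be written as O2⁻ + O2(a¹Δg) → 2O2 + e⁻: a singlet delta oxygen molecule strips the extra electron from a negative oxygen ion, returning it to the population of free electrons that carry the discharge current.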
The duo suggests that the time between successive steps corresponds to the time required for sufficient concentrations of the metastable molecules to accumulate at leader tips. This increases the electric field at the tip, making further ionization possible in the next step. In addition, the researchers propose that high concentrations of singlet delta oxygen should endure in earlier steps, allowing these steps to maintain their electrical conductivity, even without a sustaining electric field.
Lowke and Szili hope that a better understanding of this process could lead to new techniques and stronger regulations for protecting buildings from lightning strikes. This could minimize the economic and environmental damage caused by lightning while reducing the threat to life and limb.
Quantum sensors based on microscopic flaws in the crystalline structure of diamond can work at pressures as high as 140 gigapascals, according to research by physicists at the Chinese Academy of Sciences in Beijing. The finding sets a record for the operating pressure of quantum sensors based on so-called nitrogen vacancy (NV) centres, and their newfound durability could benefit studies in condensed-matter physics and geophysics.
NV centres occur when two neighbouring carbon atoms in diamond are replaced by a nitrogen atom and an empty lattice site. They act like tiny quantum magnets with different spins, and when excited with laser pulses, the fluorescent signal they emit can be used to monitor slight changes in the magnetic properties of a nearby sample of material. This is because the intensity of the emitted NV centre signal changes with the local magnetic field.
The problem is that such sensors are fragile and tend not to work in harsh conditions. This makes it difficult to use them for studying the Earth’s interior, where gigapascal (GPa) pressures prevail, or investigating materials like hydride superconductors, which are fabricated at very high pressures.
Optically detected magnetic resonance
In the new work, a team led by Gang-Qin Liu of the Beijing National Laboratory for Condensed Matter Physics and Institute of Physics, Chinese Academy of Sciences, began by creating a microscopic high-pressure chamber known as a diamond anvil cell in which to place their sensors, which consisted of microdiamonds that contain an ensemble of NV centres. Sensors of this type work thanks to a technique called optically detected magnetic resonance (ODMR) in which the sample is first excited using a laser (in this case with a wavelength of 532 nm) and then manipulated via microwave pulses. The researchers applied the microwave pulses using a thin platinum wire, which is robust to high pressures. The final step is to measure the emitted fluorescence.
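In the simplest picture, the two ground-state spin resonances of an NV centre sit at frequencies of roughly D ± γB, where D ≈ 2.87 GHz is the zero-field splitting (which itself shifts under pressure and strain), γ ≈ 28 GHz/T is the NV gyromagnetic ratio and B is the magnetic field along the NV axis. Tracking how these resonance dips move in the ODMR spectrum therefore reveals both the local field and changes in the pressure environment.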
“In our experiment, we first measured the photoluminescence of the NV centres under different pressures,” explains Liu. “We observed fluorescence at nearly 100 GPa, an unexpected result that led us to perform subsequent ODMR measurements.”
A large ensemble of NV centres in one spot
While the result was something of a surprise, Liu notes that the diamond lattice is very stable and undergoes no phase transition, even at pressures of 100 GPa (1 Mbar, or nearly 1 million times Earth’s atmospheric pressure at sea level). And while such high pressures do modify the energy levels and optical properties of NV centres, the modification rate slows down at higher pressures, allowing the fluorescence to persist. Even so, he tells Physics World it was “no easy task” to obtain ODMR spectra at Mbar pressures.
“There are many technical challenges we have to overcome,” he says. “One in particular is that high pressures decrease the NV fluorescence signal and bring extra background fluorescence.”
The researchers overcame these problems by using a large ensemble of NV centres (about 5 × 10⁵ in a single microdiamond) and optimizing the light-collection efficiency of their experimental system. But their worries did not end there. They also needed to avoid a large pressure gradient over the sensor, as any inhomogeneity in the pressure distribution would have broadened the ODMR spectra and degraded the signal contrast.
“To meet this challenge, we chose potassium bromide (KBr) as the pressure medium and confined the detection volume to about 1 µm³,” says Liu. “We were able to obtain ODMR of NV centres at nearly 140 GPa using this approach.”
The maximum pressure may be even higher, he adds, since the pressure-induced modifications of the energy levels in NV centres turned out to be smaller than expected. “The key challenge to achieve this goal is to produce high pressures with small or no pressure gradient,” Liu says. “This might be possible using noble gas as the pressure-transmitting medium.”
According to Liu and colleagues, these experiments show that NV centres could be used as in situ quantum sensors for studying the magnetic properties of materials at Mbar pressures. One example might be to probe the Meissner effect (magnetic field exclusion) in LaH10, a high-temperature superconductor that can only be synthesized at pressures above 160 GPa.
The researchers now plan to optimize their sensors and determine their high-pressure limit. They also hope to improve their magnetic sensitivity (by optimizing the fluorescence collection efficiency) and develop multi-modal sensing schemes – for example, measuring temperature and magnetic field simultaneously.
Ince talks about his circular journey with science: from enjoying it as a child, to feeling disengaged as a young adult, to now building his entire creative output around his fascination with the natural world. In an entertaining conversation, he also discusses the importance of critical thinking and how he longs for a society that celebrates the beauty of uncertainty.
Also in the episode, Physics World editors discuss some of the books reviewed in the latest issue of the magazine.
Hypoxia imaging White light, prompt and delayed fluorescence images of a pancreatic tumour on a mouse. The ratio of delayed to prompt fluorescence shows improved contrast, enabling accurate distinction between hypoxic and healthy tissue. (Courtesy: CC BY 4.0/J. Biomed. Opt. 10.1117/1.JBO.27.10.106005)
Surgical resection of cancerous tissue is a common treatment used to reduce the likelihood of cancer spreading to healthy tissues. However, the efficacy of such surgery strongly depends on the surgeon’s ability to distinguish between cancerous and healthy tissue.
It is known that the metabolic activities of cancerous and healthy tissue differ significantly: cancerous tissues often have chaotic blood flow combined with low levels of oxygen, or hypoxia. With hypoxic regions common in cancerous tissue, accurate identification of hypoxia could help to differentiate cancerous from healthy tissue during surgery.
When fluorescent probes are excited by light, they subsequently return to the ground state, emitting light at a different energy. Immediately upon illumination, the probes emit a short burst of light known as prompt fluorescence. Some probes can also produce a delayed fluorescence signal some time after illumination.
Although both prompt and delayed fluorescence signals decay over time, the prompt fluorescence signal decays rapidly in comparison with the prolonged decay of delayed fluorescence. The delayed fluorescence signal decay can be observed and analysed further to better understand the metabolic activity of nearby tissue.
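Schematically, the detected signal can be pictured as the sum of two decaying components, I(t) ≈ A_p exp(−t/τ_p) + A_d exp(−t/τ_d), where the prompt lifetime τ_p is very short while the delayed lifetime τ_d is orders of magnitude longer; waiting until the prompt term has died away therefore isolates the oxygen-sensitive delayed component.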
Real-time oxygenation assessment
First author Arthur Petusseau and colleagues utilized an optical imaging system to monitor light emitted by the endogenous molecular probe protoporphyrin IX (PpIX) in a mouse model of pancreatic cancer where hypoxic regions are present.
Prize winning work Arthur Petusseau won second place at the 2022 Collegiate Inventors Competition for his hypoxia imaging research. (Courtesy: Arthur Petusseau)
The researchers administered the PpIX as either a topical ointment or via injection to the animal’s side flank and generated fluorescence using a 635 nm modulated laser diode as the excitation source. They found that the ratio of delayed to prompt fluorescence was inversely proportional to the local oxygen partial pressure in tissue.
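Put simply, if the delayed-to-prompt ratio R is inversely proportional to the oxygen partial pressure, then pO2 can be estimated pixel-by-pixel as pO2 ≈ k/R, where k is a calibration constant determined from tissue of known oxygenation. And because both signals originate from the same PpIX molecules, the estimate is largely insensitive to how much probe has accumulated at each point.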
The weak intensity of the delayed fluorescence signal makes it technically challenging to detect. To overcome this, the researchers utilized a time-gated imaging system that enables sequential monitoring of the fluorescence signal within small time windows only. This enabled them to reduce the detection of background noise and accurately monitor changes in the delayed fluorescence signal.
Further analysis showed that the delayed fluorescence signal acquired from cancerous hypoxic cells was five-fold greater than that obtained from healthy, well-oxygenated tissue. The team also found that the delayed fluorescence signal could be amplified further by tissue palpation (applying pressure to the skin during physical examination), which enhances the transient hypoxia and increases the temporal contrast between the two signals.
“Because most tumours have micro regional hypoxia present, imaging hypoxia signals from PpIX delayed fluorescence allows for excellent contrast between normal tissue and tumours,” says Petusseau.
The researchers conclude that monitoring delayed fluorescence arising due to the unique emissions of the PpIX fluorescent probe in the presence of hypoxia has several benefits in distinguishing between healthy and cancerous tissue during surgery. “Acquiring both prompt and delayed fluorescence in a rapid sequential cycle allowed for imaging oxygen levels in a way that was independent of the PpIX concentration,” they say.
“The simple technology required, and the fast frame rate capability coupled with the low toxicity of PpIX make this mechanism for contrast translatable to humans. It could easily be used in the future as an intrinsic contrast mechanism for oncologic surgical guidance,” claims Petusseau.
As science and technology continue to break new ground, there is growing demand for cleaner and more controllable vacuum environments that can be tailored to the needs of each application. Whether for precise experiments in quantum physics or the mass manufacture of computer chips, scientists and engineers are looking for high-performance equipment that can achieve ultrahigh vacuum (UHV) or extreme high vacuum (XHV) conditions while also working within the constraints of their application.
While vacuum systems made from stainless steel continue to be the technology of choice for most processes, specialized applications that require UHV or XHV conditions can benefit from the properties offered by alternative materials such as aluminium and titanium. In research centres with particle accelerators, for example, aluminium has become popular for beamline systems because it dissipates radiation more efficiently than stainless steel. It also retains less residual magnetism, minimizing any possible influence on the strong magnetic fields used to steer the beam.
“More scientists and engineers are seeing the benefits of using aluminium and titanium for their UHV or XHV processes,” comments Tom Bogdan, vice-president for business development at ANCORP, a US-based manufacturer of vacuum chambers, valves and components. “Large-scale scientific facilities and the R&D community offer rich environments for these advanced technologies, while the commercial sector is also starting to use aluminium to improve the process conditions for high-precision manufacturing.”
Bimetal bond: The explosion bonding process forms a wave-shaped join between two metals that is capable of withstanding XHV conditions. (Courtesy: LOS Vacuum Products)
ANCORP designs and manufactures its own line of vacuum equipment, and has also established a dedicated facility for constructing customized chambers from stainless steel. Now the company has formed a partnership with LOS Vacuum Products, which specializes in fabricating vacuum hardware from aluminium and titanium, to enable its customers to exploit these high-performance materials in their UHV and XHV processes. “This is a great partnership between two companies who are focused on delivering high-performance vacuum solutions for their customers,” comments Bogdan. “LOS Vacuum will benefit from our ability to make connections with the global market, while we gain from adding their unique technology to our product portfolio.”
LOS Vacuum Products was set up in 2013 to design and build bespoke vacuum chambers for UHV and XHV applications. “Aluminium and titanium are becoming more popular to meet the growing requirements for cleaner and more precise technology development,” says Eric Jones, the company’s founder and owner. While the initial demand originated mostly from the research community, Jones reports growing interest from equipment manufacturers targeting the semiconductor sector, as well as emerging markets for medical systems and solar-cell production. “As those technologies grow the vacuum environment becomes critically important,” he says.
One key advantage of aluminium is that it is quicker and easier to machine than stainless steel, and so offers more flexibility for incorporating bespoke features into the design. Its superior thermal conductivity also allows an aluminium chamber to heat up faster and more uniformly, which speeds up the bake-out process needed to achieve UHV or XHV conditions. “Stainless steel needs to be much hotter to desorb the gas molecules and contaminants from the surface of the vacuum chamber, and that requires more energy over a longer period of time,” explains Jones. “Aluminium reduces both the cost-of-ownership and the environmental impact, which combined with its enhanced manufacturability makes it an attractive option for the semiconductor sector.”
Bespoke fabrication: This bimetal transition fitting, which combines stainless steel with aluminium, has a diameter exceeding 60 cm. A range of bespoke components can be made to meet the needs of different applications. (Courtesy: LOS Vacuum Products)
Meanwhile, vacuum chambers made from titanium offer a better option for experiments in quantum physics, since their extra strength and weight provide more stability for processes that involve the generation of harmonics, and are also favoured for applications where it is essential to eliminate any magnetic signals. Titanium also acts as a getter for absorbing hydrogen – a common contaminant when using stainless steel in UHV or XHV environments – which enables titanium vacuum systems to support XHV conditions down to around 10⁻¹³ Torr.
Whether using aluminium or titanium, the best process conditions are achieved by using the same metal in the fixtures and fittings used to interface with the vacuum system. That includes the conflat flanges that are widely used to ensure a leak-tight seal in UHV and XHV environments, which work by pressing two hard metal faces machined with knife edges into a softer metal gasket. This causes the softer metal to flow and fill any microscopic imperfections on the hard metal faces, creating a seal that can withstand extreme temperatures and pressures down to the XHV regime.
ANCORP already manufactures conflat flanges made entirely of stainless steel, while LOS Vacuum produces all-titanium versions as well as several models that combine an aluminium body with faces made of stainless steel or titanium. “The specialist components fabricated by LOS Vacuum allow us to offer a unique solution for customers who have adopted aluminium or titanium for their vacuum systems,” comments Bogdan. “We have seen customers who have resorted to using double O-rings for their enclosures, but a metal-to-metal seal reduces outgassing and results in a better process.”
The bimetal components are fabricated using a technique called explosion bonding, a solid-state welding process that produces a strong mechanical bond just a few microns thick. An explosive charge forces the metals together at extremely high pressures, causing the atomic layers close to the two surfaces to become a plasma. As the metals collide a jet of plasma is propelled along the surfaces that scrubs them clean of any impurities, while the fluid-like behaviour of the metals creates a wave-shaped join that is strong enough to withstand UHV and XHV conditions.
ANCORP now supplies a standard line of bimetal flanges and fittings produced by LOS Vacuum, while the explosion bonding process also enables bespoke components to be fabricated from any two dissimilar metals. Two of the standard configurations combine an aluminium base with faces made from different grades of stainless steel, while another joins a titanium-faced flange to an aluminium body. This second version has the advantage of eliminating any trace magnetism and avoiding the safety hazards caused by background radiation, and can even be more cost effective than a flange faced with stainless steel. “The raw material may be more expensive but a bimetal flange made from stainless steel requires more steps to fabricate,” comments Jones. “For titanium the bonding process is less complex and less expensive.”
Jones and his team have also been able to eliminate one of the interlayer materials that are usually needed to transition from one metal to another. Their process removes the need for copper, which semiconductor manufacturers are particularly keen to avoid. “That’s now part of the standard product line,” says Jones. “It cuts the cost of materials and manufacture, and reduces the possibility of a leak pathway through the flange.”
Added options: Products available from ANCORP through the partnership with LOS Vacuum include all-titanium vacuum chambers, bimetal face-gland fittings, and bimetal flanges. (Courtesy: LOS Vacuum Products)
As part of the partnership ANCORP will also extend its existing custom fabrication capabilities to design and supply bespoke vacuum chambers made from titanium or aluminium. During the initial design phase the company works closely with its customers to understand their specific requirements and to recommend the best technology for their application. “If a customer has a particularly unusual or demanding process that would benefit from using aluminium or titanium, we’ll get Eric and his team involved to provide some specialist expertise,” says Bogdan. “As well as tailoring the design to the application, we need to ensure that the team at LOS Vacuum can manufacture the solution to the required parameters.”
Bogdan is confident that adding these specialized capabilities to ANCORP’s technology offering will help to open up new markets in the R&D sector as well as in semiconductor manufacture. “These low-outgassing solutions can deliver a real process advantage in some applications,” he says. “We want to make these options available to more customers in the international scientific community, as well as in the commercial sector.”
For Jones, meanwhile, the partnership with ANCORP offers a way to expose the company’s specialist fabrication techniques to a much larger customer base. “We’re still a small company that has focused on delivering unique solutions for specific projects, and we don’t have a lot of power for making connections with new customers,” he comments. “The partnership with ANCORP will enable us to bring our product range and technical expertise to the global market.”
“It’s probably fair to say that no single geological phenomenon has resonated in so many communities, with so many people, and in so many different ways,” says the British Geological Survey on the topic of the Anthropocene – the proposed geological epoch in which human activity is considered to be the dominant influence on the Earth’s climate, environment and ecology. “The discovery of plate tectonics was the last time geology made a significant change to our understanding of how the Earth works. The Anthropocene is no less significant.”
Significant though the concept may be, it is still rather nebulous. Indeed, the term “the Anthropocene” has permeated through to other disciplines; graced the May 2011 cover of the Economist magazine; and even inspired the work of numerous artists. But members of the earth sciences community (specifically stratigraphers, who study the way sedimentary rocks are layered) are still debating exactly when this new geological period dominated by human activity is supposed to have begun.
Many of these changes will persist for millennia or longer, and are altering the trajectory of the Earth system, some with permanent effect
International Commission on Stratigraphy's Anthropocene Working Group
The Subcommission on Quaternary Stratigraphy (SQS) – which is a constituent body of the International Commission on Stratigraphy (ICS), the largest scientific organization in the International Union of Geological Sciences – has been developing a formal definition of the Anthropocene. According to the SQS, there are various phenomena associated with this new epoch, including an increase in erosion and the movement of sediments associated with urbanization and agriculture. Unprecedented new materials, such as concrete, fly ash (a coal combustion product) and plastics, are now part of the sedimentary record. The impact also extends to ecosystem changes such as rampant deforestation and the biodiversity losses that have led some to assert that we are in the middle of a sixth great mass extinction.
“Many of these changes will persist for millennia or longer, and are altering the trajectory of the Earth system, some with permanent effect,” the SQS’s Anthropocene Working Group (AWG) wrote in its 2019 report on the definition of this epoch, published on the SQS website. “They are being reflected in a distinctive body of geological strata now accumulating, with potential to be preserved into the far future.”
The Anthropocene begins, etymologically speaking
The roots of the Anthropocene concept can be traced back to at least as early as 1873, and the writings of the Italian geologist Antonio Stoppani. A former Catholic priest, Stoppani regarded “the creation of man” as “the introduction of a new element into nature, of a force wholly unknown to earlier periods”. Given this, he argued, an “Anthropozoic era” had begun “with the first trace of man” and would end only when Earth had “escape[d] the hands of man…thoroughly and deeply carved by his prints”.
Four years later, American geologist Joseph Le Conte of the University of California Berkeley took this a step further, arguing that humankind had become the “chief agent of change”. Based on the notion that previous ages were “reigns of brute force and animal ferocity”, while the age of man was “characterized by the reign of mind”, he proposed the “Psychozoic era”. In a similar fashion, the Russian geochemist Vladimir Vernadsky suggested in 1926 that the biosphere had become the “Noosphere”, with nous being the Greek for “mind” and “reason”.
1 Pathway to the past An illustration of the “geological time spiral” – a representation of eons (outermost ring), periods (middle ring) and epochs (innermost ring) of planetary history. The formal geological time scale (GTS) is a representation of time based on the rock record of Earth. It is a system of chronological dating that uses chronostratigraphy (the process of relating strata to time) and geochronology (a branch of geology that aims to determine the age of rocks). It is used to describe the timing and relationships of events in geological history, from the longest scale of “eon” – several hundred million years. The other scales are “era” – tens to hundreds of millions of years; “period” – millions of years to tens of millions of years; “epoch” – hundreds of thousands of years to tens of millions of years; “subepoch” – thousands of years to millions of years; and “age” – thousands of years to millions of years. The current epoch, the Holocene, began 11,700 years ago, in 9700 BCE, around the dawn of agriculture. (Courtesy: Shutterstock/Nicolas Primola)
Around the same time, in 1922, fellow Russian Alexei Petrovich Pavlov proposed that the present geological era be dubbed the “Anthropogene”. The prefix “anthropo-”, meaning “human”, referred to how the era would be defined based on the emergence of the genus Homo – making it roughly equivalent to the “Quaternary”, the period covering the last 2.6 million years – while “-gene” is the established suffix for a geological period. Despite being adopted in Soviet circles, and being formally approved by the Soviet Union’s Interdepartmental Stratigraphic Committee in 1963, the term never caught on elsewhere.
The first use of “Anthropocene”, meanwhile, is credited to the American limnologist Eugene Stoermer, who used the term informally in the late 1980s to refer to the impact and evidence of human activity on the planet. It would not be until the year 2000, however, that the term truly achieved scientific popularity after being re-coined by the Nobel-prize-winning atmospheric chemist Paul Crutzen, during a meeting of the International Geosphere–Biosphere Programme (IGBP) in Cuernavaca, Mexico.
According to an account by meeting organizer and chemist Will Steffen – amid a series of presentations describing various significant and geologically recent changes in Earth’s atmosphere, oceans and sediments – Crutzen became agitated by repeated references to the Holocene, the geological epoch covering the last 11,700 years. Bringing proceedings to a screeching halt, Crutzen remonstrated that they were no longer in the Holocene and, on the spot, independently of Stoermer, cooked up “Anthropocene” to describe the current epoch.
A few months later, Crutzen – now in collaboration with Stoermer – wrote a short article in the IGBP newsletter expanding on the concept (Global Change Newsletter May 2000 p17). Detailing many of the ways in which humans have affected the Earth, from the transformation of natural landscapes to the hole in the ozone layer, the pair wrote that “considering these and many other major and still growing impacts of human activities on Earth and atmosphere, it seems to us more than appropriate to emphasize the central role of mankind in geology and ecology by proposing to use the term ‘Anthropocene’ for the current geological epoch”.
The Anthropocene begins, geologically speaking
The key difference between Crutzen’s conception of the Anthropocene and its predecessors is that while the latter imagined humanity’s influence as an incremental one, the former saw something more abrupt.
As science historian Jacques Grinevald, formerly of the University of Geneva, and his colleague Clive Hamilton, a public ethics expert from Australia’s Charles Sturt University, put it in a 2015 paper (The Anthropocene Review 2 59), “The Anthropocene represents, according to those who initially put it forward, a dangerous shift, and a radical rupture in Earth history. This means that the Holocene can be no guide to the Anthropocene geologically or intellectually.” This is, of course, rather in keeping with the way in which previously established divisions of geological time are defined.
Physical record The “golden spike” (bronze disc in the lower section of the rock) of the Global Boundary Stratotype Section and Point (GSSP) for the Ediacaran period, located in the Flinders Ranges in South Australia. (CC BY-SA 3.0/Peter Neaum)
Geologists mostly use two approaches to divide Earth’s history. The first is geochronological, based on a combination of absolute and relative dating: absolute dating involves analysing radioactive isotopes, while relative dating uses tools such as records of palaeomagnetism and stable isotope ratios. The second is chronostratigraphic, based on the physical succession of rock layers. Geologists divide time into spans of varying size, with eons being the largest, then eras, periods, epochs and finally “ages” as the smallest. Each span has an equivalent unit in the stratigraphic record – ages of time, for example, match up to successions of rock strata called “stages”.
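The principle behind radiometric (absolute) dating can be summarized in one line: for a parent isotope that decays with decay constant λ, and assuming the sample started with none of the daughter product, the age follows from t = (1/λ) ln(1 + D/P), where P and D are the present-day amounts of parent and daughter isotope. With systems such as uranium–lead or potassium–argon, this yields ages ranging from millions to billions of years.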
The lower boundary of each stage in the rock record across the globe is established with respect to a single example in the field. Established by the ICS, these paragons are formally (and verbosely) known as the Global Boundary Stratotype Sections and Points (GSSP). However, geologists commonly refer to them as “golden spikes”, after the metal pins driven into the rocks to mark the precise location of the boundary. Most, but not all, of these boundaries are defined by changes in the fossil record.
So, for example, the start of the Sinemurian stage 199.3 million years ago is marked by the first appearance of two species of ammonite in the rock record, with the golden spike located in a rock outcrop near East Quantoxhead in Somerset, UK. The divide between the Maastrichtian and Danian stages 66 million years ago is marked by a section at El Kef, Tunisia, containing the iridium layer created by the asteroid impact thought to have triggered the mass extinction that killed the dinosaurs.
Spiking up debate
So where in the geological record should the Anthropocene begin? This is the thorny question that members of the AWG have been grappling with since 2008. Indeed, as AWG convener and palaeobiologist Jan Zalasiewicz once joked, “There have been suggestions that our own work is proceeding on a geological timescale!”
In Crutzen’s original conception, he proposed that the Anthropocene might start in 1784, during the Industrial Revolution. Writing in a 2002 paper (Nature 415 23), he justified this as “when analyses of air trapped in polar ice showed the beginning of growing global concentrations of carbon dioxide and methane” – a date, he noted, that coincides with the refinement of the steam engine by the Scottish engineer James Watt.
However, various alternative proposals have been made, the earliest being the advent of agriculture some 12,000 years ago. Other suggestions include 1492, when the “Columbian exchange” – referring to Italian explorer Christopher Columbus’ 1492 transatlantic voyage, and the related European colonization and global trade that followed – saw the widespread transfer of animals across the continents. Another is 1610, which saw a drop in carbon-dioxide levels associated with the arrival of imperial colonists in the Americas (European diseases having devastated local populations, leading to forest growth across abandoned settlements).
The AWG, meanwhile, has zeroed in on the “Great Acceleration” of the mid-20th century, when various socioeconomic and Earth system trends began to increase dramatically as a result of “population growth, industrialization and globalization”. According to Zalasiewicz and colleagues, the acceleration has left an array of proxy signals in the geological record, “the sharpest and most globally synchronous of [which] is made by the artificial radionuclides spread worldwide by the thermonuclear bomb tests from the early 1950s”.
2 Time question Interactions between humans and the environment, and their impact on global processes such as climate, are spread over different times and places, making it difficult to determine the onset of the Anthropocene in the global stratigraphical record. This illustration depicts the complexities involved in coming up with a formal definition of the Anthropocene as an epoch. The illustration shows both a geological timeline (top) and a historical timeline (bottom), with colour densities broadly indicating the intensity of change. The Anthropocene Working Group (referred to as the “AWG view”) has proposed the “Great Acceleration” of the mid-20th century as the start date for the Anthropocene epoch. But geologist Philip Gibbard and colleagues suggest that it may be more practical to categorize the Anthropocene as a geological event, rather than an epoch. A geological event is defined as a temporally and spatially heterogeneous, diachronous happening in Earth history that impacts the Earth system and the formation of geological strata. (Adapted from Journal of Quaternary Science 37 395. Used with permission of John Wiley & Sons)
In 2016 the AWG presented the preliminary summary of its analysis at the 35th International Geological Congress, where a majority voted to consider the Anthropocene as “stratigraphically real”. It recommended a move to formalize the epoch with a suitable GSSP located in the mid-20th century.
At present, 12 candidate sites for the Anthropocene Golden Spike are being considered. They include coral reefs off the coast of Australia and the Gulf of Mexico; a peatland in Poland; an ice core on the Antarctic Peninsula; and a series of anthropogenic sediments in Vienna. Even once this list has been whittled down to one, the process of ratification is elaborate, and involves the proposal being approved at various stages before final evaluation by the International Union of Geological Sciences.
A turn of events?
As Zalasiewicz himself has noted, there is no guarantee that the working group’s recommendation for an Anthropocene epoch will ultimately be accepted. In fact, there are some scientists who believe that there may be better ways to conceptualize it.
Most of humanity’s impact on the environment has been diachronous, occurring at different times in different places
The leader of this charge, ironically enough, is Quaternary geologist Philip Gibbard of the University of Cambridge, UK, who was responsible for creating the AWG in the first place; as he jokes, “I’ve only myself to blame!” In a series of papers – the latest published in the Journal of Quaternary Science (JQS 37 1188) – Gibbard and his colleagues argue that while the definition of an epoch calls for a boundary that occurred at a fixed point in time across the globe, most of humanity’s impact on the environment has been diachronous, occurring at different times in different places.
Instead, they propose, it might be better to sidestep the problem of setting a single start date for the Anthropocene and instead classify it less formally as an “event”. It would run in parallel to the geological timescale and encapsulate all of humanity’s diverse effects and their complex spatial and temporal variations. In this way, they argue, the Anthropocene would be seen more like other so-called diachronous transformations in Earth’s history that occurred on different geological timescales, from place to place.
These include the Great Oxidation Event of 2.4–2.0 billion years ago, which saw oxygen accumulate in the atmosphere after the evolution of photosynthesis; or the explosion of marine life over some 25 million years during the Great Ordovician Biodiversification Event.
Epoch, event or episode?
To proponents of the epoch approach, Gibbard and colleagues’ proposal is perhaps too radical a departure from the established protocol they are following. Some have also suggested that “episode” would be a more appropriate name than “event” for such a long-lasting phenomenon – although this does not seem to negate the concept.
AWG member and stratigrapher Martin Head of Canada’s Brock University, for example, said that the group “is defining the base of the Anthropocene as a formal epoch using geological event stratigraphy. This is standard procedure when defining units of the International Geological Time Scale”. Event stratigraphy is the study of the traces of (geologically) short-lived events – lasting up to thousands of years – in the sedimentary record.
Technofossils Human-made artefacts accumulate in deposits such as this mud flat at East Tilbury on the River Thames estuary. This waste is collectively known as “technofossils” because it forms part of the stratigraphic record. (CC BY-SA 4.0/Strat188)
In contrast, Head says, the Anthropocene “event” proposal is “neither strictly geological nor an event”, at least in normal Quaternary usage. “We consider it an interdisciplinary concept that combines elements of geology and the social sciences. It could coexist with a formal Anthropocene epoch if found useful, but conceptually cannot replace it and would need to be renamed to avoid confusion.”
Zalasiewicz agrees, adding “The ‘Anthropocene event’ concept is very different to that of the Anthropocene as a formal epoch of the Geological Time Scale…these are very different concepts, and it invites confusion to apply the same term of ‘Anthropocene’ to both of them. Under a different name, the ‘event’ idea could be potentially complementary to a precisely defined Anthropocene epoch, rather than in conflict with it or as an alternative to it.”
A challenge to this rationale, Gibbard and his colleagues note, is that the term Anthropocene has already taken on a life of its own. “Most of the world is not a geologist though, right? I think, in a lot of ways, the cat’s out of the bag,” says palaeoecologist Jacquelyn Gill of the University of Maine.
She adds, “What an event framework allows us to do is to characterize the Anthropocene in the way that it’s already being used by the broader public, by journalists, by historians, environmentalists, etc – all of whom sort of use the word as almost a metaphor for the suite of activities that humans do that leave a marked impact on the planet.” The issue with setting up the Anthropocene as a time period with a particular global start date, Gill argues, is that it risks trivializing the impacts that humanity had on the Earth before that point.
Not all scholars outside of geology, however, appear to share these concerns. Historian Julia Adeney Thomas of the University of Notre Dame, who has written extensively on the Anthropocene, tells Physics World that the idea of a 50,000-year event doesn’t resonate with the way the concept is seen in history, the social sciences or allied disciplines. In fact, she explains that “what works best for us is a definition of the Anthropocene shared by everyone, and rooted in biostratigraphy and Earth system science.” While we human beings have always had an impact on our environment, she adds, it is only in the last 70 years or so that we have truly transformed the way the Earth system functions.
Ultimately, defining the Anthropocene is less important than engaging with the issues it invokes. “It’s a great conceptual framework, and people are going to continue to use it, regardless of whether we say it’s 1950 or 1750, or whatever it ends up being,” notes Gill. “The reason we name and delineate things in nature is so that we have a common language to ask questions. We’re already doing that about the Anthropocene,” she says. “As it stands right now, the decision about when it starts has no bearing on the questioning that we’re doing. I just hope that this definition doesn’t limit that questioning in the future.”
A new ultrathin photovoltaic cell could be used as a power source for satellites in regions of space that experience high levels of radiation. Developed by researchers from the University of Cambridge in the UK, the device uses a thin layer of gallium arsenide (GaAs) to absorb light and is more robust to proton radiation than thicker devices studied previously.
Cosmic radiation is ionizing radiation made up of a mixture of heavy ions and cosmic rays (high-energy protons, electrons and atomic nuclei). The Earth’s magnetic field protects us from 99.9% of this radiation, and the remaining 0.1% is significantly attenuated by our atmosphere. Spacecraft receive no such protection, however, and radiation can damage or even destroy their onboard electronics.
In solar cells, radiation damage introduces defects into the photovoltaic materials that form the cell’s light-collecting layer. These defects trap the photoactivated charge carriers responsible for generating a flow of electric current across the material, reducing the current and ultimately lowering the power output of the cell.
The further the charge carriers must travel through the solar cell, the more likely they are to encounter a defect and be trapped. Hence, reducing this travel distance means that a smaller fraction of the carriers will become trapped by defects.
One way to do this is to make the solar cells thinner. In the new work, researchers led by Armin Barthel did exactly that, fabricating their cells from a stack of semiconducting materials with a GaAs light-absorbing layer just 80 nm thick.
To test whether this strategy worked, the researchers imitated the effects of cosmic radiation by bombarding the new cell with protons generated at the Dalton Cumbrian Facility in the UK. They then measured the cell’s performance using a combination of time-resolved cathodoluminescence, which measures the extent of radiation damage, and a device known as a compact solar simulator, which determines how well the bombarded devices convert sunlight to power.
Barthel and colleagues found that the lifetimes of charge carriers in their device decreased from around 198 picoseconds (1 ps = 10⁻¹² s) before irradiation to around 6.2 ps afterwards. However, the actual current remained constant up to a certain threshold of proton fluence, beyond which it dropped sharply. The team says this drop correlates with the point at which the carrier lifetime, calculated from cathodoluminescence, becomes comparable to the time it takes for carriers to cross the ultrathin device.
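As a rough illustration (the figures here are ballpark assumptions, not values from the paper): carriers drifting at around 10⁷ cm/s would cross an 80 nm absorber in under a picosecond, so even a lifetime slashed to a few picoseconds still lets most of them escape before being trapped. In a conventional absorber several microns thick, the transit time would instead be tens of picoseconds, far longer than the damaged lifetime, and the current would collapse.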
Power generation in demanding space environments
“The main potential application of the devices studied in this work is for power generation in demanding space environments,” Barthel says. In a study describing the research, published in the Journal of Applied Physics, the researchers suggest that one such environment might be medium Earth orbits, such as the Molniya orbit that passes through the centre of Earth’s proton radiation belt and is used for monitoring and communications at high latitudes. As better-protected low Earth orbits become ever more cluttered, such orbits will become more important.
The orbit of Jupiter’s moon Europa, which is of particular scientific interest in the search for extraterrestrial life, is another example. This moon has one of the most severe radiation environments in the solar system and landing a solar-powered spacecraft there will require highly radiation-tolerant cells.
Although the new cells are primarily designed as a power source for satellites, Barthel tells Physics World that he “does not rule out the idea” of using them to generate power in space for use down here on Earth. He and his colleagues now plan to use what they learnt from this study to further optimize their cells. “So far, we have only looked at one thickness for our ultrathin cells and our results will help us figure out if there is a different thickness that gives a better compromise between radiation tolerance and light absorption,” Barthel explains. “We are also interested in looking at stacking multiple ultrathin cells to improve power output and also trying different material combinations.”