
High-speed camera probes the peeling behaviour of sticky tape

Researchers in France have made new discoveries about the way in which sticky tape peels away from surfaces. Stéphane Santucci and colleagues at the École Normale Supérieure in Lyon used observations from a high-speed camera to explore the complex physics involved in the peeling process on microscopic scales. Their observations have allowed them to identify strict mathematical relationships underlying the process, but questions still remain over the causes of some of its observed properties.

Peeling tape away from a surface is a familiar, yet often frustrating experience; while it remains firmly stuck to the surface at some points, it can peel away too quickly at others. The physics underlying this behaviour has been poorly understood until recently, but studies in 2010 uncovered a characteristic pattern in which peeling repeatedly stops and starts on scales of millimetres. Santucci and colleagues explored the scenario in further detail in 2015, revealing that this macroscopic stick–slip behaviour arises from energy being released close to the front separating the stuck and peeled tape.

Now, Santucci’s team have studied the process in unprecedented detail using a high-speed camera mounted on a microscope, together with an electric motor that precisely controls the velocity and angle of the peel. Their measurements revealed that the longer the tape remains stuck, the larger the distance covered by the subsequent slip, with the two quantities following a strict cube-root relationship. Slip distances also increased linearly with the peeling angle and the stiffness of the tape.
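To illustrate the kind of scaling the team reports, the minimal sketch below fits a power law to synthetic stick–slip data; the numbers are made up for illustration and are not the Lyon group’s measurements.

```python
import numpy as np

# Illustrative only: synthetic stick times (s) and slip distances (m)
# constructed to obey a cube-root law, then recovered by a log-log fit.
t_stick = np.array([1e-4, 4e-4, 1e-3, 5e-3, 1e-2])
d_slip = 2e-3 * t_stick ** (1 / 3)

exponent, _ = np.polyfit(np.log(t_stick), np.log(d_slip), 1)
print(f"fitted exponent: {exponent:.2f}")  # ~0.33, i.e. slip distance ∝ (stick time)^(1/3)
```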

Santucci and colleagues propose that the behaviour arises because both the tape’s adhesive and the point at which it bends build up elastic potential energy during sticking. Over the course of a slip, this stored energy is released at the tape’s separation front in the form of kinetic energy. From these insights, the researchers constructed a theoretical model incorporating the cube-root relationship that underlies this energy budget. Their simulations accurately predicted the peeling behaviour of tapes with a variety of properties.

Intriguingly, the team’s models were able to recreate the waves that propagate across the separation front, perpendicular to the tape, at speeds of up to 900 m/s during a slip. The cause of this behaviour has yet to be explained, but the researchers compare it with the development of cracks in solid materials, since in both scenarios new surfaces are created along propagating lines. In future work, Santucci’s team hope to use their simulations to learn more about this mysterious behaviour.

Full details are reported in Physical Review Letters.

Plasmonics technologies take on global challenges

Naomi Halas from Rice University

In 1861 James Clerk Maxwell rewrote our understanding of light when he began publishing a description of it in terms of electric and magnetic fields leapfrogging each other as they propagate through space. The resulting “Maxwell equations” arguably rank among the most elegant in physics – something it took me a long time to appreciate as I yawned through undergraduate electromagnetism courses wondering whether the topic could ever get interesting. Nonetheless I went on to spend a further three years crunching through Maxwell’s equations to calculate electric-field enhancements around silver nanoparticles while studying “plasmonics” under the guidance of David Richards, professor of physics and vice-dean at King’s College London. At that point I was forced to concede that electromagnetism had genuinely become interesting. So it was with more than a little excitement earlier this week that I walked past the lecture theatres and rooms where Maxwell had researched and taught at my old stomping ground, King’s College London, on my way to hear Naomi Halas, one of the pioneers of the field, talk at an event organised by Anatoly Zayats, now chair of experimental physics and head of the Photonics & Nanotechnology Group at King’s College London (and my former PhD examiner).

In her own words, Halas was working in plasmonics before it even had a name. Hailing from Rice University in the US, she is the Stanley C Moore Professor in electrical and computer engineering, professor of biomedical engineering, chemistry, physics and astronomy, and director of the Laboratory for Nanophotonics. While she laughs off my epithet of “plasmonics pioneer” as we talk over coffee before the lecture, Zayats is quick to add: “Naomi is not only a pioneer in plasmonics but brought the field to real applications.” Her lecture highlights just how far these applications have now reached.

Halas starts by taking us back to a time that even predates Maxwell, to the person Maxwell appealed to for a reference for his appointment at King’s: Michael Faraday. (Zayats has shown Halas and me this very letter, now on display in the physics department.) In “From Faraday to tomorrow” Halas describes how Faraday studied flasks of liquids containing nanoparticles, dye-less solutions with vibrant hues later explained by the theories of Gustav Mie. The colours arise from collective quantized oscillations of electrons in nanoscale structures in response to light, a behaviour of the so-called electron plasma that eventually acquired the name “plasmons”. On resonance, the scattering of light by a plasmonic nanoparticle shapes the incident light field over a region that greatly outsizes the particle itself, and this attribute, together with the ability to tune the resonance with nanoparticle size, composition and shape, has sparked a wide range of new technologies.

As early as 1951, Arthur Aden and Milton Kerker had proposed that more complex core–shell structures could provide additional “control knobs” for tuning these resonances. But theory is one thing, and when Halas began her career in plasmonics in 1990, nanotechnology was practically synonymous with sci-fi. “Our starting point in the field was working out how to create these particles,” she tells attendees. However, as nanosynthesis approaches advanced, ideas for manipulating light with nanoparticles really began to unfurl.

But what can you do with it?

As Halas explains, researchers were quick to see potential in using plasmonic structures to manipulate light for new types of lenses and for sensing chemicals through resonance shifts. However, one of the first characteristics that caught her attention was the potential to excite resonant responses in the near infrared, wavelengths at which the human body is largely transparent. Halas and her colleagues showed that since light at these wavelengths can pass through human tissue, it can excite a nanoparticle at a cancer tumour so that the particle heats up and destroys the cancerous cells. This photothermal cancer therapy is now used in clinics and, thanks to developments in imaging, it has been used with great success to treat prostate cancer.

Halas goes on to explain that the same highly localized electromagnetic field enhancements that can kill a tumour can also be a powerful tool for tackling environmental and resource issues. A billion people around the world lack access to clean water, but technology based on plasmonic nanoparticles can heat water to purify it without the high energy consumption that makes traditional reverse-osmosis purification plants so expensive. Plasmons decay by releasing energetic “hot” electrons, which can catalyse chemical reactions at greatly reduced temperatures and pressures – again saving energy. Production of ammonia alone – a staple chemical widely used in agriculture – is responsible for 5% of the world’s energy consumption, so these nanoparticles have the potential to make a huge difference. Halas also suggests that photoassisted catalysis could provide routes to hydrogen production at sufficiently affordable costs for a more environmentally friendly hydrogen energy economy to finally take off. And developments to exploit plasmon resonances in “commoner” metals such as aluminium – as opposed to the gold and silver predominantly used so far – could make these technologies more sustainable still.

It’s easy to see what is attractive about plasmonics research now, but what first motivated researchers to pursue the field when all there was to go on was Faraday’s flasks of attractively coloured liquids? Zayats, whose research over the years has also shaped the field, points to the unique properties of plasmonic systems: their very strong field confinement and light concentration. “Then you start thinking ‘where can I apply this, where can it be an advantage?’ – and this drives the research forward.”

Plasmonics developments now lie very much at the interface of several disciplines: quantum chemistry for precision production of nanoparticles with the right properties; physics to understand their behaviour; biology, medicine, electronics and catalysis to understand what this might mean for a particular application; and of course reams of regulatory and business know-how if any of these technologies are to reach the marketplace. “It is inherently very multidisciplinary,” says Halas. “In every field you have the challenge of getting people from very different backgrounds, educations and orientations to learn to work together – I think that is the big science challenge, a big science human challenge.”

Melting polar ice sheets will alter weather

The global weather is about to get worse. As the polar ice sheets melt, rainfall and windstorms could become more violent, while hot spells and ice storms could become more extreme.

This is because the ice sheets of Greenland and Antarctica are melting, disrupting what were once stable ocean currents and airflow patterns around the globe.

Planetary surface temperatures could rise by 3°C or even 4°C by the end of the century. Global sea levels will rise in ways that would “enhance global temperature variability”, but the rise might not be as high as earlier studies have predicted. That is because the ice cliffs of Antarctica might not be at such great risk of the disastrous collapse that would send the glaciers behind them accelerating to the sea.

The latest revision of evidence from the melting ice sheets in two hemispheres – and there is plenty of evidence that melting is happening at ever greater rates – is based on two studies of what could happen to the world’s greatest reservoirs of frozen freshwater if nations pursue current policies, fossil fuel combustion continues to increase, and global average temperatures creep up to unprecedented levels.

“Under current global government policies, we are heading towards 3 or 4 degrees of warming above pre-industrial levels, causing a significant amount of melt water from the Greenland and Antarctic ice sheets to enter Earth’s oceans. According to our models, this melt water will cause significant disruptions to ocean currents and change levels of warming around the world,” said Nick Golledge, a south polar researcher at Victoria University, in New Zealand.

He and colleagues from Canada, the US, Germany and the UK report in Nature that they matched satellite observations of what is happening to the ice sheets with detailed simulations of the complex effects of melting over time, taking into account the human response so far to warnings of climate change.

In Paris in 2015, leaders from 195 nations vowed to contain global warming to “well below” an average rise of 2°C by 2100. But promises have yet to become concerted and coherent action, and researchers warn that on present policies, a 3°C rise seems inevitable.

Sea levels have already risen by about 14 cm in the last century: the worst scenarios have proposed a devastating rise of 130 cm by 2100. The fastest rate of sea-level rise is likely to occur between 2065 and 2075.

Gulf Stream weakens

As warmer melt water enters the North Atlantic, the major ocean current known as the Gulf Stream is likely to weaken. Air temperatures are likely to rise over eastern Canada, Central America and the high Arctic. Northwestern Europe – scientists have been warning of this for years – will become cooler.

In the Antarctic, a lens of warm fresh water will form over the ocean surface, allowing upwelling warm water to spread and potentially drive further Antarctic melting.

But how bad this could be is re-examined in a second, companion paper in Nature. Tamsin Edwards, now at King’s College London, Dr Golledge and others took a fresh look at an old scare: that the vast cliffs of ice – some of them 100 metres above sea level – around the Antarctic could become unstable and collapse, accelerating the retreat of the ice behind them.

They used geophysical techniques to analyse dramatic episodes of ice loss that must have happened 3 million years ago and 125,000 years ago, and compared these with present patterns of melt. In their calculations, ice-cliff collapse was not needed to explain these past losses, and may not affect the future much either.

Instability less important

“We’ve shown that ice-cliff instability doesn’t appear to be an essential mechanism in reproducing past sea level changes and so this suggests ‘the jury’s still out’ when it comes to including it in future predictions,” said Dr Edwards.

“Even if we do include ice-cliff instability, our more thorough assessment shows the most likely contribution to sea level rise would be less than half a metre by 2100.”

At worst, there is a one-in-20 chance that enough of Antarctica’s glacial burden will melt to raise sea levels by 39 cm. More likely, both studies conclude, under high levels of greenhouse gas concentrations, south polar ice will melt enough to raise sea levels worldwide by only about 15 cm.

Gravitational waves could resolve Hubble constant debate

Simulations by an international team of researchers have shown that new measurements of gravitational waves could finally resolve the discrepancy in Hubble’s constant reported using different measurement techniques. Accumulating gravitational-wave signals from the mergers of 50 binary neutron stars, the scientists found, will yield the most accurate value of the constant to date – which would not only settle the debate but also reveal whether there are issues with the current standard cosmological model.

The Hubble constant represents the rate at which the universe is currently expanding and is vital for calculating both its age and its size. The constant is also widely used in astronomy to help determine the masses and luminosities of stars, the size scales of galaxy clusters, and much more besides. However, two different techniques for estimating the value of Hubble’s constant have yielded very different results.

To measure Hubble’s constant directly, scientists need to know a galaxy’s outward radial velocity and its distance from the Earth. The first of these measurements can be obtained from the galaxy’s spectroscopic redshift, but the distance to the galaxy is more difficult to determine directly.

A common way of estimating distance is to exploit so-called “standard candles” – Cepheid variable stars or type Ia supernovae that have known absolute luminosities. In 2016 the best estimate for the Hubble constant obtained this way was 73.2 km s⁻¹ Mpc⁻¹ – vastly different from the value of 67.8 km s⁻¹ Mpc⁻¹ obtained in the same year by studying the cosmic microwave background (CMB) radiation. The discrepancy is yet to be explained, since the values should agree if the standard cosmological model is correct.
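The tension is easiest to see through the Hubble–Lemaître law itself, v = H0 d. The short sketch below plugs an arbitrary example distance of 100 Mpc into both published values; it is purely illustrative.

```python
# Recession velocity v = H0 * d for an illustrative galaxy 100 Mpc away,
# using the two 2016 estimates quoted above (the distance is arbitrary).
H0_candles = 73.2   # km/s/Mpc, from Cepheids and type Ia supernovae
H0_cmb = 67.8       # km/s/Mpc, from the cosmic microwave background

distance_mpc = 100.0
print(f"standard candles: {H0_candles * distance_mpc:.0f} km/s")
print(f"CMB:              {H0_cmb * distance_mpc:.0f} km/s")
# A difference of roughly 540 km/s at this distance.
```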

In this new study, researchers from Europe and the US attempted to reconcile these two results. The scientists exploited the concept of “posterior predictive distribution” (PPD), a methodology often used to determine the reproducibility of experimental results. PPD relies on a dynamic view of probability – in other words, one that changes as new information is obtained.

In this case the scientists implemented PPD to simulate measurements of the Hubble constant using these two different methods, and to check their consistency with the standard cosmological model. One interesting finding is that there’s at least a 6% chance that the current discrepancy in the Hubble constant is purely due to random error.
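For readers unfamiliar with the technique, the toy sketch below shows the general shape of a posterior predictive check: draw plausible parameter values from a model posterior, simulate what each measurement pipeline would then report, and count how often the replicated gap is at least as large as the observed one. This is not the authors’ analysis, and every number in it is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy posterior predictive check (placeholder numbers, not the published analysis).
# 1) Draw plausible H0 values from a posterior under the standard model.
h0_draws = rng.normal(67.8, 0.9, 100_000)

# 2) For each draw, simulate what the two pipelines would report,
#    with assumed measurement errors chosen purely for illustration.
candle_like = rng.normal(h0_draws, 1.7)
cmb_like = rng.normal(h0_draws, 0.9)

# 3) How often do the replicated datasets show a gap as large as the real one?
observed_gap = 73.2 - 67.8
frac = np.mean(np.abs(candle_like - cmb_like) >= observed_gap)
print(f"fraction of replicated gaps >= observed: {frac:.3f}")
# The printed fraction depends entirely on the placeholder uncertainties above.
```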

They then simulated how new independent data could help resolve the debate. Gravitational waves from merging neutron stars seemed a promising avenue to explore, since their signal yields constraints on the distance to the binary stars. Measurements of gravitational waves should therefore provide an estimate of the Hubble constant without making any assumptions about the cosmology of the Universe.

The researchers found that 50 detections of gravitational-wave signals from merging neutron stars would be needed to properly arbitrate between the two different values for the Hubble constant. Including such a dataset within their PPD simulations would, they claim, yield the most accurate value of the Hubble constant yet measured – with an error of below 1.8%. Judging by current progress, observations of those 50 neutron-star mergers could well be achieved within the next decade.

Full results are published in Physical Review Letters.

Tissue repair scaffold allies with immunotherapy

UBM decreases tumour growth

Biologic scaffolds — processed tissues from which the cells have been removed — are finding their place in regenerative medicine. Since they preserve the original extracellular matrix of the tissue, biologic scaffolds promote cell attachment and proliferation, enhancing tissue regeneration. Consequently, there is considerable research underway to study their use in different clinical areas.

Biologic scaffolds are also employed in tumour treatments, where tumour resection requires healing of tissue voids. However, it was unclear whether their use had a negative or positive effect on tumour re-growth. To investigate this issue, a research team led by Jennifer Elisseeff at Johns Hopkins School of Medicine implanted a biologic scaffold together with cancer cell lines in mice and studied the response. They found that the immune environment created by the scaffold impaired tumour growth (Sci. Transl. Med. 10.1126/scitranslmed.aat7973).

Triggering a specific immune response

The researchers tested a porcine urinary bladder matrix (UBM), a biologic scaffold that is approved by the US Food and Drug Administration for wound healing applications. They mixed particles of UBM with melanoma, colon or breast cancer cell lines and subsequently injected them into mice. In all cases, tumour growth decreased and animal survival increased, compared with injection of the cells alone. Since these scaffolds did not have any cytotoxic effect on the cells, the researchers inferred that the biologic scaffold was triggering an immune response that impaired tumour growth.

The team’s next step was to find out more about this specific immune response. Employing genetically modified immunodeficient mice, the researchers identified CD4 T cells (also called T helper cells, responsible for adaptive immunity) and macrophages as playing the main role in tumour growth inhibition.

Furthermore, they also identified the immune cells’ specific phenotype, and found that it was different to that normally observed in immune cells associated with tumours. In addition, this immune response was completely opposite to that observed with synthetic immune adjuvant materials (such as aluminium hydroxide and silica particles), confirming the uniqueness of the immune response produced by the UBM.

Synergy with immunotherapy

Finally, the researchers combined the UBM with immunotherapy, in the form of inhibitors of the immune checkpoint proteins PD-1, PD-L1 and PD-L2. Such treatments activate the immune system against tumours and could amplify the growth inhibition produced by the UBM.


Indeed, combining UBM with PD-1 or PD-L1 inhibitors noticeably slowed down tumour growth and increased the survival of the mice. This finding points to the potential of using this biologic scaffold in combination with immunotherapy for tumour treatment.

The team concluded that the immune response generated by the UBM has a pro-healing character that inhibits tumour growth. In addition, such impairment in cancer proliferation can amplify the effect of immunotherapy. However, despite these remarkable results, the researchers point out that biologic scaffolds are complex materials that differ in composition and source, and therefore the findings of this project may not be applicable to all biologic scaffold types. Nevertheless, this study sets a precedent for the use of biologic scaffolds in the treatment of different cancers.

Kenya eyes locations for the country’s first observatory

Researchers in Kenya are scouting locations for what would be the country’s first astronomical observatory. Mount Nyiro and Mount Kulal, both of which are in north-western Kenya near Lake Turkana, are being eyed as potential sites. A decision is expected within the next two years, with the observatory possibly coming online within the next 5–10 years if the project is given the green light by the government.


Kenya is situated on the equator and can access more than 85% of the sky in both northern and southern celestial hemispheres. The country has a climate that makes it ideal for astronomical observations with little light pollution and clear skies for most of the year.

Paul Baki, an astronomer from the Technical University of Kenya in Nairobi who is part of the working group to establish the observatory, says that Kenyans training in local universities currently do not have access to a research-class telescope. “Many Kenyan astronomers returning from abroad went back and took up jobs elsewhere,” says Baki, adding that the observatory will hopefully stem this brain drain by supporting training and research in the country as well as technology development.

Economic boost

At a meeting in early February at the Technical University of Kenya, staff and students discussed what engineering skills would be necessary to build an optical telescope at the observatory and how to take the project forward.

The search for a potential site has so far been carried out using satellite data, but ground-based weather facilities will now be used for further analysis. This work is being carried out together with South Africa and the UK, which has given the project £140 000 from UK Research and Innovation’s Global Challenges Research Fund.

Martyn Wells, an optical engineer from the UK Astronomy Technology Centre, based at the Royal Observatory in Edinburgh, says that the sites that have been shortlisted are in economically neglected parts of the country, which could benefit from “significant astro-tourism”.

Hachimoji DNA doubles the genetic code

Researchers in the US have built an “alien” DNA system from eight building-block letters, expanding the genetic code from four letters and doubling its information density. The new system meets all of the requirements for Darwinian evolution and can also be transcribed to RNA. It will be important for future synthetic biology applications and expands the scope of molecular structures that might be capable of supporting life, both here on Earth and more widely in the universe.

One of the main characteristics of life is that it can store and pass on genetic information. In modern-day organisms, this is done by DNA using just four building blocks: guanine, cytosine, adenine and thymine (G, A, C and T). Pairs of DNA strands form a double helix with A bonding to T and C bonding to G.

Four more building blocks

The researchers, led by Steven Benner of Firebird Biomolecular Sciences LLC and the Foundation for Applied Molecular Evolution, both in Alachua, Florida, have now used organic chemistry to design and make four more such building blocks that fit the size and shape of the G:C and A:T pairs and can be incorporated alongside them. These building blocks are P and B, which are analogues of purines, and Z and S, which are analogues of pyrimidines; in duplexes they form P:Z and B:S pairs.

“We made several hundred molecules of this new synthetic genetic system and studied their ability to bind to their complementary genetic molecules,” explains Benner. “This led to rules that predict how well a sequence of synthetic building blocks, say GACTZPSB, bind to a complementary CTGAPZBS sequence.”
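As a rough illustration of these pairing rules (not code from the study itself), the snippet below builds the base-by-base complement of a mixed hachimoji sequence using the A:T, G:C, P:Z and B:S pairs described above.

```python
# Pairing rules from the article: natural A:T and G:C plus synthetic P:Z and B:S.
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G",
         "P": "Z", "Z": "P", "B": "S", "S": "B"}

def hachimoji_complement(sequence: str) -> str:
    """Return the base-by-base complementary hachimoji sequence."""
    return "".join(PAIRS[base] for base in sequence)

print(hachimoji_complement("GACTZPSB"))  # -> CTGAPZBS, matching Benner's example
```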

The researchers called their eight-letter synthetic genetic system “hachimoji” DNA (“hachi” means eight in Japanese and “moji” means letter).

Hachimoji DNA also supports life

Like natural DNA, hachimoji DNA supports life in that it pairs in a predictable way and can be copied to make hachimoji RNA. RNA is important for life since it is via this molecule that the information stored in DNA is passed on before being used to make proteins.

Benner and colleagues say that the new DNA also, importantly, meets the “Schrödinger requirement” for a Darwinian system of molecular evolution – an important hallmark for supporting life. “Erwin Schrödinger is best known for having created quantum mechanics, but later in his life, like many physicists, he became interested in evolution,” explains Benner. “He noted that to store information, a genetic material must have different building blocks, just like an alphabet must have different letters.”

From a physics perspective, however, these building blocks must be able to replace each other without geometrically disrupting the size or shape of the double helix to support evolution. “Our extra nucleotide ‘letters’ are designed in this way.”

Engineering enzymes to transcribe DNA into RNA

To transcribe hachimoji DNA into RNA, the researchers adapted a natural enzyme (T7 polymerase) so that it could accept unnatural genetic molecules. This is one of the main challenges when working with such synthetic DNA systems, says Benner. “Our colleague Andrew Ellington and his team at the University of Texas at Austin re-designed the T7 polymerase, which transcribes natural DNA to natural RNA, by changing amino acids in the protein and finding ones that accept hachimoji DNA to make hachimoji RNA.”

The work tells us much about what chemistry is required to support genetics, he tells Physics World. It suggests that DNA-based life forms other than those that we know on Earth may exist in the Universe. This might be important for when it comes to searching for exobiological “signatures”, he says.

It is wrong to say that hachimoji DNA is alien life, though, he insists. For that, the system must also be self-sustaining, and hachimoji DNA needs a steady supply of the lab-created building blocks and proteins. “As none of these are available outside, hachimoji DNA can go nowhere if it escapes the laboratory.”

Many potential applications

The potential applications are many, according to the team. “Hachimoji DNA could be used to develop clean diagnostics for human diseases, in retrievable molecular information storage, barcoding, self-assembling nanostructures, and to make proteins with extra amino acids as well as novel drugs.” Parts of this DNA are already being commercially produced by Firebird.

The researchers, reporting their work in Science 10.1126/science.aat0971, say they are now busy working on engineering bacteria that accept synthetic genetic systems.

Correlations between protons and neutrons may explain 35-year-old nuclear mystery

According to the classical model of nuclear structure, the internal structure of nucleons should not change if they are bound into atomic nuclei. But it was discovered 35 years ago that quarks inside free protons and neutrons behave differently to those bound into nuclei – and the cause has remained a mystery ever since. Now researchers have taken a significant step towards solving the puzzle by using two different types of scattering experiments to relate the strength of the effect to the number of high-momentum nucleon pairs in an atomic nucleus.

According to the standard model of particle physics, the energies binding quarks inside protons and neutrons are almost 100 times larger than the energies binding those nucleons inside nuclei. In the 1980s, therefore, researchers at the European Muon Collaboration (EMC) at CERN assumed they could safely speed up their nucleon-probing experiments by using iron nuclei containing many nucleons.

“They thought there was no way the relatively small binding energy of an iron nucleus would have any effect on the quarks,” explains nuclear physicist Axel Schmidt of the Massachusetts Institute of Technology. To their surprise, however, they found that the collision cross-section per nucleon was lower than expected, implying that the momentum distribution of the quarks had changed.

This has subsequently been confirmed in experiments on other nuclei: “Every single nucleus that’s subsequently been checked has some degree of this effect,” says Schmidt, “and the bigger the nucleus, the bigger the effect.”

Alternative theory challenges the mainstream

The cause of the so-called EMC effect remains unexplained, however. “Many theories have been rejected by experiment,” says Schmidt. “Now there are really only two surviving classes of theory. I would say the mainstream view is that something about the environment of the nucleus changes the quark distribution of all the constituent protons and neutrons.”

The other theory relates the effect to transitory correlations between nucleons, which lead them to have much higher momentum than other nucleons. “Perhaps 80% of the protons and neutrons are unchanged and 20% are changed dramatically and, when we measure the EMC effect, we’re measuring the aggregate,” he says.

Schmidt and colleagues from the CLAS collaboration, which is based at the Jefferson Laboratory in the US, decided to test this hypothesis. They reasoned that “if the effect is due to pairs, then every proton–neutron pair should be like every other – whether that pair is in a carbon or a lead nucleus,” says Schmidt.

The team found a neat way to directly measure both the proportion of high-momentum pairs in nuclei and their collision cross-sections at the same time. They irradiated targets of carbon-12, aluminium-27, iron-56 and lead-208 with high-energy electrons. Some of these electrons underwent quasi-elastic collisions with the nuclei, knocking out a proton or neutron but leaving the constituent quarks undisturbed. From these, the researchers could reconstruct the momentum of the nucleon at the moment of impact.

Other electrons underwent deep inelastic scattering, obliterating a nucleon and allowing the researchers to infer the average momentum distribution of the quarks inside the nucleons. Sure enough, the number of pairs in a nucleus was linearly proportional to the strength of the EMC effect, suggesting all pairs had the same effect.

Next, the researchers made a prediction. Protons are far more likely to form high-momentum pairs with neutrons, and vice versa. Therefore, in heavy, neutron-rich nuclei, protons are more likely to be paired at any instant, making their average momentum higher – a prediction that the CLAS collaboration confirmed last year. The researchers therefore calculated average EMC effects per proton and per neutron.

As predicted, the EMC effect per neutron stayed roughly constant at atomic masses beyond 12, but the EMC effect per proton continued to increase for all measured nuclei. “We think that this lends credence to the alternative hypothesis and suggests that we need to do follow-up experiments specifically to look at the quarks inside correlated nucleons, rather than just at the nucleus in aggregate,” says Schmidt. “There’s one we’ve just started and another scheduled for a few years from now.”

“As far as deep inelastic scattering results go, the mean-field hypothesis works probably as well as the short-range correlations hypothesis,” says Gerald Miller of the University of Washington at Seattle. “The fact that the researchers can independently measure the number of neutron–proton pairs interacting and establish a correlation is a big step. That said, it’s not quite nailed down yet because the alternative hypothesis has not yet been used to calculate this second set of kinematics. That presents a challenge that proponents of the alternative hypothesis need to meet.”

The research is published in Nature.

Parenthood drives women out of science, US survey reveals

Almost half of women and a quarter of men leave full-time science-based careers after becoming a parent. That is according to a study that followed the careers of more than 4000 US-based science, technology, engineering and mathematics (STEM) professionals over an eight-year period. The research is the first to quantify the challenge of balancing parenting with STEM work and how it can contribute to the gender gap in science.

Women are underrepresented across STEM and face a variety of cultural and structural disadvantages to overcome. To investigate the specific effect of parenthood on STEM employment, sociologist Erin Cech from the University of Michigan and colleagues used data from a US-wide survey carried out by the National Science Foundation that followed full-time, initially childless STEM professionals over an eight-year period.

They were first surveyed about their circumstances in 2003 and then again in 2006, 2008 and 2010. The dataset included more than 3300 STEM professionals who remained childless over that period and around 800 who became parents within the first three years of the study period.


The study found that new parents were significantly less likely to remain in full-time STEM jobs than those who remained childless. Within three years of the birth or adoption of their first child, 42% of mothers and 15% of fathers had left full-time STEM employment. By the final survey in 2010, 43% of new mothers and 23% of new fathers had moved on, with 12% of new mothers and 18% of new fathers switching to full-time jobs outside of STEM. Both were significantly more likely to have changed career than childless respondents.

When those who had shifted to full-time, non-STEM employment were asked why they had quit science, 48% of fathers and 71% of mothers said the move was “family-related”, compared to 4% of those without children. After their first child, 6% of women also switched to part-time, non-STEM careers, compared with 0.5% of men. The study also found that some new parents stayed in science but cut their hours, with 11% of mothers and 2% of fathers switching to part-time STEM employment, while 15% of mothers and 3% of fathers were not working by the time of the final survey in 2010.

Removing barriers

The researchers say that STEM fields must do more to address the issues parents have balancing their work with childcare. “This is a problem for science because these new parents who leave are highly trained and have experience in the workforce,” Cech told Physics World. “Their departure means a loss of knowledge and expertise that is disadvantageous for innovation and scientific inquiry.”

This issue is more pronounced for women, adds Cech, because “mothers still shoulder – and are culturally expected to shoulder – the lion’s share of childcare responsibilities”.  Cech adds that culture, institutional policies and organizational practices all need to be looked at to tackle the issue. “Policies like paid caregiver leave that are more inclusive – that include both parents – would help parents share the burden of early caregiving responsibilities,” she says. “Organizations that employ STEM professionals need to carefully consider how their policies and practices around workers with caregiving responsibilities may be squeezing out those professionals. More flexible work practices – and workplace cultures that support rather than stigmatize the use of those policies – would also help.”

Miriam Deutsch, chair of the American Physical Society’s committee on the status of women in physics, told Physics World that beyond policies to remove unnecessary, tangible obstacles, there needs to be a deeper cultural shift to improve the lives of women and mothers in STEM. “The [research] mentions various cultural and social pressures, such as the view that parents are less devoted to their careers than non-parents,” she explains. “How to impart this cultural shift is the million-dollar question. But it is clear that without a significant change in culture and climate, policies alone will not eliminate all the major barriers for women.”

Democracy under threat: a mathematician’s perspective

In this episode of Physics World Weekly we’re looking at how applied mathematics can help to understand how instabilities arise in democracies. You will hear an extended interview with Karoline Wiesner from the University of Bristol, who uses complex systems theory to analyse human systems.

Wiesner was the lead author on this recent paper in the European Journal of Physics, which applied a complex systems approach to evaluating the idea that democracy is under threat. Wiesner is interviewed by Physics World’s general physics editor Hamish Johnston, who reported on the original paper at the end of last year.

If you enjoy what you hear, you can subscribe to Physics World Weekly via the Apple podcast app or your chosen podcast host.

 
