Join us for an insightful webinar based on Women and Physics (Second Edition), where we will explore the historical journey, challenges, and achievements of women in the field of physics, with a focus on English-speaking countries. The session will dive into various topics such as the historical role of women in physics, the current statistics on female representation in education and careers, navigating family life and career, and the critical role men play in fostering a supportive environment. The webinar aims to provide a roadmap for women looking to thrive in physics.
Laura McCullough is a professor of physics at the University of Wisconsin-Stout. Her PhD from the University of Minnesota was in science education with a focus on physics education research. She is the recipient of multiple awards, including her university system’s highest teaching award, her university’s outstanding research award, and her professional society’s service award. She is a fellow of the American Association of Physics Teachers. Her primary research area is gender and science and surrounding issues. She has also done significant work on women in leadership, and on students with disabilities.
About this ebook
Women and Physics is the second edition of a volume that brings together research on a wide variety of topics relating to gender and physics, cataloguing the extant literature to provide a readable and concise grounding for the reader. While there are many biographies and collections of essays in the area of women and physics, no other book is as research-focused. Starting with the current numbers of women in physics in English-speaking countries, it explores the different issues relating to gender and physics at different educational levels and career stages. From the effects of family and schooling to the barriers faced in the workplace and at home, this volume is an exhaustive overview of the many studies focused specifically on women and physics. This edition contains updated references and new chapters covering the underlying structures of the research and more detailed breakdowns of career issues.
In August 2024 the influential Australian popular-science magazine Cosmos found itself not just reporting the news – it had become the news. Owned by CSIRO Publishing – part of Australia’s national science agency – Cosmos had posted a series of “explainer” articles on its website that had been written by generative artificial intelligence (AI) as part of an experiment funded by Australia’s Walkley Foundation. Covering topics such as black holes and carbon sinks, the text had been fact-checked against the magazine’s archive of more than 15,000 past articles to guard against misinformation, yet at least one of the new articles still contained inaccuracies.
Critics, such as the science writer Jackson Ryan, were quick to condemn the magazine’s experiment as undermining and devaluing high-quality science journalism. As Ryan wrote on his Substack blog, AI not only makes things up and trains itself on copyrighted material, but “for the most part, provides corpse-cold, boring-ass prose”. Contributors and former staff also complained to Australia’s ABC News that they’d been unaware of the experiment, which took place just a few months after the magazine had made five of its eight staff redundant.
The Cosmos incident is a reminder that we’re in the early days of using generative AI in science journalism. It’s all too easy for AI to get things wrong and contribute to the deluge of online misinformation, potentially damaging modern society in which science and technology shape so many aspects of our lives. Accurate, high-quality science communication is vital, especially if we are to pique the public’s interest in physics and encourage more people into the subject.
Kanta Dihal, a lecturer at Imperial College London who researches the public’s understanding of AI, warns that the impacts of recent advances in generative AI on science communication are “in many ways more concerning than exciting”. Sure, AI can level the playing field by, for example, enabling students to learn video-editing skills without expensive tools and helping people with disabilities to obtain course material in accessible formats. “[But there is also] the immediate large-scale misuse and misinformation,” Dihal says.
We do need to take these concerns seriously, but AI could benefit science communication in ways you might not realize. Simply put, AI is here to stay – in fact, the science behind it led to the physicist John Hopfield and computer scientist Geoffrey Hinton winning the 2024 Nobel Prize for Physics. So how can we marshal AI to best effect not just to do science but to tell the world about science?
Dangerous game
Generative AI is a step up from “machine learning”, where a computer predicts how a system will behave based on data it’s analysed. Machine learning is used in high-energy physics, for example, to model particle interactions and detector performance. It does this by learning to recognize patterns in existing data, before making predictions and then validating that those predictions match the original data. Machine learning saves researchers from having to manually sift through terabytes of data from experiments such as those at CERN’s Large Hadron Collider.
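As a rough illustration of that fit, predict and validate loop, the sketch below trains a toy classifier on synthetic “event-like” data and checks its predictions on a held-out sample. The data, features and choice of model are all assumptions made purely for the example; this is not the analysis code actually used at the LHC.

```python
# A toy version of the fit/predict/validate loop: learn patterns from
# synthetic "event-like" data, then check the predictions on data the
# model has never seen. Purely illustrative -- not real LHC analysis code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Fake events: two measured quantities per event, labelled 1 for
# "signal-like" and 0 for "background-like" (labels are made up).
n = 5000
features = rng.normal(size=(n, 2))
noise = rng.normal(scale=0.5, size=n)
labels = (features[:, 0] + 0.5 * features[:, 1] + noise > 0).astype(int)

# Learn the pattern from 70% of the data...
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=0
)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# ...then validate the predictions against the held-out 30%.
print(f"Validation accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```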
Generative AI, on the other hand, doesn’t just recognize and predict patterns – it can create new ones too. When it comes to the written word, a generative AI could, for example, invent a story from a few lines of input. It is exactly this language-generating capability that caused such a furore at Cosmos and led some journalists to worry that AI might one day make their jobs obsolete. But how does a generative AI produce replies that feel like a real conversation?
Child’s play Claude Shannon was an electrical engineer and mathematician who is considered the “father of information theory”. He is pictured here in 1952 with an early example of machine learning – a wheeled toy mouse called Theseus that was designed to navigate its way through a maze. (Courtesy: Yale Joel/The LIFE Picture Collection/Shutterstock)
Perhaps the best-known generative AI is ChatGPT (where GPT stands for generative pre-trained transformer), which is an example of a large language model (LLM). Language modelling dates back to the 1950s, when the US mathematician Claude Shannon applied information theory – the branch of maths that deals with quantifying, storing and transmitting information – to human language. Shannon measured how well language models could predict the next word in a sentence by assigning probabilities to each word based on patterns in the data the model has been trained on.
Such methods of statistical language modelling are now fundamental to a range of natural language processing tasks, from building spell-checking software to translating between languages and even recognizing speech. Recent advances in these models have significantly extended the capabilities of generative AI tools, with the “chatbot” functionality of ChatGPT making it especially easy to use.
ChatGPT racked up a million users within five days of its launch in November 2022, and since then other companies have unveiled similar tools, notably Google’s Gemini and Perplexity. ChatGPT – which had more than 600 million users per month as of September 2024 – is trained on a range of sources, including books, Wikipedia articles and chat logs, although the precise mix has never been fully disclosed. The AI spots patterns in the training texts and builds sentences by predicting the most likely word that comes next.
ChatGPT operates a bit like a slot machine, with probabilities assigned to each possible next word in the sentence. In fact, the term AI is a little misleading, being more “statistically informed guessing” than real intelligence, which explains why ChatGPT has a tendency to make basic errors or “hallucinate”. Cade Metz, a technology reporter from the New York Times, reckons that chatbots invent information as much as 27% of the time.
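To see this “slot machine” in miniature, the sketch below builds a deliberately tiny bigram model: it counts which word follows which in a few made-up sentences, turns the counts into probabilities and samples the next word. Real LLMs work on vastly more data and context, but the underlying guessing game is the same.

```python
# Toy bigram "next-word" model: count which words follow which,
# turn the counts into probabilities, then sample the next word.
# A deliberately tiny illustration of statistically informed guessing;
# real LLMs operate on vastly more data and longer context.
import random
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count every (word, next word) pair in the training text.
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def next_word(word):
    """Pick the next word with probability proportional to its count."""
    counts = follows[word]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a short "sentence" starting from "the".
word, sentence = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))
```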
One notable hallucination occurred in February 2023 when Bard – Google’s forerunner to Gemini – declared in its first public demonstration that the James Webb Space Telescope (JWST) had taken “the very first picture of a planet outside our solar system”. As Grant Tremblay from the US Center for Astrophysics pointed out, this feat had been accomplished in 2004, some 17 years before the JWST was launched, by the European Southern Observatory’s Very Large Telescope in Chile.
Badly wrong This AI-generated image of a rat originally appeared in the journal Frontiers in Cell and Developmental Biology (11 1339390). The use of AI in the image, which bears little resemblance to the anatomy of a rat, was not originally disclosed and the article was subsequently retracted. (CC BY Xinyu Guo, Liang Dong and Dingjun Hao)
Another embarrassing incident involved a picture of a rat with comically incorrect anatomy, created by the AI image generator Midjourney, which appeared in a journal paper that was subsequently retracted. Some hallucinations are more serious. Amateur mushroom pickers, for example, have been warned to steer clear of online foraging guides, likely written by AI, that contain advice running counter to safe foraging practice. Many edible wild mushrooms look deceptively similar to their toxic counterparts, making careful identification critical.
By using AI to write online content, we’re in danger of triggering a vicious circle of increasingly misleading statements, polluting the Internet with unverified output. What’s more, AI can perpetuate existing biases in society. Google, for example, was forced to publish an embarrassing apology, saying it would “pause” the ability to generate images with Gemini after the service was used to create images of racially diverse Nazi soldiers.
More seriously, women and some minority groups are under-represented in healthcare data, biasing the training set and potentially skewing the recommendations of predictive AI algorithms. One study led by Laleh Seyyed-Kalantari from the University of Toronto (Nature Medicine 27 2176) found that computer-aided diagnosis of chest X-rays is less accurate for Black patients than for white patients.
Generative AI could even increase inequalities if it becomes too commercial. “Right now there’s a lot of free generative AI available, but I can also see that getting more unequal in the very near future,” Dihal warns. People who can afford to pay for ChatGPT subscriptions, for example, have access to versions of the AI based on more up-to-date training data. They therefore get better responses than users restricted to the “free” version.
Clear communication
But generative AI tools can do much more than churn out uninspired articles and create problems. One beauty of ChatGPT is that users interact with it conversationally, just like you’d talk to a human communicator at a science museum or science festival. You could start by typing something simple (such as “What is quantum entanglement?”) before delving into the details (e.g. “What kind of physical systems are used to create it?”). You’ll get answers that meet your needs better than any standard textbook.
Opening up AI could help students, particularly those who face barriers to education, to explore scientific topics that interest them. (Courtesy: iStock/Imgorthand)
Generative AI could also boost access to physics by providing an interactive way to engage with groups – such as girls, people of colour or students from low-income backgrounds – who might face barriers to accessing educational resources in more traditional formats. That’s the idea behind online tuition platforms such as Khan Academy, which has integrated a customized version of ChatGPT into its tuition services.
Instead of presenting fully formed answers to questions, its generative AI is programmed to prompt users to work out the solution themselves. If a student types, say, “I want to understand gravity” into Khan’s generative AI-powered tutoring program, the AI will first ask what the student already knows about the subject. The “conversation” between the student and the chatbot will then evolve in the light of the student’s response.
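Something like that Socratic behaviour can be encoded in the instructions given to a chat model. The following sketch is a hypothetical illustration of such a set-up; the message structure and wording are assumptions for the example, not Khan Academy’s actual configuration.

```python
# Hypothetical sketch of how a Socratic tutoring behaviour like the one
# described above might be configured for a chat-based model. The message
# structure and wording are illustrative assumptions only.
messages = [
    {
        "role": "system",
        "content": (
            "You are a physics tutor. Never give the full answer straight away. "
            "First ask the student what they already know about the topic, then "
            "guide them with one question at a time towards working it out themselves."
        ),
    },
    {"role": "user", "content": "I want to understand gravity"},
]

# The assembled messages would be sent to a chat model; here we just inspect them.
for m in messages:
    print(f"{m['role']}: {m['content']}")
```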
AI can also remove barriers that some people face in communicating science, allowing a wider range of voices to be heard and thereby boosting the public’s trust in science. As someone with cerebral palsy, AI has transformed how I work by enabling me to turn my speech into text in an instant (see box below).
It’s also helped Duncan Yellowlees, a dyslexic research developer who trains researchers to communicate. “I find writing long text really annoying, so I speak it into OtterAI, which converts the speech into text,” he says. The text is sent to ChatGPT, which converts it into a blog. “So it’s my thoughts, but I haven’t had to write them down.”
Then there’s Matthew Tosh, a physicist-turned-science presenter specializing in pyrotechnics. He has a progressive disease, which means he increasingly struggles to write in a concise way. ChatGPT, however, lets him create draft social-media posts, which he then rewrites in his own style. As a result, he can maintain that all-important social-media presence while managing his disability at the same time.
Despite the occasional mistake made by generative AI bots, misinformation is nothing new. “That’s part of human behaviour, unfortunately,” Tosh admits. In fact, he thinks errors can – perversely – be a positive. Students who wrongly think a kilo of cannonballs will fall faster than a kilo of feathers create the perfect chance for teachers to discuss Newtonian mechanics. “In some respects,” says Tosh, “a little bit of misinformation can start the conversation.”
AI as a voice-to-text tool
Reaping the benefits Claire Malone uses AI-powered speech-to-text software, which helps her work as a science communicator. (Courtesy: Claire Malone)
As a science journalist – and previously as a researcher hunting for new particles in data from the ATLAS experiment at CERN – I’ve longed to use speech-to-text programs to complete assignments. That’s because I have a disability – cerebral palsy – that makes typing impractical. For a long time this meant I had to dictate my work to a team of academic assistants for many hours a week. But in 2023 I started using Voiceitt, an AI-powered app optimized for speech recognition for people with non-standard speech like mine.
You train the app by first reading out a couple of hundred short training phrases. It then uses AI models built on thousands of hours of speech from other non-standard speakers in its database to optimize its recognition. As Voiceitt is used, it continues refining the AI model, improving speech recognition over time. The app also has a generative AI model to correct any grammatical errors created during transcription. Each week, I find myself correcting the app’s transcriptions less and less, which is a bonus when facing journalistic deadlines, such as the one for this article.
The perfect AI assistant?
One of the first news organizations to experiment with AI tools was Associated Press (AP), which in 2014 began automating routine financial stories about corporate earnings. AP now also uses AI to create transcripts of videos, write summaries of sports events, and spot trends in large stock-market data sets. Other news outlets use AI tools to speed up “back-office” tasks such as transcribing interviews, analysing information or converting data files. Tools such as Midjourney can even help journalists to brief professional illustrators to create images.
However, there is a fine line between using AI to speed up your workflow and letting it make content without human input. Many news outlets and writers’ associations have issued statements guaranteeing not to use generative AI as a replacement for human writers and editors. Physics World, for example, has pledged not to publish fresh content generated purely by AI, though the magazine does use AI to assist with transcribing and summarizing interviews.
So how can generative AI be incorporated into the effective and trustworthy communication of science? First, it’s vital to ask the right question – in fact, composing a prompt can take several attempts to get the desired output. When summarizing a document, for example, a good prompt should include the maximum word length, an indication of whether the summary should be in paragraphs or bullet points, and information about the target audience and required style or tone.
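As a concrete, and entirely hypothetical, illustration, a summarization prompt that bakes in those elements of length, format, audience and tone might be assembled like this:

```python
# Hypothetical example of a summarization prompt that specifies length,
# format, audience and tone, as suggested above. The article text and the
# parameter values are placeholders, not a recommended standard.
article_text = "(paste the document to be summarized here)"

prompt = (
    "Summarize the article below for a general audience with no physics background.\n"
    "- Maximum length: 150 words\n"
    "- Format: three bullet points\n"
    "- Tone: neutral and factual; avoid jargon and hype\n\n"
    f"Article:\n{article_text}"
)
print(prompt)
```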
Second, information obtained from AI needs to be fact-checked. Generative AI can easily hallucinate, which makes a chatbot rather like an unreliable (but occasionally brilliant) colleague who gets the wrong end of the stick. “Don’t assume that whatever the tool is, that it is correct,” says Phil Robinson, editor of Chemistry World. “Use it like you’d use a peer or colleague who says ‘Have you tried this?’ or ‘Have you thought of that?’”
Finally, science communicators must be transparent in explaining how they used AI. Generative AI is here to stay – and science communicators and journalists are still working out how best to use it to communicate science. But if we are to maintain the quality of science journalism – so vital for the public’s trust in science – we must continuously evaluate and manage how AI is incorporated into the scientific information ecosystem.
Generative AI can help you say what you want to say. But as Dihal concludes: “It’s no substitute for having something to say.”
The most important and pressing issue of our times is how to make the transition to clean energy while meeting rising global demand. Cheap, abundant and reliable energy underpins the quality of life for all – and one potentially exciting way to achieve this is space-based solar power (SBSP). It would involve capturing sunlight in space and beaming it as microwaves down to Earth, where it would be converted into electricity to power the grid.
For proponents of SBSP such as myself, it’s a hugely promising technology. Others, though, are more sceptical. Earlier this year, for example, NASA published a report from its Office of Technology, Policy and Strategy that questioned the cost and practicality of SBSP. Henri Barde, a retired engineer who used to work for the European Space Agency (ESA) in Noordwijk, the Netherlands, has also examined the technical challenges in a report for the IEEE.
Some of these sceptical positions on SBSP were addressed in a recent Physics World article by James McKenzie. Conventional solar power is cheap, he argued, so why bother putting large solar power satellites in space? After all, the biggest barriers to building more solar plants here on Earth aren’t technical, but mostly come in the form of belligerent planning officials and local residents who don’t want their views ruined.
However, in my view we need to take a whole-energy-system perspective to see why innovation is essential for the energy transition. Wind, solar and batteries are “low-density” renewables, requiring many tonnes of minerals to be mined and refined for each megawatt-hour of energy. How can this be sustainable and give us energy security, especially when so much of our supply of these minerals depends on production in China?
Low-density renewables also require a Herculean expansion in electricity grid transmission pylons and cables to connect them to users. Other drawbacks of wind and solar are that they depend on the weather and require suitable storage – which currently does not exist at the capacity or cost needed. These forms of energy also need duplicated back-up, which is expensive, and other sources of baseload power for times when it’s cloudy or there’s no wind.
Look to the skies
With no night or weather in space, however, a solar panel in space generates 13 times as much energy as the same panel on Earth. SBSP, if built, would generate power continuously, transmitted as microwaves through the atmosphere with almost no loss. It could therefore deliver baseload power 24 hours a day, irrespective of local weather conditions on Earth.
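A back-of-envelope estimate shows where a factor of that order comes from. The inputs below are illustrative assumptions, not figures from any particular study: roughly 1.36 kW per square metre of continuous sunlight in orbit, versus an average of order 0.1 kW per square metre reaching a UK panel once night, weather, atmosphere and sun angle are averaged in.

```python
# Illustrative back-of-envelope for the "roughly 13x" claim above.
# Both inputs are assumptions made for this sketch, not measured values
# from any particular study; panel efficiency cancels because the same
# panel is assumed in both locations.
solar_constant = 1360.0   # W/m^2, sunlight available continuously in orbit
ground_average = 105.0    # W/m^2, rough year-round average reaching a UK panel
                          # after night, weather, atmosphere and sun angle

ratio = solar_constant / ground_average
print(f"Orbit-to-ground energy ratio: ~{ratio:.0f}x")   # ~13x
```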
Another advantage of SBSP is that it could easily produce more or less power as needed, effectively smoothing out the unpredictable and varying output from wind and solar. We currently do this using gas-fired “peaker” plants, which could therefore be put out to pasture. SBSP is also scalable, allowing the energy it produces to be easily exported to other nations without expensive cables, giving it a truly global impact.
A recent whole-energy-system study by researchers at Imperial College London concluded that introducing just 8 GW of SBSP into the UK’s energy mix would deliver system savings of over £4bn every year. In my view, which is shared by others too, the utility of SBSP is likely to be even greater when considering whole continents or global alliances. It can give us affordable and reliable clean energy.
My firm, Space Solar, has designed a solar-power satellite called CASSIOPeiA, which is more than twice as powerful – based on the key metric of power per unit mass – as ESA’s design. So far, we have built and successfully demonstrated our power beaming technology, and following £5m of engineering design work, we have arguably the most technically mature design in the world.
If all goes to plan, we’ll have our first commercial product by 2029. Offering 30 MW of power, it could be launched by a single Starship rocket, and scale to gigawatt systems from there. Sure, there are engineering challenges, but these are mostly based on ensuring that the economics remain competitive. Space Solar is also lucky in having world-class experts working in spacecraft engineering, advanced photovoltaics, power beaming and in-space robotics.
Brighter and better
But why then was NASA’s study so sceptical of SBSP? I think it was because the report made absurdly conservative assumptions about the economics. NASA assumed an operating life of only 10 years, so to run for 30 years the whole solar power satellite would have to be built and launched three times. Yet satellites today generally last for more than 25 years, with most baselined for a minimum 15-year life.
The NASA report also assumed that a satellite launched by Starship would remain at around $1500/kg. However, other independent analyses, such as “Space: the dawn of a new age” produced in 2022 by Citi Group, have forecast that it will be an order of magnitude less – just $100/kg – by 2040. I could go on, as there are plenty more examples of risk-averse thinking in the NASA report.
Buried in the report, however, the study also looked at more reasonable scenarios than the “baseline” and concluded that “these conditions would make SBSP systems highly competitive with any assessed terrestrial renewable electricity production technology’s 2050 cost projections”. Curiously, these findings did not make it into the executive summary.
The NASA study has been widely criticized, including by former NASA physicist John Mankins, who invented another approach to space solar dubbed SPS Alpha. Speaking on a recent episode of the DownLink podcast, he suspected NASA’s gloomy stance may in part be because it focuses on space tech and space exploration rather than energy for Earth. NASA bosses might fear that if they were directed by Congress to pursue SBSP, money for other priorities might be at risk.
I also question Barde’s sceptical opinion of the technology of SBSP, which he expressed in an article for IEEE Spectrum. Barde appeared not to understand many of the design features that make SBSP technically feasible. He wrote, for example, about “gigawatts of power coursing through microwave systems” of the solar panels on the satellite, which sounds ominous and challenging to achieve.
In reality, the gigawatts of sunlight are reflected onto a large area of photovoltaics containing a billion or so solar cells. Each cell, which includes an antenna and electronic components to convert the sunlight into microwaves, is arranged in a sandwich module just a few millimetres thick handling just 2 W of power. So although the satellite delivers gigawatts overall, the figure is much lower at the component level. What’s more, each cell can be made using tried and tested radio-frequency components.
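A quick sanity check, using only the figures quoted above, shows how the numbers add up.

```python
# Quick arithmetic check of the figures quoted above: a billion or so
# cells each handling ~2 W adds up to gigawatt-scale output, even though
# no single component sees high power.
cells = 1e9            # approximate number of solar cells on the satellite
power_per_cell = 2.0   # watts handled by each sandwich module
total = cells * power_per_cell
print(f"Total power ~ {total / 1e9:.0f} GW")   # ~2 GW overall
```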
As for Barde’s fears about thermal management – in other words, how we can stop the satellite from overheating – that has already been analysed in detail. The plan is to use passive radiative cooling without active systems. Barde also warns of temperature swings as the satellites pass through eclipse during the spring and autumn equinox. But this problem is common to all satellites and has, in any case, been analysed as part of our engineering work. In essence, Barde’s claim of “insurmountable technical difficulties” is simply his opinion.
Until the first solar power satellite is commissioned, there will always be sceptics of what we are doing. However, that was also true of reusable rockets and cubesats, both of which are now mainstream technology. SBSP is a “no-regrets” investment that will see huge environmental and economic benefits, with spin-off technologies in wireless power beaming, in-space assembly and photovoltaics.
It is the ultimate blend of space technology and societal benefit, which will inspire the next generation of students into physics and engineering. Currently, the UK has a leadership position in SBSP, and if we have the vision and ambition, there is nothing to lose and everything to gain from backing this. We just need to get on with the job.
Hypothetical particles called axions could form dense clouds around neutron stars – and if they do, they will give off signals that radio telescopes can detect, say researchers in the Netherlands, the UK and the US. Since axions are a possible candidate for the mysterious substance known as dark matter, this finding could bring us closer to understanding it.
Around 85% of the universe’s mass consists of matter that appears “dark” to us. We can observe its gravitational effect on structures such as galaxies, but we cannot observe it directly. This is because dark matter hardly interacts with anything as far as we know, making it very difficult to detect. So far, searches for dark matter on Earth and in space have found no evidence for any of the various dark matter candidates.
The new research raises hopes that axions could be different. These neutral, bosonic particles are extremely light and hardly interact with ordinary matter. They get their name from a brand of soap, having been first proposed in the 1970s as a way of “cleaning up” a problem in quantum chromodynamics (QCD). More recently, astronomers have suggested they could clean up cosmology, too, by playing a role in the formation of galaxies in the early universe. They would also be a clean start for particle physics, providing evidence for new physics beyond the Standard Model.
Signature signals
But how can we detect axions if they are almost invisible to us? In the latest work, researchers at the University of Amsterdam, Princeton University and the University of Oxford showed that axions, if they exist, will be produced in large quantities at the polar regions of neutron stars. (Axions may also be components of dark matter “halos” believed to be present in the universe, but this study investigated axions produced by neutron stars themselves.) While many axions produced in this way will escape, some will be captured by the stars’ strong gravitational field. Over millions of years, axions will therefore accumulate around neutron stars, forming a cloud dense enough to give off detectable signals.
To reach these conclusions, the researchers examined various axion cloud interaction mechanisms, including self-interaction, absorption by neutron star nuclei and electromagnetic interactions. They concluded that for most axion masses, it is the last mechanism – specifically, a process called resonant axion-photon mixing – that dominates. Notably, this mechanism should produce a stream of low-energy photons in the radiofrequency range.
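A rough calculation shows why the emission lands in the radio band. In resonant conversion each photon carries roughly the axion’s rest-mass energy, so the frequency follows from f ≈ mc²/h. The axion masses used below are illustrative assumptions, chosen simply to span the micro-electronvolt range often discussed for QCD axions.

```python
# Rough conversion from an assumed axion mass to the photon frequency
# produced by resonant axion-photon mixing (photon energy ~ axion rest-mass
# energy, so f = m*c^2 / h). The masses chosen are illustrative assumptions.
h = 4.135667696e-15           # Planck constant in eV*s
for mass_eV in (1e-6, 1e-5):  # assumed axion masses: 1 and 10 micro-eV
    frequency_Hz = mass_eV / h
    print(f"m_a = {mass_eV:.0e} eV  ->  f ~ {frequency_Hz / 1e6:.0f} MHz")
```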
The team also found that these radio emissions would be connected to four distinct phases of axion cloud evolution. These are a growth phase after the neutron star forms; a saturation phase during normal life; a magnetorotational decay phase towards the later stages of the star’s existence; and finally a large burst of radio waves when the neutron star dies.
Turn on the radio
The researchers say that several large radio telescopes around the globe could play a role in detecting these radiofrequency signatures. Examples include the Low-Frequency Array (LOFAR) in the Netherlands; the Murchison Widefield Array in Australia; and the Green Bank Telescope in the US. To optimize the chances of picking up an axion signal, the collaboration recommends specific observation times, bandwidths and signal-to-noise ratios that these radio telescopes should adhere to. By following these guidelines, they say, the LOFAR setup alone could detect up to four events per year.
Dion Noordhuis, a PhD student at Amsterdam and first author of a Physical Review X paper on the research, acknowledges that there could be other observational signals beyond those explored in the paper. These will require further investigation, and he suggests that a full understanding will require complementary efforts from multiple branches of physics, including particle (astro)physics, plasma physics and observational radioastronomy. “This work thereby opens up a new, cross-disciplinary field with lots of opportunities for future research,” he tells Physics World.
Sankarshana Srinivasan, an astrophysicist from the Ludwig Maximilian University in Munich, Germany, who was not involved in the research, agrees that the QCD axion is a well-motivated candidate for dark matter. The Amsterdam-Princeton-Oxford team’s biggest achievement, he says, is to realize how axion clouds could enhance the signal, while the team’s “state-of-the-art” modelling makes the work stand out. However, he also urges caution because all theories of axion-photon mixing around neutron stars make assumptions about the stars’ magnetospheres, which are still poorly understood.
According to the well-known thought experiment, the infinite monkeys theorem, a monkey randomly pressing keys on a typewriter for an infinite amount of time would eventually type out the complete works of William Shakespeare purely by chance.
Yet a new analysis by two mathematicians in Australia finds that even a large troop of monkeys would not have the time to do so within the lifetime of the universe.
To find out, the duo created a model that includes 30 keys – all the letters in the English language plus punctuation marks. They assumed a constant chimpanzee population of 200,000 could be enlisted to the task, each typing at one key per second until the end of the universe in about 10¹⁰⁰ years.
“We decided to look at the probability of a given string of letters being typed by a finite number of monkeys within a finite time period consistent with estimates for the lifespan of our universe,” notes mathematician Stephen Woodcock from the University of Technology Sydney.
The mathematicians found that there is only a 5% chance a single monkey would type “bananas” within its own lifetime of just over 30 years. Yet even with all the chimps feverishly typing away, they would not be able to produce Shakespeare’s entire works (coming in at over 850,000 words) before the universe ends. They would, however, be able to type “I chimp, therefore I am”.
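A quick calculation along the same lines gives a feel for the “bananas” figure. The sketch below uses the assumptions stated above, namely 30 equally likely keys, one keystroke per second and a working lifetime of roughly 30 years, and lands at a few per cent, consistent with the probability quoted.

```python
# Back-of-envelope check of the ~5% "bananas" figure, using the article's
# assumptions: 30 equally likely keys, 1 keystroke per second and a monkey
# lifetime of roughly 30 years. The exact lifetime used in the study may
# differ slightly, so this is an order-of-magnitude sketch.
keys = 30
word_length = len("bananas")            # 7 characters
p_per_attempt = (1 / keys) ** word_length

seconds_per_year = 365.25 * 24 * 3600
attempts = 30 * seconds_per_year        # keystrokes in ~30 years

p_at_least_once = 1 - (1 - p_per_attempt) ** attempts
print(f"P(types 'bananas' in a lifetime) ~ {p_at_least_once:.1%}")
```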
“It is not plausible that, even with improved typing speeds or an increase in chimpanzee populations, monkey labour will ever be a viable tool for developing non-trivial written works,” the authors conclude, adding that while the infinite monkeys theorem is true, it is also “somewhat misleading”, or rather it’s “not to be” in reality.
Molecules known as ligands attach more densely to flatter, platelet-shaped semiconductor nanocrystals than they do to spherical ones – a counterintuitive result that could lead to improvements in LEDs and solar cells as well as applications in biomedicine. While spherical nanoparticles are more curved than platelets, and were therefore expected to have the highest density of ligands on their surfaces, Guohua Jia and colleagues at Australia’s Curtin University say they observed the exact opposite.
“We found that the density of a commonly employed ligand, oleylamine (OLA), on the surface of zinc sulphide (ZnS) nanoparticles is highest for nanoplatelets, followed by nanorods and finally nanospheres,” Jia says.
Colloidal semiconductor nanocrystals show promise for a host of technologies, including field-effect transistors, chemical catalysis and fluorescent biomedical imaging as well as LEDs and photovoltaic cells. Because nanocrystals have a large surface area relative to their volume, their surfaces play an important role in many physical and chemical processes.
Notably, these surfaces can be modified and functionalized with ligands, which are typically smaller molecules such as long-chain amines, thiols, phosphines and phosphonates. The presence of these ligands changes the nanocrystals’ behaviour and properties. For example, they can make the nanocrystals hydrophilic or hydrophobic, and they can change the speed at which charge carriers travel through them. This flexibility allows nanocrystals to be designed and engineered for specific catalytic, optoelectronic or biomedical applications.
In their experiments, Jia and colleagues measured the density of OLA ligands on ZnS nanocrystals using three techniques: thermogravimetric analysis–differential scanning calorimetry; ¹H nuclear magnetic resonance spectroscopy; and inductively coupled plasma optical emission spectrometry. They combined these measurements with semi-empirical molecular dynamics simulations.
The experiments, which are detailed in the Journal of the American Chemical Society, revealed that ZnS nanoplatelets with flat basal planes and uniform surfaces allow more ligands to attach tightly to them. This is because the ligands can stack in a parallel fashion on the nanoplatelets, whereas such tight stacking is more difficult on ZnS nanodots and nanorods due to staggered atomic arrangements and multiple atomic steps on their surfaces, Jia tells Physics World. “This results in a lower ligand density than on nanoplatelets,” he says.
The Curtin researchers now plan to study how the differently-shaped nanocrystals – spherical dots, rods and platelets – enter biological cells. This study will be important for improving the efficacy of targeted drug delivery.
Two independent studies suggest that the brown dwarf Gliese 229 B is not a single object, but rather a pair of brown dwarfs. The two teams reached this conclusion in different ways, with one using a combination of instruments at the European Southern Observatory’s Very Large Telescope (VLT) in Chile, and the other taking advantage of the extreme resolution of the infrared spectra measured by the Keck Observatory in Hawaii.
With masses between those of gas-giant planets and stars, brown dwarfs are too small to reach the extreme temperatures and pressures required to fuse hydrogen in their cores. Instead, a brown dwarf glows as it radiates heat accumulated during the gravitational collapse of its formation. While brown dwarfs are much dimmer than stars, their brightness increases with mass – much like stars.
In 1994, the first brown dwarf ever to be confirmed was spotted in orbit around a red dwarf star. Dubbed Gliese 229 B, the brown dwarf has a methane-rich atmosphere remarkably similar to Jupiter’s – and this was the first planet-like atmosphere observed outside the solar system. The discovery was especially important since it would help astronomers to gain deeper insights into the formation and evolution of massive exoplanets.
Decades-long mystery
Since the discovery, extensive astrometry and radial velocity measurements have tracked Gliese 229 B’s gravitational influence on its host star – allowing astronomers to constrain its mass to 71 Jupiter masses. But this mass seemed too high, and it sparked a decades-long astronomical mystery.
“This value didn’t make any sense, since a brown dwarf of that mass would be much brighter than Gliese 229 B. Therefore, astronomers got worried that our models of stars and brown dwarfs might be missing something big,” explains Jerry Xuan at the California Institute of Technology (Caltech), who led the international collaboration responsible for one of the studies. Xuan’s team also included Rebecca Oppenheimer – who was part of the team that first discovered Gliese 229 B as a PhD student at Caltech.
Xuan’s team investigated the mass–brightness mystery using separate measurements from two cutting-edge instruments at the VLT: CRIRES+, which is a high-resolution infrared spectrograph and the GRAVITY interferometer.
“CRIRES+ disentangles light from two objects by dispersing it at high spectral resolution, whereas GRAVITY combines light from four different eight metre telescopes to see much finer spatial details than previous instruments can resolve,” Xuan explains. “GRAVITY interferes light from all four of these telescopes to enhance the spatial resolution.”
Time-varying shifts
Meanwhile, a team of US astronomers led by Samuel Whitebrook at the University of California, Santa Barbara (UCSB), studied Gliese 229 B using the Near-Infrared Spectrograph (NIRSPEC) at the Keck Observatory in Hawaii. The extreme resolution of this instrument allowed them to measure time-varying shifts in the brown dwarf’s spectrum, which could hint at an as-yet unforeseen gravitational influence on its orbit.
Within GRAVITY’s combined observations, Xuan’s team discovered that Gliese 229 B was not a single object, but a pair of brown dwarfs that are separated by just 16 Earth–Moon distances and orbit each other every 12 days.
And after fitting the CRIRES+ data to existing brown dwarf models, they detected features within Gliese 229 B’s spectrum that clearly indicated the presence of two different atmospheres.
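The separation and 12-day period quoted above are enough for a rough, independent sanity check on the combined mass via Kepler’s third law. The sketch below assumes a circular orbit and round-number constants; it is an order-of-magnitude estimate rather than the collaboration’s published fit, and it lands within a few Jupiter masses of the 71 quoted earlier.

```python
# Rough Kepler's-third-law check: do a separation of 16 Earth-Moon distances
# and a 12-day period imply a combined mass of roughly 70 Jupiter masses?
# Circular orbit assumed; an order-of-magnitude sketch, not the published fit.
import math

G = 6.674e-11             # gravitational constant, m^3 kg^-1 s^-2
earth_moon = 3.844e8      # mean Earth-Moon distance, m
a = 16 * earth_moon       # orbital separation, m
P = 12 * 86400            # orbital period, s

total_mass = 4 * math.pi**2 * a**3 / (G * P**2)   # kg
jupiter = 1.898e27                                # Jupiter mass, kg
print(f"Combined mass ~ {total_mass / jupiter:.0f} Jupiter masses")
```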
Frequency shifts
Whitebrook’s team came to a very similar conclusion. Measuring the brown dwarf’s infrared spectrum at different epochs, they identified frequency shifts which had not shown up in previous measurements. Again, these discrepancies clearly hinted at the presence of a hidden binary companion to Gliese 229B.
The two objects comprising the binary have been named Gliese 229Ba and Gliese 229Bb. Crucially, both of these bodies would be significantly dimmer when compared to a brown dwarf of their combined mass. If the teams’ conclusions are correct, this could finally explain why Gliese 229B is so massive, despite its lacklustre brightness.
The findings also suggest that Gliese 229 B may be just the first such brown dwarf binary to be discovered. Based on their results, Xuan’s team believe that tight binaries of brown dwarfs – and potentially even of giant planets like Jupiter – are likely to exist around other stars. These would provide intriguing targets for future observations.
“Finally, our findings also show how complex and messy the star formation process is,” Xuan says. “We should always be open to surprises, after all, the solar system is only one system in billions of stellar systems in the Milky Way galaxy.”
Every PhD student has been warned at least once that doing a PhD is stressful, and that writing a thesis can make you thoroughly fed up, even if you’re working on a topic you’re passionate about.
When I was coming to the end of my PhD, this thought began to haunt me. I was enjoying my research on the interaction between light and plasmonic metamaterials, but I worried that the stress of writing my thesis would spoil it for me. Perhaps guided by this fear, I started logging my writing activity in a spreadsheet. I recorded how many hours per day I spent writing and how many pages and figures I had completed at the end of each day.
The immediate benefit was that the spreadsheet granted me a quick answer when, once a week, my supervisor asked me the deeply feared question: “So, how many pages?” Probably to his great surprise, my first answer was “Nine cups of espresso.”
The idea of logging my writing activity probably came from my background as an experimental physicist, but the use of espresso cups as a unit goes back to my roots in Naples, Italy. There, we have a relationship with coffee that borders on religious. And so, in a difficult time, I turned to the divine and found my strength in the consumption of coffee.
Scientific method When he was writing his PhD thesis, Vittorio Aita logged his progress in hours, pages and cups of espresso. (Courtesy: Vittorio Aita)
As well as tracking my writing, I also recorded the number of cups of espresso I drank each day. The data I gathered, which is summarized in the above graph, turned out to be quite insightful. Let’s get scientific:
I began writing my thesis on 27 April 2023. As shown by the spacing between entries in the following days, I started at a slow pace, dedicating myself to writing for only two days a week and consuming an average of three units of coffee per day. I should add that it was quite easy to “write” 16 pages on the first day because at the start of the process, you get a lot of pages free. Don’t underestimate the joy of realizing you’ve written 16 pages at once, even if those are just the table of contents and other placeholders.
In the second half of May, there was a sudden, two-unit increase in daily coffee consumption, with a corresponding increase in the number of pages written. Clearly by the sixth entry of my log, I was starting to feel like I wasn’t writing enough. This called for more coffee, and my productivity consequently peaked at seven pages in one day. By the end of May, I had already written almost 80 pages.
Readers with an eye for detail will also notice that on the second-to-last day of May, coffee consumption is not expressed as an integer. To explain this, I must refer again to my Italian background. Although I chose to define the unit of coffee by volume – a unit of espresso is the amount obtained from a reusable capsule – the half-integer value reflects the importance of the quality of the grind. I had been offered a filtered coffee that my espresso-based cultural heritage could not consider worth a whole unit. Apologies to filter coffee drinkers.
From looking at the graph entries between the end of May and the middle of August, you would be forgiven for thinking that I took a holiday, despite my looming deadline. You would however be wrong. My summer break from the thesis was spent working on a paper.
However, in the last months of work, my slow-paced rhythm was replaced by a full-time commitment to my thesis. Days of intense writing (and figure-making!) were interspersed with final efforts to gather new data in the lab.
In October some photons from the end of the tunnel started to be detectable, but at this point I unfortunately caught COVID-19. As you can tell from the graph, in the last weeks of writing I worked overtime to get back on track. This necessitated a sudden increase in coffee units: having one more unit of coffee each day got me through a week of very long working days, peaking at a single day of 16 hours of work and 6 cups of espresso.
I finally submitted my thesis on 20 December, and I did it with one of the most important people in my life at my side: my grandma. I clicked “send” and hugged her for as long as we both could breathe. I felt suddenly lighter and I was filled with a deep feeling of fulfilment. I had totalled 304 hours of writing, 199 pages and an impressive 180 cups of espresso.
With hindsight, this experience taught me that the silly and funny task of logging how much coffee I drank was in fact a powerful tool that stopped me from getting fed up with writing.
More often than not, I would observe the log after a day of what felt like slow progress and realize that I had achieved more than I thought. On other days, when I was disappointed with the number of pages I had written (once even logging a negative number), the amount of coffee I had consumed would remind me of how challenging they had been to complete.
Doing a PhD can be an emotional experience, particularly when writing up the thesis: the self-realization, the pride, the constant need to improve your work, and the desire to convey the spark and pull of curiosity that first motivated you. This must all be done in a way that is both enjoyable to read and sufficiently technical.
All of this can get frustrating, but I hope sharing this will help future students embrace the road to achieving a PhD. Don’t take yourself too seriously and keep looking for the fun in what you do.
Physicists and others with STEM backgrounds are sought after in industry for their analytical skills. However, traditional training in STEM subjects is often lacking when it comes to nurturing the soft skills that are needed to succeed in managerial and leadership positions.
Our guest in this podcast is Peter Hirst, who is Senior Associate Dean, Executive Education at the MIT Sloan School of Management. He explains how MIT Sloan works with executives to ensure that they efficiently acquire the skills and knowledge needed to become effective leaders.
This podcast is sponsored by the MIT Sloan School of Management
New field experiments carried out by physicists in California’s Sierra Nevada mountains suggest that intermittent bursts of embers play an unexpectedly large role in the spread of wildfires, calling into question some aspects of previous fire models. While this is not the first study to highlight the importance of embers, it does indicate that standard modelling tools used to predict wildfire spread may need to be modified to account for these rare but high-impact events.
Embers form during a wildfire due to a combination of heat, wind and flames. Once lofted into the air, they can travel long distances and may trigger new “spot fires” when they land. Understanding ember behaviour is therefore important for predicting how a wildfire will spread and helping emergency services limit infrastructure damage and prevent loss of life.
Watching it burn
In their field experiments, Tirtha Banerjee and colleagues at the University of California, Irvine built a “pile fire” – essentially a bonfire fuelled by a representative mixture of needles, branches, pinecones and pieces of wood from ponderosa pine and Douglas fir trees – in the foothills of the Sierra Nevada mountains. A high-speed camera recording at 120 frames per second captured the fire’s behaviour for 20 minutes, and the researchers placed aluminium baking trays around it to collect the embers it ejected.
After they extinguished the pile fire, the researchers brought the ember samples back to the laboratory and measured their size, shape and density. Footage from the camera enabled them to estimate the fire’s intensity based on its height. They also used a technique called particle tracking velocimetry to follow firebrands and calculate their trajectories, velocities and accelerations.
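In its simplest form, particle tracking velocimetry links each detected particle in one frame to its nearest neighbour in the next frame and differences the positions to estimate a velocity. The snippet below is a bare-bones illustration of that idea on made-up coordinates; the team’s actual pipeline is considerably more sophisticated.

```python
# Bare-bones particle tracking velocimetry sketch: link each ember detection
# to its nearest neighbour in the next frame, then estimate velocity from the
# displacement. Illustrative only -- not the authors' actual analysis pipeline.
import numpy as np

fps = 120.0                      # camera frame rate, frames per second
dt = 1.0 / fps

# Made-up (x, y) positions of detected embers in two consecutive frames,
# in metres within the image plane.
frame_a = np.array([[0.10, 0.20], [0.50, 0.80], [1.20, 0.40]])
frame_b = np.array([[0.11, 0.22], [0.52, 0.83], [1.25, 0.38]])

# Nearest-neighbour matching: for each ember in frame A, find the closest
# detection in frame B.
dists = np.linalg.norm(frame_a[:, None, :] - frame_b[None, :, :], axis=2)
matches = dists.argmin(axis=1)

velocities = (frame_b[matches] - frame_a) / dt   # m/s per ember
for i, v in enumerate(velocities):
    print(f"ember {i}: velocity = ({v[0]:.2f}, {v[1]:.2f}) m/s")
```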
Highly intermittent ember generation
Based on the footage, the team concluded that ember generation is highly intermittent, with occasional bursts containing orders of magnitude more embers than were ejected at baseline. Existing models do not capture such behaviour well, says Alec Petersen, an experimental fluid dynamicist at UC Irvine and lead author of a Physics of Fluids paper on the experiment. In particular, he explains that models with a low computational cost often make simplifications in characterizing embers, especially with regards to fire plumes and ember shapes. This means that while they can predict how far an average firebrand with a certain size and shape will travel, the accuracy of those predictions is poor.
“Although we care about the average behaviour, we also want to know more about outliers,” he says. “It only takes a single ember to ignite a spot fire.”
As an example of such an outlier, Petersen notes that sometimes a strong updraft from a fire plume coincides with the fire emitting a large number of embers. Similar phenomena occur in many types of turbulent flows, including atmospheric winds as well as buoyant fire plumes, and they are characterized by statistically infrequent but extreme fluctuations in velocity. While these fluctuations are rare, they could partially explain why the team observed large (>1 mm) firebrands travelling further than models predict, he tells Physics World.
This is important, Petersen adds, because large embers are precisely the ones with enough thermal energy to start spot fires. “Given enough chances, even statistically unlikely events can become probable, and we need to take such events into account,” he says.
New models, fresh measurements
The researchers now hope to reformulate operational models to do just this, but they acknowledge that this will be challenging. “Predicting spot fire risk is difficult and we’re only just scratching the surface of what needs to be included for accurate and useful predictions that can help first responders,” Petersen says.
They also plan to do more experiments in conjunction with a consortium of fire researchers that Banerjee set up. Beginning in November, when temperatures in California are cooler and the wildfire risk is lower, members of the new iFirenet consortium plan to collaborate on a large-scale field campaign at the UC Berkeley Research Forests. “We’ll have tonnes of research groups out there, measuring all sorts of parameters for our various projects,” Petersen says. “We’ll be trying to refine our firebrand tracking experiments too, using multiple cameras to track them in 3D, hopefully supplemented with a thermal camera to measure their temperatures.
“My background is in measuring and describing the complex dynamics of particles carried by turbulent flows,” Petersen continues. “I don’t have the same deep expertise studying fires that I do in experimental fluid dynamics, so it’s always a challenge to learn the best practices of a new field and to familiarize yourself with the great research folks have done in the past and are doing now. But that’s what makes studying fluid dynamics so satisfying – it touches so many corners of our society and world, there’s always something new to learn.”