
A review of AAPM task groups 218 and 219

Want to learn more on this subject?

Pending CAMPEP CE credit.

The American Association of Physicists in Medicine (AAPM) has recently released new recommendations on patient-specific quality assurance (QA). Task group (TG) 219, published in 2021, covers independent calculation-based dose/monitor-unit verification for intensity-modulated radiation therapy (IMRT); it builds on TG 218, published in 2018, which set out tolerance limits and methodologies for measurement-based IMRT verification QA.

Benefits of attending:

  • Understand the limitations for each of the point, planar and volumetric secondary check processes and algorithms (TG 219).
  • Understand the recommendations from TG 218 with regard to IMRT QA and how pass/fail rates correspond to the delivery of the plans.
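
TG 218's tolerance limits are commonly expressed as gamma pass rates (for example, at least 95% of points passing 3%/2 mm criteria). As a rough illustration of how such a pass rate is computed, here is a simplified 1D sketch; clinical tools use full 2D/3D searches with interpolation, and this is not the algorithm of any particular product:

```python
import numpy as np

def gamma_pass_rate(ref_dose, eval_dose, positions, dose_crit=0.03, dta_crit=2.0):
    """Simplified 1D global gamma analysis (illustrative only).
    dose_crit: dose-difference criterion as a fraction of the global max;
    dta_crit: distance-to-agreement criterion in mm."""
    norm = ref_dose.max()           # global normalization
    passed = []
    for x_e, d_e in zip(positions, eval_dose):
        # gamma at this point: minimum combined dose/distance metric
        # over all reference points
        g = np.min(np.sqrt(((ref_dose - d_e) / (dose_crit * norm)) ** 2
                           + ((positions - x_e) / dta_crit) ** 2))
        passed.append(g <= 1.0)
    return 100.0 * np.mean(passed)
```

An identical measured and planned profile passes 100% of points; a uniform 50% dose error fails points near the peak, pulling the rate below the action limit.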


Vimal Desai, PhD, clinical instructor, Thomas Jefferson University Hospitals. Vimal’s introduction to the field of medical physics was through patient-specific QA projects utilizing log file analysis. This led to his expanded interest in radiation dosimetry, quality assurance, and radiation measurements/simulations. His doctoral thesis focused on linear accelerators and investigated calibrating radiation detectors closer to clinical delivery conditions. This work has led to collaborations investigating the efficacy of plan complexity metrics, and he maintains interests and ongoing projects in various topics related to patient-specific QA.

Carlos Bohorquez, MS, DABR, is the product manager for RadCalc at LifeLine Software Inc, part of the LAP Group. An experienced board-certified clinical physicist with a proven history of working in the clinic and medical device industry, Carlos’s passion for clinical quality assurance is demonstrated in the research and development of RadCalc into the future.

Personalized brain stimulation could treat untreatable depression

Depression is a common disorder, affecting an estimated 5% of adults worldwide, and a leading cause of disability. Although therapy and medications are effective in most patients, there’s a substantial minority who remain resistant to all available treatments.

“It was these patients who really drove us to do this research,” explains Katherine Scangos from the University of California, San Francisco (UCSF). Scangos and colleagues are developing a personalized treatment for severe depression based on deep-brain stimulation (DBS), in which implanted electrodes deliver electrical impulses to targeted structures in the brain.

The researchers have now demonstrated the feasibility of their new approach in a 36-year-old woman with severe, treatment-resistant depression, reporting their findings in Nature Medicine. The patient, Sarah, had previously tried all possible treatment options, including multiple antidepressant combinations and electroconvulsive therapy, with no success at lifting her depression, which restricted her daily life and led to unremitting suicidal impulses.

“When I first received the stimulation, I felt the most intensely joyous sensation and my depression was a distant nightmare for a moment,” said Sarah, speaking at a press briefing. “Months later, when the researchers implanted the chronic device and turned it on for the first time, my life took an immediate upward turn. Now a year into therapy, the device has kept my depression at bay, allowing me to return to my best self and rebuild a life worth living.”

Therapy on demand

The idea of personalized DBS arose some seven years ago in the laboratory of UCSF neurosurgeon Edward Chang, when he observed that stimulating brain areas in epilepsy patients also alleviated emotional symptoms such as anxiety and depression.

While DBS has shown some promise for treating depression, previous clinical trials have revealed highly variable responses between subjects. These inconsistent findings may be due to the use of open-loop DBS, which delivers constant electrical stimulation to a single brain structure. Recent work, however, showed that the effects of DBS are dependent on the emotional state of the patient. In addition, different neural circuits underlie different subsets of depression symptoms and may vary between different people.

“So we set out to see whether we could develop a personalized DBS strategy,” Scangos explains.

Sarah’s treatment was a two-stage process. To identify her unique depression circuits, the researchers first placed 10 temporary electrodes into various regions of her brain. For 10 days, they continuously recorded neural activity while Sarah rated her symptom severity. This brain mapping approach enabled the team to identify a personalized brain activity biomarker: high activity in the amygdala correlated with her most severe depressive symptoms.

The researchers also used the temporary electrodes to deliver small stimulation pulses to each brain region. They identified an area of the brain, the ventral capsule/ventral striatum (VC/VS), in which electrical stimulation consistently eliminated feelings of depression.

Armed with this information, the researchers then implanted a commercial DBS device (the NeuroPace RNS System) into Sarah’s brain. One of the device’s electrode leads was placed in the amygdala and the other in the VC/VS. They found that a 6 s stimulation at 1 mA was clinically effective and that Sarah could not feel the electrical stimulation at this level.

In addition to identifying a personalized neural biomarker, the other key breakthrough in this work was the use of the biomarker to perform closed-loop therapy, in which stimulation is only delivered when needed. To achieve this, the team programmed the implanted device to continually monitor Sarah’s amygdala for abnormal activity. When this activity was detected (representing a state of severe depression), it automatically triggered a 6 s stimulation pulse to the VC/VS.
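
The control logic described above can be sketched in a few lines. This is a hypothetical illustration of the closed-loop idea only; the actual device firmware and detection algorithm are proprietary, and the threshold, sampling and capping details here are assumptions:

```python
def closed_loop_controller(activity_stream, threshold, max_per_day=300):
    """Sketch of biomarker-triggered closed-loop stimulation.
    activity_stream yields (timestamp_seconds, amygdala_power) samples;
    threshold is the patient-specific biomarker level. Returns the
    timestamps at which a 6 s stimulation pulse would be triggered."""
    pulses = []
    count, day = 0, None
    for t, power in activity_stream:
        current_day = int(t // 86400)
        if current_day != day:              # new day: reset the daily cap
            day, count = current_day, 0
        if power > threshold and count < max_per_day:
            pulses.append(t)                # deliver a 6 s, 1 mA pulse
            count += 1
    return pulses
```

The key design point is that stimulation is delivered only when the biomarker is detected, rather than continuously, which is what extends battery life and limits night-time therapy.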

The proof-of-concept study proved a notable success. “When we turned this treatment on, our patient’s depression symptoms dissolved and in a remarkably short time, she went into remission,” said Scangos.

Over two months, the device delivered an average of 468 stimulation pulses throughout the day, with few stimulations at night. The team capped the number of stimulations at 300 per day to minimize sleep disturbance from evening therapy.

Ongoing research

Looking at the longer-term implications of this new treatment, Scangos notes that it is too early to tell how long the device will need to remain in a patient, or whether it’s possible that it may somehow help the brain rewire its circuitry. However, one advantage of the closed-loop approach is that it does not require continuous stimulation, giving a battery life of over 10 years and enabling the implanted device to deliver long-term stimulation if needed.

The researchers also point out that treatments such as cognitive behavioural therapy are almost impossible when a patient is suffering severe depression. If the implanted device can treat the most extreme symptoms, then patients may be more able to employ such therapies. Sarah notes that once the DBS treatment had begun: “I was finally able to use the therapy skills I’d learned and never been able to apply.”

The researchers emphasize that this work is at a very early stage. They cannot determine whether the particular neural biomarker and depression circuit identified in this single-participant study would be present in all individuals. They have now enrolled two other patients in the trial and hope to add nine more.

“These results provide hope that much needed personalized biomarker-based treatment for psychiatric disorders is possible,” says Scangos.

Carbon fibres have directional electrical properties

The electrical properties of a carbon fibre are very different when measured across its width or along its length, according to a new study by Satoshi Matsuo and Nancy Sottos at the University of Illinois at Urbana-Champaign in the US. Using a technique designed to probe the electrical resistivity of 2D materials, the duo has shown for the first time that fibres are significantly less conductive in the transverse direction.

When carbon fibres are woven into interlocking sheets, the composite materials they produce can display a unique variety of electrical properties, with applications including electromagnetic shielding; sensing for structural damage in buildings; and protection against lightning strikes. To tailor these composites for specific uses, it is important to have accurate models of their electrical behaviours. However, the complex structure of these materials makes this extremely challenging.

Carbon fibres measure just a few microns in diameter and comprise bundles of smaller carbon filaments, which are themselves composed of crumpled sheets of carbon atoms. Within these sheets, strong covalent bonds between the atoms are aligned parallel to the axis of the fibre. On longer length scales, filaments are bonded together by far weaker van der Waals forces.

Difficult to characterize

Such hierarchical structures are very difficult to characterize reliably so Matsuo and Sottos turned to the “van der Pauw” method. This technique is commonly used to measure the resistivities of 2D materials and involves placing two separate pairs of electrodes around the perimeter of a sample. In their experiment, the duo connected the electrodes to 2D slices of carbon fibre, which they cut using a focused ion beam.

Across the diameter of the fibre, the duo measured an electrical resistivity roughly six times greater than that along its length. This means that the material is a poorer electrical conductor in the transverse direction than it is in the longitudinal direction – something that can have important implications for how electrical currents move through a carbon fibre component.
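
For a sample of uniform thickness, the van der Pauw method relates the two four-point resistances to the sheet resistance through exp(-πR_A/R_s) + exp(-πR_B/R_s) = 1, which must be solved numerically. The sketch below, a generic textbook implementation rather than the authors' analysis code, solves it by bisection:

```python
import math

def van_der_pauw_resistivity(r_a, r_b, thickness):
    """Solve exp(-pi*R_A/R_s) + exp(-pi*R_B/R_s) = 1 for the sheet
    resistance R_s by bisection, then return resistivity rho = R_s * d.
    Units: resistances in ohms, thickness in metres -> rho in ohm-metres."""
    f = lambda rs: (math.exp(-math.pi * r_a / rs)
                    + math.exp(-math.pi * r_b / rs) - 1.0)
    # f increases monotonically with rs, so bracket the root widely
    lo, hi = 1e-6 * (r_a + r_b), 1e6 * (r_a + r_b)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) * thickness
```

In the symmetric case R_A = R_B = R the solution reduces to the familiar R_s = πR/ln 2, which gives a quick sanity check on the solver.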

Their results are a step forward in efforts to better understand the electrical properties of carbon fibres – but researchers still have much to learn about the characteristics of far more complex composite materials.

Matsuo and Sottos are now making further progress towards this goal, by measuring the electrical contact resistance between two separate carbon fibres. This value is directly connected with the transverse resistivities of the fibres, as well as the area of contact between them and the angle at which they cross each other. In the future, the researchers also hope to assess how electrical properties vary under different environmental conditions, such as temperature. In addition, they hope to carry out similar experiments on fibres made from other conductive materials, such as polymers or metals.

The research is described in Journal of Applied Physics.

High-resolution independent VMAT/IMRT patient QA: Clinical implementation and results

Want to learn more on this subject?

This presentation has been accredited 1 MPCEC hour by CAMPEP and submitted for approval by EBAMP as a CPD event for Medical Physicists at EQF Level 7.

Learn about the clinical implementation and routine use of the MatriXX Resolution detector array. Discover the latest features in the myQA software that allow for accurate measurement of volumetric modulated plans, static IMRT, or any other dose delivery method used by your clinic. Integration of the wireless Gantry Sensor+ with the device and angular correction will also be discussed.

The guest speaker Dr Raj Mitra from Ochsner Health Systems, Louisiana, USA, will walk participants through the initial setup and calibration of the device, testing and validation and finally, present clinical test results for various IMRT/VMAT cases. The presentation will conclude with expert answers to your live questions.

Benefits of attending your webinar include:

  • Learn about product improvements with the new MatriXX device
  • Discover new features of myQA software
  • Get introduced to the new wireless Gantry Angle Sensor +
  • Gain knowledge about the Look Up Table (LUT) integration for different beam energies (standard and FFF beams)
  • Learn about LUT and Angular correction validation
  • Understand calibration for different energy modalities
  • Discover clinical use and benefits of MatriXX Resolution


Dr Raj Mitra is the lead medical physicist for Ochsner Health System, New Orleans, Louisiana, USA. He has more than 25 years of experience in clinical radiation oncology physics and is board certified by the American Board of Radiology and American Board of Medical Physics. In addition to his work at Ochsner Health System, Raj Mitra has also had numerous research articles published.

Distillation method strengthens quantum entanglement in a single pair of photons

Quantum entanglement is a valuable resource, enabling spy-proof communications and allowing quantum algorithms to be faster than classical ones. But like other quantum phenomena, entanglement is also extremely delicate and sensitive to environmental noise. Because many quantum communication protocols require high levels of entanglement to operate properly, preserving that entanglement is crucial.

There is a solution, but it comes at a hefty price. By sacrificing some poorly entangled quantum objects, physicists can create a better-entangled pair out of the objects that remain – a little like reducing a weak broth into a hearty soup by boiling off the excess water. This method of increasing entanglement in quantum objects is known as entanglement distillation and was first described theoretically in the late 1990s. Since then, it has been demonstrated in all kinds of quantum systems, from superconducting circuits to photons. Now, however, researchers at the Institute of Quantum Optics and Quantum Information (IQOQI) at Vienna have demonstrated entanglement distillation using only a single pair of photons. By using the various quantum properties embedded in this photon pair, these researchers can generate and distribute entanglement more quickly, more easily and with greater protection than ever before.

Boil down your qubits

Entanglement allows pairs of quantum objects to communicate with each other, regardless of how separated they are in space. This property makes entangled pairs of photons extremely useful, as it can allow two parties to whisper secrets to one another, knowing that no one else can eavesdrop without disturbing their delicate quantum system. However, entanglement can be degraded by the environment over time, making it harder and harder for the entangled photons to “hear” each other clearly.


Entanglement distillation reverses this noise, reviving the entanglement and giving the pair of photons a new life via a protocol involving quantum logic operations known as controlled-not (CNOT) gates. Traditionally, this protocol is pretty wasteful: each distillation step sacrifices a good pair of photons, and even worse, there is no fail-proof way to guarantee the operation will succeed. The IQOQI researchers, however, found a better way. “You can perform these controlled-not gates not just between two photons, but between two properties of the same photons,” explains Sebastian Ecker, a PhD student at the IQOQI and first author of a report published in Physical Review Letters.

Leveraging degrees of freedom

Photons have many uniquely quantum properties, such as their polarization state, energy level and spatial mode. Collectively, physicists refer to these properties as “degrees of freedom” and all of them have been used independently to demonstrate entanglement. However, the IQOQI study is the first experiment to demonstrate entanglement distillation with different degrees of freedom.

In their experiment, the researchers generate entangled photon pairs using a nonlinear crystal, then send each photon in the pair to a different optical table. Each table holds a labyrinth of optical devices that perform the entanglement distillation step and interpret the results. At the core of this labyrinth is an unassuming optical device called a polarizing beamsplitter. This small glass cube transforms the state of the photons, changing the quantum state of a photon’s polarization only if certain conditions are met in its energy-time degree of freedom. That action is exactly a CNOT logic gate, one of the basic logical building blocks of quantum computing. After this distillation process is complete, the researchers measure the properties of the photon pair and determine how much entanglement was recovered.
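
Abstracted away from the optics, the gate's logic is easy to state: in the two-qubit computational basis, a CNOT flips the target qubit only when the control qubit is set. The snippet below shows this with the standard 4×4 matrix; it illustrates the gate in the abstract, not the beamsplitter implementation used in the experiment:

```python
import numpy as np

# CNOT in the basis |control target> = |00>, |01>, |10>, |11>:
# it flips the target qubit only when the control qubit is 1
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state_10 = np.array([0, 0, 1, 0])   # |10>: control set, target clear
state_00 = np.array([1, 0, 0, 0])   # |00>: both clear

flipped = CNOT @ state_10     # control = 1, so the target flips: |11>
unchanged = CNOT @ state_00   # control = 0, so the state is untouched
```

In the experiment, the polarization degree of freedom plays the role of the target and the energy-time degree of freedom that of the control, which is why no second photon pair has to be sacrificed.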

The researchers also verified that their distillation process is robust by intentionally inserting noise into the environment. Because that noise is very carefully controlled, they can quantify how well their new procedure works in noisy environments, showing that their method for entanglement distillation is faster than traditional two-pair methods by a factor of 100 million. “In our experience, these degrees of freedom are robust enough to revive entanglement after passing through long optical fibres or free-space links,” Ecker says.

A perfectly entangled world

Because polarization and energy-time are both frequently used in other aspects of quantum communication, the researchers are confident that their scheme will soon find many other applications. Having considered how the method might improve quantum key extraction, they have now set their sights even higher. “Wouldn’t it be nice if you could use the high dimensional entanglement to make your qubit entanglement noiseless? This would be really cool,” Ecker says.

Life beyond the Nobel: Andrea Ghez eyes up new research directions

Physicists around the world are gearing up for tomorrow’s big reveal of who has won the 2021 Nobel Prize for Physics. Part of the prize’s appeal is that no-one – apart from the members of the Nobel Committee for Physics – currently knows who this year’s winners will be. Even the Royal Swedish Academy of Sciences only grants final approval on the very morning the prize is announced, which sounds seat-of-the-pants, but that’s the way it is.

Once the winners are declared, however, their lives will change forever. To find out what impact the prize can have, I caught up with US astrophysicist Andrea Ghez, who shared one half of last year’s award with Reinhard Genzel for their work discovering a huge black hole lurking in the middle of the Milky Way (Roger Penrose bagged the other half for his theoretical studies of black holes and general relativity).

I have to work harder to maintain a balance between being a more public figure and carrying out research.

Andrea Ghez

Ghez, 56, is still as active as ever, but admits her working life has certainly changed over the last year. “I’m receiving a lot more requests from all over the place,” she says. “So I have to work harder to maintain a balance between being a more public figure and carrying out research, which continues to be my first priority.”

She was, for example, co-author of a paper earlier this year describing plans for an infrared spectrograph to be used on the upcoming Thirty Meter Telescope. Ghez also recently gave a keynote address to students graduating from the International Centre for Theoretical Physics in Trieste, Italy. She even revealed she still gives introductory lectures to undergraduate students to “shape the next generation’s ideas about who can be a scientist”, knowing how vital it is for Nobel laureates to act as role models.

However, Ghez has had plenty of opportunity to do things she wouldn’t have had the chance to tackle without a Nobel under her belt. “The most rewarding experience that I would not otherwise have had was speaking to the Hawaii county council at a meeting when they gave me a lovely signed certificate acknowledging my Nobel-prize work that was carried out in Hawaii,” she reveals.

“I was deeply honoured and thrilled to express my gratitude for having had the opportunity to work in Hawaii at Keck Observatory, the largest telescope in the world.”

But are there any new research directions Ghez wants to go into that would not have been possible before the prize?

“Yes!” she insists. “I’m really excited to take on a more ambitious and riskier research agenda that would not have been possible otherwise to explore how gravity works near supermassive black holes and how these exotic, but poorly understood, objects regulate the formation and evolution of galaxies.”


Physics World‘s Nobel prize coverage is supported by Oxford Instruments Nanoscience, a leading supplier of research tools for the development of quantum technologies, advanced materials and nanoscale devices. Visit nanoscience.oxinst.com to find out more.

Green jobs for physics graduates: decarbonizing energy sources

Ann Davies, chief operations officer, Lightsource BP

Taking advantage of new technologies will be essential if the world is to achieve its net-zero goals. Renewable energy sources are perhaps the most obvious of these, and the transition to them is already under way.

“I don’t think there’s ever been a more exciting time to join the energy sector,” says Ann Davies, who is chief operations officer at Lightsource BP, a firm that has been developing solar projects since 2010, and now employs nearly 600 people. She took up her current job after studying physics at the University of Oxford, UK, and gaining experience in several engineering roles after graduating.


Among the main reasons to choose a career in renewables, Davies cites the growth that the sector is experiencing due to the increased focus on climate change and demand for clean energy. “Along with the cost of solar and wind dropping significantly, that makes renewables a really sound economic proposition, which means that investment is at a record high,” she explains. “From a graduate perspective, that means there will be more and more opportunities as your career develops.”

Davies’ work involves leading teams of scientists and engineers working on the planning, implementation and operation of solar projects around the world. She notes that there are lots of problems for scientists to solve, from loading up the grid with renewables to managing intermittency issues.

When recruiting new scientists and engineers, Davies emphasizes the importance of technical grounding. “Physics teaches you how to break down complex problems into simple parts, and we need those skillsets,” she says. In the hiring process, she also values interpersonal skills. “It’s not one person who is going to solve this – it takes a team.”

Davies advises graduates who want to join these efforts to read widely and to connect with people in the industry. Since there are so many different areas of sustainability that physicists can contribute to, she believes it is important to find out what makes you tick as an individual, and how you want to apply your skills. For herself, she finds working in the energy sector rewarding, because it is so universal. “Energy touches everyone,” she says, “so being part of providing it in a responsible way really gets me going when I get up in the morning.”

Hari Chohan, nuclear radiation analyst, UK Atomic Energy Authority

While solar and wind energy might be the first clean-energy technologies that spring to mind, they are not the only low-carbon options. Another relevant area in which many physicists work is nuclear energy, both fission and fusion.


“I consider all nuclear energy to be green energy,” says Hari Chohan, who is a nuclear fusion radiation analyst at the UK Atomic Energy Authority (UKAEA). “My granddad was an engineer and worked on nuclear projects,” he adds, “so I’ve always been pro-nuclear energy.” Chohan developed a stronger interest in nuclear fusion while writing an article about it for one module of his physics degree at Imperial College London.

As a result of these two influences, Chohan decided to do a Master’s degree in physics and nuclear technology at the University of Birmingham, UK. He then did a nine-month internship at Fusion for Energy in Barcelona, a body that co-ordinates the EU’s contribution to ITER, which is the largest experimental fusion reactor under construction in the world.

During this internship, Chohan’s main area of work was neutronics, also known as neutron transport, which is the study of the motion of neutrons and how they interact with materials. This is not only important for developing appropriate shielding but also estimating the lifetimes of components. “Neutrons are produced in both fission and fusion,” he explains, “but they have a lot more energy in the case of fusion, so we need to ensure the materials and components we design can withstand them.”
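
The core idea of neutron transport can be illustrated with a toy Monte Carlo calculation: sample each neutron's free path from an exponential distribution and count how many cross a shield without interacting. This is a deliberately minimal sketch; production neutronics codes such as MCNP or OpenMC additionally track scattering, absorption and energy dependence:

```python
import math
import random

def transmitted_fraction(sigma_t, thickness, n=100_000, seed=1):
    """Toy Monte Carlo estimate of the uncollided fraction of neutrons
    penetrating a slab. sigma_t is the macroscopic total cross-section
    (1/cm), thickness is in cm. Free paths follow an exponential
    distribution with mean 1/sigma_t."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if -math.log(rng.random()) / sigma_t > thickness)
    return hits / n
```

For the uncollided flux, the analytic answer is exp(-sigma_t * thickness), so the estimate can be checked directly; the Monte Carlo machinery earns its keep once geometry and scattering make analytic solutions impossible.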

Chohan continues to work on fusion neutronics in his current role at UKAEA, which involves programming, running simulations, analysing results and writing them up in reports. “As well as working on major international projects, we have a couple of fusion test machines at UKAEA, the Mega Amp Spherical Tokamak Upgrade (MAST-U) and EUROfusion’s Joint European Torus, and we’re designing a prototype fusion energy plant, the Spherical Tokamak for Energy Production, at the moment, so there’s a lot of active research going on,” he says.

Chohan believes accurate public communication about nuclear energy is essential. “In fusion, unlike fission, there will be no high-level nuclear waste generated by the reaction itself, but there will still be a lower level of nuclear waste generated by the interaction of the neutrons with the reactor components,” he says. “All technologies have advantages and disadvantages. To move away from fossil fuels, we will need a mixture of different energy sources, but we can’t do it without nuclear. And the prospect of nuclear fusion is truly exciting.”

Rhiann Canavan, scientific project manager, Crossfield Fusion

While fusion is a long-term goal with huge clean-energy potential once it’s achieved, we don’t have to wait until then to get something positive from the research going into it. There are many by-products along the way that can be useful more immediately, as Rhiann Canavan, who works at UK-based start-up Crossfield Fusion, has discovered.


While studying at the University of Birmingham, UK, Canavan was inspired by a nuclear-physics professor to go into nuclear power. “I knew I wanted to go into a job where I was delivering something useful,” she says, “and nuclear power seemed like it had the potential to change the world.”

After graduating with an MSci in physics, Canavan studied for a PhD in experimental nuclear physics with the University of Surrey, UK, and the National Physical Laboratory. Her project focused on understanding fast neutron-induced fission reactions, which can be used to make nuclear waste decay faster.

After finishing her PhD, Canavan did a summer placement with Crossfield Fusion, which she found through the South East Physics network (SEPnet) – an association of nine university physics departments that supports students in south-east England. “When I read the mission of the company, I really wanted to get involved,” she says. “The end goal is fusion, but there are also short- and mid-term goals, such as using the technology to produce radioisotopes for medical scans.”

After completing her internship, Canavan joined the company in a permanent role as scientific project manager. Crossfield Fusion is a start-up with just five employees, so her tasks vary widely. She began by helping to build the research reactor, and she now plans and carries out experiments with it. “Some days are lab days when everything has to be spot-cleaned because we’re installing components,” she says. “Other days I’m analysing data, computer programming or group brainstorming what to try next.”

Canavan says she feels a lot of ownership of the work, having seen the progress from the very early days. “Your heart is really in it and you want it to succeed,” she says. Since deuterium – a key ingredient in fusion reactions – is highly abundant and can be extracted from any type of water, Canavan also points out how much more environmentally friendly it would be to fuel a fusion reactor than to burn fossil fuels. “Instead of digging up coal,” she says, “we could just use sea water.”

‘CatGym’ algorithm predicts better catalysts

Designing efficient new catalysts is no easy task. In catalysts that contain more than one element, for example, researchers not only need to take into account all the possible elemental combinations, they must also add a number of other variables, such as particle size, shape and surface structure, as well as the degree of alloying or phase segregation. This ultimately leads to an overwhelmingly large number of potential candidates.

To address this challenge, scientists employ computational design techniques that focus on screening material components and alloy composition to optimize a catalyst’s activity for a given reaction and so reduce the overall number of prospective structures that would need to be tested and then developed. Such techniques require combinatorial approaches coupled with theory calculations, which can be both time-consuming and complex.

The best surface atom configurations

A team led by Zachary Ulissi of Carnegie Mellon University has now taken a different approach by developing a deep reinforcement learning (DRL) programme, dubbed CatGym, that iteratively changes the positions of atoms on the surface of a catalyst to find the best configurations from a given starting configuration.

The researchers showcased their technique by predicting the surface reconstruction pathways of a ternary Ni3Pd3Au2(111) alloy catalyst. Their results show that the DRL programme can not only be used to explore more diverse surface compositions than conventional methods, but that it can also generate new pathways based on how energetically favourable they are.

The team also demonstrated that the kinetic pathways that lead to a stable surface composition (with a low minimum energy surface composition) and the associated transition state predicted by the DRL programme agree well with the minimum energy path predicted by traditional “nudged elastic band” calculations done “by hand”.

A lot of human input

There has been much excitement in recent years when it comes to using machine learning methods to accelerate catalysis simulations, says Ulissi. Such an approach reduces the computational cost of each step in the simulation, but the downside is that it requires a lot of human input to run the calculation. This is because scientists need to define what structure is used from the outset, what mechanisms should be investigated and if there is a better path to take to go from reaction A to reaction B. All of these questions can take a trained expert many days or weeks to answer.

“The new work is very exciting for us because it proposes using DRL methods to tackle these strategic questions,” Ulissi tells Physics World. “With our system, we can let the computer autonomously explore a number of different possible pathways.”

Representation and action space

DRL requires three things, he explains. “The first is a representation – that is, how do we show an atomic structure of a catalyst to the computer in a way that it understands? In our system we use a common representation from the literature. The second is an ‘action space’: what are we going to let the computer do? In our approach, it can move an atom, find an energy minimum, find a transition state or run a short dynamic simulation. Finally, how do we decide what action to take next? In our case, we tried many DRL schemes to answer this question.”
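
The representation/action/reward structure Ulissi describes is the standard reinforcement-learning environment interface. The toy sketch below mimics that shape with a deliberately trivial "surface" (a 1D atom ordering whose energy counts misordered neighbours); every name and detail here is hypothetical and does not reflect the published CatGym code:

```python
class CatalystEnvSketch:
    """Toy gym-style environment sketch. The real CatGym state is an
    atomic fingerprint and the actions include energy minimization and
    transition-state searches; here a 'move an atom' swap stands in."""

    def __init__(self, n_atoms=6, seed=0):
        import random
        self.rng = random.Random(seed)
        self.n = n_atoms

    def reset(self):
        # representation: the atom ordering stands in for a fingerprint
        self.surface = list(range(self.n))
        self.rng.shuffle(self.surface)
        return tuple(self.surface)

    def energy(self):
        # toy energy: each misordered neighbour pair costs one unit
        return sum(a > b for a, b in zip(self.surface, self.surface[1:]))

    def step(self, i):
        # action: swap atoms i and i+1 on the surface
        self.surface[i], self.surface[i + 1] = self.surface[i + 1], self.surface[i]
        reward = -self.energy()        # lower energy => higher reward
        done = self.energy() == 0      # reached a stable configuration
        return tuple(self.surface), reward, done, {}
```

A DRL agent trained against such an interface learns a policy mapping the fingerprint to the next action, which is exactly the "what action to take next" question the team explored.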

“One aspect that made this project really interesting was that the final goal was not clear,” explains Ulissi. “In a video game, for example, it is obvious what you want the DRL to do – maximize the final score. We thus spent a lot of time defining and identifying a goal the DRL would work well with.”
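The interplay between an action space, a reward and a policy can be illustrated with a toy example. The sketch below is entirely hypothetical – the action names merely mirror the operations Ulissi describes, and the rewards are made up – but it shows the classic “epsilon-greedy” trade-off between exploiting the best-known action and exploring alternatives:

```python
import random
from enum import Enum, auto

# A hypothetical action space mirroring the operations Ulissi describes.
class Action(Enum):
    MOVE_ATOM = auto()
    MINIMIZE = auto()               # relax to a local energy minimum
    FIND_TRANSITION_STATE = auto()
    SHORT_MD_RUN = auto()           # run a brief dynamics simulation

# Learned value of each action in a single (toy) state; a real agent would
# tie the reward to how low-energy the discovered surface composition is.
q_values = {a: 0.0 for a in Action}

def choose_action(q, epsilon=0.2):
    """Epsilon-greedy policy: usually exploit the best-known action,
    occasionally explore a random one."""
    if random.random() < epsilon:
        return random.choice(list(q))
    return max(q, key=q.get)

def update(q, action, reward, alpha=0.5):
    """Incremental value update for this one-step, bandit-style toy."""
    q[action] += alpha * (reward - q[action])

# Made-up rewards: pretend MINIMIZE tends to give the best outcomes.
random.seed(0)
for _ in range(500):
    a = choose_action(q_values)
    update(q_values, a, reward=1.0 if a is Action.MINIMIZE else 0.2)

best = max(q_values, key=q_values.get)
print(best)  # the agent learns to prefer Action.MINIMIZE
```

In the real system the state, reward and policy are far richer, but the same loop – pick an operation, observe its outcome, adjust the policy – is what lets the computer explore pathways autonomously.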

Double-checking results

Ulissi says he previously studied catalyst surface reconstruction mechanisms by hand, which can be very tedious. “A tool to automate and accelerate this process not only allows us to ask much more interesting questions, it can also be used to double-check the results obtained by human experts.”

The researchers, who report their work in Machine Learning: Science and Technology, are now using the method they have developed to predict how stable hypothetical catalyst surfaces are. “We also hope to apply our approach to better understand the mechanisms at play on these surfaces,” adds Ulissi. “Doing this will help us think creatively about what might happen to a catalyst during real-world reactions.”

It will not all be plain sailing, however, he admits. One of the major limitations of the current technique is that, like most DRL applications, it requires a lot of data input and training episodes. “Accurate simulations are extremely demanding computationally,” he explains, “and the simulations we performed in our work are fast but rather coarse approximations.” The researchers are trying to solve this problem by using machine learning models to make the simulations not only faster but also more accurate.
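The surrogate idea behind that last point can be sketched generically: fit a cheap model to a handful of expensive reference calculations, then query the cheap model in place of the expensive one. The example below is a deliberately simple stand-in – a polynomial fit to a toy analytic energy function, not the machine-learning potentials used in the actual work:

```python
import numpy as np

# Stand-in for an expensive quantum-chemistry calculation (here just a cheap
# analytic double-well; the real bottleneck would be DFT or similar).
def expensive_energy(x):
    return (x**2 - 1.0)**2

rng = np.random.default_rng(0)
x_train = rng.uniform(-1.5, 1.5, 20)   # a handful of costly reference points
y_train = expensive_energy(x_train)

# Cheap surrogate: a least-squares polynomial fit to the reference energies.
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=4))

# Evaluate the surrogate everywhere else at negligible cost.
x_test = np.linspace(-1.2, 1.2, 50)
err = float(np.max(np.abs(surrogate(x_test) - expensive_energy(x_test))))
print(f"max surrogate error on the test grid: {err:.1e}")
```

The trade-off Ulissi describes lives in this gap: the more faithful the surrogate, the more training data (and compute) it demands up front.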

Life beyond the Nobel: Russell Hulse’s path from binary pulsar discoverer to plasma physicist

In the summer of 1974, Russell Hulse was down in the weeds of collecting data for his PhD thesis when he noticed something odd. Together with his supervisor Joseph Taylor, Hulse was using the famed 305 m spherical reflector dish at the Arecibo Observatory in Puerto Rico to search for pulsars: compact and highly magnetized stars that broadcast bursts of radio waves across the galaxy as they rotate. Although the first pulsar had been spotted only six years earlier by another student-and-supervisor pair, Jocelyn Bell Burnell and Antony Hewish, these unusual stars were already a hot topic in astrophysics. Hulse hoped to make his mark by identifying more of them, but one of the 40 pulsars in his data was causing problems. Denoted PSR 1913+16 in his notebook, it defied all his efforts to calculate its period of rotation.

Arecibo Observatory

At the time, Hulse’s reaction was not “Eureka!” but instead, as he later recalled, “a rather annoyed ‘Nuts – what’s wrong now?’” Determined to get to the bottom of whatever technical glitch was causing the problem, Hulse focused his remaining observing time on this perplexing pulsar. By mid-September, he had his answer, and it was a doozy: PSR 1913+16 was one half of a binary pair of stars, and its hard-to-calculate period was fluctuating under the gravitational influence of its companion.

For the next several months, Hulse worked as a self-described “pulsar data acquisition system” while Taylor performed orbit analysis calculations to test predictions of how this binary pulsar system should behave. Their findings were a stunning confirmation of Einstein’s general theory of relativity and, not incidentally, the first evidence of gravitational radiation. With these gold-plated results under his belt, the announcement that the 1974 Nobel Prize for Physics would honour the first pulsar discovery must have seemed, to Hulse, like a foretaste of the accolades that awaited him in his astronomy career.

By 1975, however, Hulse had a dilemma. Although he obtained a postdoctoral appointment at the National Radio Astronomy Observatory (NRAO) in Charlottesville, Virginia, soon after finishing his PhD, it wasn’t a permanent job. “While I still enjoyed doing pulsar radio astronomy, from the moment I arrived at NRAO I was increasingly preoccupied with the lack of long-term career prospects in astronomy,” he later recalled. “While I had some confidence that I could find another position of some sort after NRAO, it was not at all clear to me when, where, and how I would be able to settle down with some reasonable expectation of stability in my career.” His concerns were sharpened by his personal circumstances. With his then-girlfriend (later wife) Jeanne Kuhlman doing graduate work in physics at the University of Pennsylvania and soon to embark on her own career, Hulse decided that “the potential for such repeated major dislocations in my personal life was more than I could quite tolerate”.

Photo of Russell Hulse

When his NRAO appointment ended in 1977, therefore, Hulse left astronomy to take up a post at the Princeton Plasma Physics Laboratory (PPPL), located conveniently close to Jeanne in Philadelphia. He was able to switch fields in part because he did his PhD in physics rather than astronomy – a choice that reflected both his broad scientific interests and his desire to hedge his career bets – and in part because pulsar-hunting was already, in the mid-1970s, a highly computerized task. (In a lecture I attended in the early 2000s, Hulse joked that he spent so much time programming at Arecibo that he took to adding up his chequebook in hexadecimal.) At PPPL, his first task was to create new computer codes to model the behaviour of impurities in high-temperature plasmas. By 1993, when he and Taylor won the Nobel Prize for their binary pulsar discovery, he had carved out a niche as a developer and maintainer of codes for modelling thermonuclear fusion.

At this point, it is instructive to reflect on the Nobel committee’s differing treatment of Hulse and Bell Burnell. While Hulse and Taylor shared the 1993 prize equally, the pulsar half of the 1974 prize went solely to Bell Burnell’s supervisor Hewish (another astronomer, Martin Ryle, received the other half for unrelated work). While sexism surely played a role in this snub, Bell Burnell has long attributed it to her lowly student status; in 1974, she says, the Nobel committee had not yet realized that PhD students could and did make significant intellectual contributions. In this light, it is worth noting that in his Nobel lecture, Hulse paid tribute to Taylor for treating him and other students as “colleagues rather than subordinates”. Hence, the divergent Nobel fates of Hulse and Bell Burnell may be down to differing supervisory styles, as well as discrimination and shifting attitudes towards PhD students.

It is hard not to see Hulse’s career path as an indictment of how early-career research is funded and organized

In his official 1993 Nobel biography, Hulse comes across as sanguine about his career. “My interest in science has never been so much a matter of pursuing a career per se, but rather an expression of my personal fascination with knowing ‘how the world works,’” he wrote. Like other scientists who received the Nobel at a relatively young age (he was 42), Hulse found that the prize brought invitations to serve on advisory boards in academia, government and industry. In 2004, he accepted a visiting professorship at the University of Texas at Dallas with a focus on science education, and he remains on the faculty there, though the scope of his activities has been limited since 2012, when he was diagnosed with Parkinson’s disease. His wife Jeanne, for her part, retired in 2020 from a long career in the pharmaceutical industry.

Despite these achievements, though, it is hard not to see Hulse’s career path as an indictment of how early-career research is funded and organized. Forty-four years after a lack of job security forced this future Nobel laureate out of radio astronomy, academic career prospects remain highly uncertain. Researchers who are disinclined to change jobs (and sometimes countries or continents) every few years as postdocs still find their path to permanent positions blocked. And the same “two-body problem” that confronted Hulse and Kuhlman nearly half a century ago continues to force difficult choices onto scientific couples today, with consequences that affect all genders but disproportionately damage the careers of women. The Nobel committee’s attitude towards PhD students may well have changed since the mid-1970s. Too many other things have not.

Physics World‘s Nobel prize coverage is supported by Oxford Instruments Nanoscience, a leading supplier of research tools for the development of quantum technologies, advanced materials and nanoscale devices. Visit nanoscience.oxinst.com to find out more.

Green jobs for physics graduates: policy and behaviour change

Eunice Lo, researcher, University of Bristol, UK

In order to respond effectively to climate change, it’s important to understand its fundamental causes and effects. Eunice Lo is particularly interested in how climate change will affect extreme weather events, with a focus on heatwaves and their impact on human health.

Lo first became interested in this area while studying physics and astronomy at Durham University, UK, where her final-year project looked at how to calibrate telescopes to correct for atmospheric effects on the radiation they detect. “Climate change is also about radiation transfer through components of the atmosphere,” she explains, “with outgoing longwave radiation being trapped by greenhouse gases.”

Eunice Lo

To pursue this further, Lo did a PhD in atmosphere, oceans and climate at the University of Reading, UK. She found that her physics background helped her to hit the ground running, as she already had some understanding of atmospheric physics and had honed her computer programming skills during her undergraduate degree. Lo also did some Master’s courses in meteorology alongside the first year of her PhD. “That helped me transition from a pure physics background to applying my theoretical knowledge to the environment,” she recalls.

Lo now uses the programming languages Python and R to create climate models and study how global warming will affect the frequency and intensity of heatwaves. Since heatwaves can cause illnesses and even deaths, she then translates those possible future scenarios into projections of human health outcomes.
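To give a flavour of what such an analysis involves, heatwaves in a daily temperature record are often defined as runs of several consecutive days above a threshold. The short Python sketch below is purely illustrative – the threshold and minimum duration are arbitrary choices, not Lo’s actual criteria:

```python
def heatwave_events(temps, threshold=28.0, min_days=3):
    """Return (start_day, length) for each run of at least `min_days`
    consecutive days with temperature above `threshold`."""
    events = []
    run_start = None
    for day, t in enumerate(temps):
        if t > threshold:
            if run_start is None:
                run_start = day           # a hot spell begins
        else:
            if run_start is not None and day - run_start >= min_days:
                events.append((run_start, day - run_start))
            run_start = None              # the spell is broken
    # handle a hot spell that runs to the end of the record
    if run_start is not None and len(temps) - run_start >= min_days:
        events.append((run_start, len(temps) - run_start))
    return events

# A toy week of daily maximum temperatures (degrees C)
temps = [24.0, 29.5, 30.1, 29.0, 25.0, 31.0, 32.0]
print(heatwave_events(temps))  # [(1, 3)]: one 3-day event starting on day 1
```

Applied to decades of modelled temperatures under different emissions scenarios, counts and lengths of such events are the raw material for the frequency and intensity projections Lo works with.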

Crucially, Lo’s research is often included in reports such as the UK Climate Change Risk Assessment Report, which is published every five years, and which government officials use to make decisions about national mitigation and adaptation. She is also a contributing author to one chapter of the latest report published by the United Nations Intergovernmental Panel on Climate Change, and hopes this will prompt new policies on reducing emissions.

Mark Crouch, carbon management team lead, Mott MacDonald

Building a sustainable future is not just about action at a government level – businesses also need to make changes. Mark Crouch, who works at Mott MacDonald, an international engineering consultancy headquartered in the UK, is optimistic that companies are waking up to their responsibilities.

“The conversations with clients have become a lot more mature than they were when I first started in this field around 2010,” says Crouch. “Businesses no longer see sustainability as something they just have to consider to make themselves look good. They understand climate change as a real business risk.”

Mark Crouch

Crouch went into environmental engineering after studying physics with astrophysics at the University of Leeds, UK. While working on flood modelling and designing flood alleviation schemes, it struck home that global warming was having major impacts. “With something like flood risk, you can’t just keep building bigger dams,” he explains. “That’s what motivated me to get into mitigation and addressing the root causes.”

Crouch did a Master’s in sustainable energy at Imperial College London, which he found prepared him well for consultancy, and he now heads up the 100-strong and growing global carbon management practice at Mott MacDonald. His team in the UK works closely with bodies such as the Environment Agency and National Grid to understand the carbon impacts of major infrastructure projects such as HS2 on a full-life-cycle basis, and advises clients on how to reduce them. They also look at how emerging technologies fit in.

The full-life-cycle analysis is important because infrastructure often has a big carbon footprint not just when it is up and running but also when it is being built. Crouch notes that if cement manufacturing were a country, it would be the third biggest emitter of greenhouse gases, behind China and the US. “There’s a huge need for people to tackle that through developing new materials and approaches,” he says. “That’s another area of opportunity for people with science backgrounds, including physics.”

For graduates interested in sustainability, Crouch’s advice is to take all the opportunities you can for doing industrial placements, and to be proactive about building your network through webinars and industry forums. He also recommends reading widely across different subjects, as climate change is a highly multidisciplinary challenge.

The climate emergency requires immediate and sustained action, and the sector needs graduates who are passionate about making a difference

Mark Crouch

“It’s a really booming market,” Crouch says. “The climate emergency requires immediate and sustained action, and the sector needs graduates who are passionate about making a difference.” Among the skills that physicists can contribute, he notes the importance of understanding how to work with numbers and uncertainties, as well as the ability to think big. “Topics like cosmology and astrophysics really teach you to think on a different timescale, and outside of the day-to-day.”

Rosemary Pickering, senior sustainable business analyst, Farfetch

In addition to seeking advice from external consultants, many companies are putting together sustainability strategies and employing people in-house to drive progress towards their goals. Rosemary Pickering has combined her environmental values with her interest in fashion by working as a senior sustainable business analyst at Farfetch, a luxury fashion marketplace, which sells everything from handbags to activewear by high-end designers.

Rosemary Pickering

While studying courses in environmental and atmospheric physics as part of her physics degree, Pickering decided that she wanted to do a sustainability-related Master’s, so she enrolled in an MSc course on environmental technology and policy at Imperial College London. “That gave me a completely different perspective on sustainability and the environment,” she says. “Within that, I did a project looking at sustainable fashion.”

In her current role at Farfetch, Pickering focuses on the firm’s sustainability strategy, which has three main “pillars”. One is to encourage customers to switch to purchasing more sustainable options by providing information on the environmental and social impacts of different products. Another is to work towards net-zero carbon emissions by 2030, by minimizing the distance products have to travel, choosing the greenest transport methods and shipping each product in the smallest box possible. Finally, Farfetch has launched several circular services, whereby customers can resell products they are no longer using, or have items fixed or updated, instead of buying new ones.

Pickering’s role covers all three of the pillars, and involves reporting on trade performance and looking at which more sustainable products, such as items made of organic cotton, are selling. She also monitors which circular services customers are using, and Farfetch’s overall progress towards its sustainability goals.

“One of the projects I’ve really enjoyed working on was our Conscious Customer Report, which we published externally,” she says. “A lot of people in the [fashion] industry talk about changing trends and patterns, and you can actually see it coming through in the data, which is really exciting.”

Having a physics background has helped equip Pickering with the skills she needs for her job, as she has to be confident with calculating the statistics she is reporting, and also needs to use coding skills in Python for her day-to-day tasks.

For graduates looking for green jobs, Pickering emphasizes that there are more opportunities than you might think. “Many small businesses have a marketing or operations team where 50% of the role is focused on sustainability, because that’s embedded in the company’s mission,” she explains. “There are a lot more jobs out there than just the ones that have ‘sustainability’ in their titles.”

Copyright © 2026 by IOP Publishing Ltd and individual contributors