Patient positioning chair paves the way for upright radiotherapy

Upright radiotherapy

Cancer patients typically lie in a supine position (on their back) during radiotherapy. But for some malignancies, including thoracic, pelvic and head-and-neck tumours, upright body positioning may improve treatment delivery and possibly patient outcome. Upright treatments could increase tumour exposure, reduce radiation dose to adjacent healthy tissue and make breath holds easier for some patients.

To perform upright radiotherapy safely, however, patient immobilization is critical. With this in mind, researchers at the Centre Léon Bérard in France evaluated a patient positioning system currently in commercial development by Leo Cancer Care. The team assessed the immobilization accuracy, set-up time and comfort of the system for 16 patients undergoing radiotherapy for pelvic cancer (prostate, bladder, rectal, endometrial and cervix/uterine tumours).

Findings of the pilot study, reported in Technical Innovations & Patient Support in Radiation Oncology, are encouraging. Initial patient set-up took 4 to 6 min when performed by two radiation therapy technologists working together, and subsequent set-ups took between 2 and 5 min. Inter-fraction repositioning was accurate to better than 1 mm on average, and intra-fraction motion over 20 min was within 3 mm for more than 90% of patients. Most patients reported that upright positioning was as good as, and in some cases better than, the supine position that they had to maintain during their standard radiation treatment.

The positioning system (known as “the chair”) is designed to place the patient in appropriate postures according to the cancer type being treated. For prostate and pelvic treatments, patients are perched on the chair, supported by the back of the thigh and a knee rest. Patients are seated vertically for head-and-neck treatments, seated leaning slightly backward for lung and liver radiotherapy, and slightly forward for breast radiotherapy.

The chair itself comprises a seat, a backrest with arm support, a shin rest and a heel stop, all of which adjust to different positions and angles. The chair can rotate at a speed of one rotation per minute, and can simultaneously move in the cranio-caudal direction (vertically in this set-up) by up to 70 cm, allowing the generation of helical movement.

The system incorporates an optical guidance and tracking system, comprising up to five high-resolution cameras. In this study, each patient had their own custom-moulded vacuum cushion and a belt was positioned on the upper part of their abdomen.

For the study, participants undergoing conventional radiotherapy had three additional appointments during their scheduled treatment course to test the upright positioning device. Patients were repositioned at the second and third appointments and the researchers verified the repositioning accuracy using the optical image reference system. They note that image registration was performed using the skin surface, with no skin tattoos or landmarks needed. After being accurately positioned, patients underwent a simulated treatment session with several helical movements lasting 20 min.

Principal investigator Vincent Grégoire and his colleague Sophie Boisbouvier calculated the inter-fraction position shifts after manual registration between reference images and images taken during the repositioning. They report that the chair provided accurate repositioning, with average inter-fraction shifts of –0.5, –0.4 and –0.9 mm in the x-, y- and z-directions, respectively.

The researchers also monitored intra-fraction motion during the chair movements, performing positioning checks every 4 min. After 20 min, the mean intra-fraction shifts were 0.0, 0.2 and 0.0 mm in the x-, y- and z-directions, respectively. Only 10% of patients had inter-fraction shifts exceeding 3 mm and intra-fraction motion exceeding 2 mm. The majority of patients reported that they were more comfortable in the upright than in the supine position. All patients said that they could breathe comfortably when upright.

The study revealed some modifications required for the chair, including redesigning the belt to improve patient comfort and improvements regarding head positioning. The researchers intend to investigate a new backrest that has been optimally designed to position the head and neck. They also plan to conduct similar assessments of patient immobilization for head-and-neck, lung, breast and upper-abdomen tumours.

Grégoire says that the team plans to compare upright with supine positioning in terms of internal positioning and motion for the tumour types they have investigated. They will also perform in silico dose distribution comparisons between patients in supine and upright positioning, for both photons and protons, and hope to estimate potential gains in terms of normal tissue complication probability (NTCP) and tumour cure probability (TCP).

The Centre Léon Bérard is a research partner of Leo Cancer Care, which is developing a range of upright radiotherapy products. In addition to the positioning system, these include a vertical diagnostic CT scanner and a 6 MV horizontal-beam linear accelerator to deliver rotational image-guided intensity-modulated radiotherapy. After French government regulatory authorities authorize the importing of the CT scanner, which does not yet have a CE Mark, the team plans to include vertical CT imaging in future research.

Sterile neutrinos fade as STEREO finds no evidence of oscillations

Data from an experiment in France called STEREO suggest that sterile neutrinos cannot explain why the neutrino flux observed from uranium-235 fission is lower than predicted by theory. Instead, the STEREO team believes that the discrepancy arises from theoretical difficulties in modelling the decay process. The result marks the culmination of an 11-year project to test a hypothesis first advanced by a team of scientists that included two members of the STEREO collaboration.

The 1998 discovery of neutrino oscillations by the Super-Kamiokande detector in Japan was one of the most consequential events in 20th century particle physics. This is because it showed that neutrinos must have a tiny but non-zero mass. As a result, neutrinos propagate through space as oscillating quantum superpositions of electron neutrinos and their heavier cousins – muon neutrinos and tau neutrinos. This explained puzzling experiments done in the 1960s in which physicists observing the Sun had detected significantly fewer neutrinos than predicted. What was happening was that many of these solar neutrinos had oscillated into neutrino flavours that the experiments were not designed to detect.

In 2011, David Lhuillier at CEA and colleagues in France published a paper suggesting that measurements of the neutrino flux from nuclear reactors collectively showed an anomalously low flux of neutrinos from uranium-235 relative to the predictions of theoretical models. Moreover, they showed that this anomaly could be explained by the neutrinos oscillating into “sterile” neutrinos that would not interact via the weak force and would therefore go undetected. Sterile neutrinos are hypothetical particles invoked by some theoretical extensions of the Standard Model of particle physics, so the idea that they might have been glimpsed in neutrino oscillations was tantalizing. In 2016, Lhuillier and colleagues installed the STEREO detector at the Institut Laue-Langevin (ILL) research reactor in Grenoble – with STEREO standing for Search for Sterile Reactor Neutrino Oscillations.

Shorter oscillations

“Before, everyone was seeing a mean deficit [in neutrinos],” explains Lhuillier. “The idea was ‘OK, maybe if we go closer, we will be able to see this first or second oscillation’.” The hypothetical oscillation wavelength was not known, he says, but “for sure, the oscillation was already smeared out after 100 m.” The STEREO detector comprises six separate gadolinium-doped 1.8 m³ hydrocarbon-filled scintillators located less than 20 m from the ILL’s almost fully enriched, 40-cm-diameter uranium-235 fission reactor, which effectively behaves as a point source of uranium-235 neutrinos.

When an antineutrino strikes a hydrogen nucleus in one of the liquid scintillators, it triggers inverse beta decay, converting the proton into a neutron and emitting a positron. The deceleration and annihilation of the positron generate prompt gamma rays. Subsequently, the neutron is likely to be captured by a gadolinium nucleus, exciting it to a metastable state. When this subsequently decays, it produces a second, larger gamma-ray pulse: “That’s the signal we are looking for,” says Lhuillier; “a small pulse from the positron and then, a few microseconds later, a big pulse from the gadolinium.”
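
In software, this prompt-plus-delayed signature lends itself to a delayed-coincidence search. The sketch below, in Python, illustrates that selection logic; the pulse energies, thresholds and window lengths are invented for the example and are not STEREO’s actual analysis cuts.

def find_candidates(pulses, window_us=(2.0, 70.0),
                    prompt_max_mev=8.0, delayed_min_mev=5.0):
    """pulses: time-sorted list of (time_us, energy_mev) scintillator pulses.
    Returns (prompt, delayed) pairs consistent with inverse beta decay:
    a small positron pulse followed microseconds later by the larger
    gamma cascade from neutron capture on gadolinium."""
    candidates = []
    for i, (t1, e1) in enumerate(pulses):
        if e1 > prompt_max_mev:
            continue  # too energetic for a prompt positron pulse
        for t2, e2 in pulses[i + 1:]:
            dt = t2 - t1
            if dt > window_us[1]:
                break  # later pulses fall outside the coincidence window
            if dt >= window_us[0] and e2 >= delayed_min_mev:
                candidates.append(((t1, e1), (t2, e2)))
    return candidates

# A high-energy background pulse, then a prompt/delayed pair 15 us apart.
events = [(0.0, 9.5), (3.0, 2.1), (18.0, 7.5)]
print(find_candidates(events))  # keeps only the physics-like pair

A real analysis adds vetoes against accidental coincidences and cosmic-ray backgrounds, but the timing logic is the same.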

Using their six sequential detectors, the researchers confirmed that there was indeed a deficit in the neutrino flux relative to that theoretically predicted. However, this deficit appeared constant in all six detectors, so the researchers concluded it was not explained by the neutrinos oscillating in and out of some undetectable state. “If it’s not a sterile neutrino, the problem has to be on the prediction side,” says Lhuillier. “It’s very difficult to predict what neutrinos a reactor can emit because you have something like 800 ways to break a uranium nucleus apart, so you need a huge amount of nuclear data.”

The STEREO team describes its findings in a paper in Nature.

“People were looking forward to this, so in that sense it’s a useful paper,” says theoretical physicist André de Gouvêa of Northwestern University in Illinois. “It does confirm the trend of results we have been getting that suggest that this reactor neutrino anomaly is more closely associated with our mis-modelling of how neutrinos are produced in nuclear decays and less with some exciting new physics phenomenon.” He adds, however: “The title of the paper [‘STEREO neutrino spectrum of 235U fission rejects sterile neutrino hypothesis’] is a little bit optimistic…Sterile neutrinos in principle are not ruled out as an explanation for this anomaly. Very heavy sterile neutrinos could only be ruled out by cosmological constraints. There’s some fascinating results [on that] coming out of the KATRIN experiment.”

Theoretical physicist Patrick Huber of Virginia Tech in the US adds: “There has been this reactor antineutrino anomaly since 2011, and I think this closes that chapter.” Huber performed one of the neutrino-flux calculations showing the tension with experimental observation. “We now know that the reason for the discrepancy was flawed input data, and I think this is important going forward when we consider possible applications of neutrino physics such as nuclear security,” he says. “Their result is a capstone on a community-wide effort to understand why the calculations of 2011 and the data did not agree, and now they do. That’s the scientific method at work.”

Seeking cosmic particles using a super-pressure balloon, the physics of babies

This episode of the Physics World Weekly podcast features an interview with Angela Olinto, who is principal investigator of the EUSO-SPB2 mission. EUSO stands for Extreme Universe Space Observatory and SPB refers to the super-pressure balloon that will soon hoist the experiment to an altitude of 33 km. There it will spend about 100 days detecting neutrinos and ultra-high-energy cosmic rays.

Olinto, who is based at the University of Chicago, talks about the challenges of operating a particle-detection system floating high above Earth and what the EUSO-SPB2 collaboration hopes to observe.

Also in this episode, Physics World’s Michael Banks talks about his new book, The Secret Science of Baby, which charts the first 1000 days of human development starting at conception. Banks talks about the physics of three key phenomena related to the creation and development of a child – the swimming of sperm; the operation of the placenta; and the development of speech. These are also described in a feature article by Banks that appears in Physics World.

Defect suppression enables continuous-wave deep-UV lasing at room temperature

Researchers have succeeded in making the first room-temperature continuous-wave deep-UV laser diode, using wide-bandgap semiconductor materials. The device could find applications in novel sterilization systems and for more precise laser processing.

Invented in the 1960s, laser diodes today operate at wavelengths ranging from the infrared to the blue-violet, with applications in optical communications and Blu-ray discs. Until now, however, they have not worked in the deep-UV part of the electromagnetic spectrum.

A team led by Nobel laureate Hiroshi Amano at Nagoya University’s Institute of Materials and Systems for Sustainability (IMaSS) began developing deep-UV laser diodes back in 2017, thanks to a collaboration with Asahi Kasei, the company that made the first 2-inch aluminium nitride substrates. These materials are ideal for growing aluminium gallium nitride (AlGaN) films for UV-light-emitting devices.

The first devices that the team made required an input power of 5.2 W – too high for continuous-wave lasing, because the diode heated up so quickly that lasing stopped.

In their new work, Amano and colleagues overcame this problem. By improving the design of the device structure, they suppressed the heat generated during operation. In particular, they eliminated the crystal defects that occur at the laser stripe in the AlGaN and degrade the paths through which current propagates. They achieved this by tailoring the side walls of the laser stripe so that current could flow efficiently to the active region of the laser diode. In this way, they reduced the operating power required for 274 nm laser diodes to just 1.1 W at room temperature.

“Compared to conventional deep-ultraviolet lasers, our laser is more compact and can achieve higher efficiency,” says Amano. “The device could be employed in practical applications in healthcare, including virus detection. More broadly, it could be used to detect particulates, in gas analysis and high-definition laser processing.”

“Its application to sterilization technology could be ground-breaking,” adds team member Zhang Ziyi. “Unlike the current LED sterilization methods, which are time-inefficient, lasers can disinfect large areas in a short time and over long distances.”

The Nagoya team now plans to improve the operating characteristics of their laser diode for practical use. “We also hope to realize a shorter-wavelength laser diode,” Amano tells Physics World.

The study is detailed in Applied Physics Letters.

Einstein as you’ve never seen him before

In November 1922, just over a century ago, Albert Einstein received the Nobel Prize for Physics. That much is clear. What you might not know is that the prize was officially awarded for the year 1921. The delay was due, in part, to a controversy in the physics community over Einstein’s general theory of relativity and, to a lesser extent, antisemitism among certain physicists – including the Nobel laureate Philipp Lenard, who would later support the Nazis.

Indeed, relativity went unmentioned in the citation from the Royal Swedish Academy of Sciences. It referred instead to Einstein’s “services to theoretical physics and in particular…his discovery of the law of the photoelectric effect”, which was his key contribution to quantum theory. Not that Einstein was concerned: when he gave his Nobel-prize lecture in 1923, he ignored the citation and talked about relativity, not quantum theory.

I mention the Nobel-prize episode because it is characteristic of Einstein’s unique work and personality. As he remarked in 1930: “To punish me for my contempt for authority, fate made me an authority myself.” I was therefore delighted to see a handwritten version of this immortal aphorism, with his signature, appearing in solitary splendour on the final page of Einstein: the Man and His Mind by Gary Berger and Michael DiRuggiero.

Published to mark the prize’s centenary, the book is a lavishly produced and highly appealing collection of photographs, most of which are signed by Einstein and others. Arranged chronologically, they are complemented by images of his letters, scientific manuscripts and publications. These are all rounded off with quotations from Einstein and a lively commentary and captions by Berger and DiRuggiero.

The earliest known signed photograph comes from 1896, taken to commemorate Einstein’s graduation from high school in Switzerland. Even as a teenager, his frizzy hair shows hints of its legendary adult unruliness. The book ends with what is believed to be the last photograph he signed, at home in Princeton not long before his death, aged 76, in 1955.

A physician, rather than a physicist, Berger is a retired surgeon who has written 170 medical articles and a dozen books about reproductive medicine. He first became fascinated with Einstein about 20 years ago while working in Chapel Hill, North Carolina, where he started collecting portraits of the great man, some signed by him.

With the encouragement of DiRuggiero, owner of the Manhattan Rare Book Company, Berger then began to acquire documents relating to Einstein’s scientific life. Eventually he invited DiRuggiero to help curate this material. Located in Chapel Hill as the Berger Collection, it is now probably the largest archive of Einstein imagery in private hands.

“Not being a physicist, I could appreciate his pictures, if not the complex mathematics in his writings,” writes Berger in his preface. “The photos gave me the feeling of a personal connection to Albert Einstein – the real, living man – almost as if I knew him.” It is a view shared by DiRuggiero, who describes in the epilogue how he would often find himself staring at a photo of Einstein and wondering what makes his image so powerful.

“Whether he is trying to solve a difficult scientific problem, full of despair contemplating the fate of the world, or lightheartedly playing with children, Einstein seems to be communicating his emotions directly to us,” DiRuggiero writes. “Somehow, in looking at these photographs we feel we know him, that if he walked into the room right now we could talk to him and understand each other. This is extraordinary, considering we are contemplating someone who explored realms of thought inaccessible to nearly all of us.”

Albert Einstein with his children

The book contains a foreword by the physicist Hanoch Gutfreund, academic head of the Albert Einstein Archives at the Hebrew University of Jerusalem, who feels the photos from the second half of Einstein’s life “evoke an image of a friendly non-conformist”. He thanks Berger and DiRuggiero for generously donating all royalties from the book to the extraordinary Jerusalem collection. It deserves to succeed.

It’s not the first book of this kind, though, with a similar, large-format title having been published by Ze’ev Rosenkranz and Barbara Wolff of the Einstein Archives in 2007. Entitled Albert Einstein: the Persistent Illusion of Transience, that book was both wonderfully illustrated and authoritative. Einstein: the Man and His Mind, however, is stronger in its illustrations than in its text.

Astonishingly, it mentions neither Cambridge nor Oxford universities. Cambridge was the scientific home not only of Isaac Newton (an inspiration for the young Einstein) but also of Arthur Eddington, who led the team that provided the astronomical proof of general relativity in 1919 and who was friendly with Einstein. Oxford, meanwhile, was the scientific home of the physicist Frederick Lindemann, who hosted Einstein in the city in 1931, 1932 and 1933, on the final occasion as a refugee from Nazism.

In fact, the book contains no reference to Britain’s vital role in rescuing Einstein from likely assassination by Nazi agents in Belgium in 1933. Nor is there any mention of Abraham Flexner, who founded the Institute for Advanced Study in Princeton, where Einstein settled in 1933. Other omissions include Einstein’s first wife, Mileva Marić (mother of his two children); philosopher Bertrand Russell (who organized the crucial Russell–Einstein Manifesto against nuclear weapons in 1955); and Mahatma Gandhi (whom Einstein the pacifist called “the greatest political genius of our time”).

Another person not mentioned in the book is Wolfgang Amadeus Mozart, whom Einstein (a violinist himself) regarded as his favourite composer. “Mozart’s music is so pure and beautiful,” Einstein once said, “that I see it as a reflection of the inner beauty of the universe.” It is a shame that Mozart is missing because Einstein also sought that beauty – not through music, but through mathematics, reasoning and pure thought.

  • 2022 Damiani 209pp £60.00hb

THETIS phantom detects image distortions to support MR-based treatment planning

The THETIS 3D MR Distortion Phantom helps medical physicists to quantify, as well as track over time, potential distortions that can arise in MR images used for radiotherapy treatment planning. Developed by laser and radiotherapy QA specialist LAP, the THETIS phantom enables the multidisciplinary care team to deploy MRI systems safely in a radiotherapy context and, in so doing, maximize clinical effectiveness through the precision targeting of diseased tissue.

In the treatment suite, MRI delivers clinical upsides on multiple fronts, not least its superior soft-tissue contrast (versus CT) and its ability to visualize a matrix of functional information – including diffusion processes, blood volume and oxygenation, and localized metabolic activity within tumour sites. Equally compelling is the fact that MRI interrogates the patient using non-ionizing radio waves – a major plus when treating children and in cases where serial imaging scans are needed to track tumour response through multiple radiation fractions.

“Because of those upsides, the adoption of MRI and MR-Linac systems in radiation therapy has grown massively in recent years,” explains Torsten Hartmann, director of product management for healthcare at LAP. “We developed THETIS to enable the radiation oncology team to generate MR images of the highest geometrical accuracy – detecting possible distortions of the MR images reliably and quickly.”

Such potential distortions have their origins in tiny perturbations to the uniformity of the MRI scanner’s magnetic field and the field gradients used to image the patient. By extension, when the MRI scanner’s imaging sequences are not optimized, problems can occur downstream and introduce errors in the patient’s treatment plan. “THETIS makes it easy to determine where distortions are affecting the image and whether the scanner’s magnetic field has changed over a period of time,” Hartmann adds.

Enabling independent QA

In terms of specifics, the THETIS phantom exploits a square grid of embedded silicon markers, each of which provides a strong, localized MR signal (and with 258 signal sources per measurement plate). The phantom – which is aligned to the isocentre or the image centre of the MRI scanner using LAP’s MRI laser systems and its integrated levelling aids – can detect residual image distortions from gradient nonlinearities or main-magnet inhomogeneities to ensure they are within acceptable limits. In this way, the silicon markers help the medical physicist to visualize the loss of geometric fidelity with distance from the magnet isocentre, preventing a potentially inaccurate view of organs located in the outer areas of the MR image.
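
The underlying analysis is conceptually simple: compare where the markers appear in the MR image with where the grid says they should be. The sketch below, in Python, illustrates the idea; the grid pitch, tolerance and distortion model are assumptions for the example, not LAP specifications.

import numpy as np

def distortion_map(detected_mm, nominal_mm, tolerance_mm=1.0):
    """detected_mm, nominal_mm: (N, 3) arrays of marker positions.
    Returns displacement vectors, their magnitudes and a mask of
    markers exceeding the acceptance tolerance."""
    shifts = detected_mm - nominal_mm
    magnitudes = np.linalg.norm(shifts, axis=1)
    return shifts, magnitudes, magnitudes > tolerance_mm

# Nominal 3 x 3 marker grid in one plate (z = 0), 10 mm pitch, centred
# on the isocentre; mimic a distortion that grows with distance from it.
xs, ys = np.meshgrid(np.arange(-10, 11, 10), np.arange(-10, 11, 10))
nominal = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
detected = nominal * 1.10  # 10% radial stretch away from the isocentre

shifts, mags, failed = distortion_map(detected, nominal)
print(f"max distortion {mags.max():.2f} mm; markers out of tolerance: {failed.sum()}")

In this toy case only the outermost markers fail the check, reproducing the loss of geometric fidelity with distance from the isocentre described above.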

Torsten Hartmann

“Of course, all MRI system vendors provide methods and algorithms for distortion correction,” notes Hartmann. “While that’s as it should be, a large-field phantom like THETIS underpins those all-important independent QA checks to ensure safe deployment of MRI systems in the radiotherapy setting.” Those QA checks begin with the commissioning of a new MRI scanner and characterization of the machine’s baseline imaging performance versus manufacturer specifications. Equally important, THETIS offers streamlined workflows when it comes to systematic QA of MR image distortions over the lifetime of the MRI scanner – ensuring, for example, the geometric fidelity of MR images after major hardware and software upgrades to the imaging system.

“Clinical teams need a granular view of how such upgrades affect MR image quality,” adds Hartmann. “At the same time, THETIS supports the regular QA monitoring of MR image quality – for example, as part of the monthly or quarterly checks of image distortion and how it changes over time.”

Collaborative innovation

Operationally, the clinical and commercial release of the THETIS phantom is the outcome of an R&D collaboration between the LAP product development team and the MRI technology division of Siemens Healthineers in Germany. The latter is increasingly supplying dedicated MRI systems to the radiotherapy clinic and, as such, wants to offer a reliable and affordable image-distortion phantom tailor-made for its MRI equipment portfolio.

“We moved quickly from prototyping and evaluation into product development and construction – just six months in all before we entered beta-testing,” explains Hartmann. Geographical proximity certainly helped to streamline the product innovation cycle, with LAP’s Nuremberg manufacturing facility just 20 km or so from the Siemens Healthineers MRI technology hub in Erlangen. “Key to successful delivery was being able to jointly test, iterate and optimize the THETIS phantom with our colleagues in the MRI R&D team at Siemens Healthineers,” Hartmann adds.

It is worth noting, though, that the commercial phantom is vendor-agnostic: it is optimized for Siemens Healthineers’ MRI systems but also compatible with a range of open-bore scanners from other manufacturers. In the radiotherapy clinic, meanwhile, the phantom is equally versatile. According to Hartmann, THETIS is compatible with emerging MR-only treatment planning workflows as well as with the established standard of care, in which a fused CT-MRI dataset provides the MR information needed to outline the tumour volume and organs at risk, while the CT is used for dose calculation.

Further reading

Carri K Glide-Hurst et al. 2021 AAPM Task Group 284 report: magnetic resonance imaging simulation in radiotherapy – considerations for clinical implementation, optimization, and quality assurance. Med. Phys. 48 e636

THETIS in brief

The THETIS 3D MR Distortion Phantom is designed for QA of MR images in radiation therapy and diagnostic settings. Key features include:

  • Modularity: expansion stages allow optimal adaptation to different system and workflow requirements.
  • MR-safety: the phantom contains no ferromagnetic components and, as such, is ideally suited to the MRI environment.
  • Easy handling: integrated levelling aids streamline alignment, while intuitive handling of the phantom simplifies 3D examination of the entire MRI space.
  • Figures of merit: 10 plates (maximum expansion stage); three extension modules; 3 T MRI-tested; 258 signal sources per measurement plate.

Telescope with large-aperture metalens images the Moon

Telescope made with a metalens

An important step towards the practical use of optical metasurfaces has been taken by researchers in the US. The team used a common semiconductor manufacturing process to produce a large-aperture flat metalens, and demonstrated its optical performance by using it as the objective lens in a simple telescope aimed at the Moon. The telescope produced clear images of the lunar surface with the best resolving power yet reported for this type of metalens.

Telescopes have been used to peer out into the universe for more than 400 years. In the early 1600s, Galileo Galilei used a telescope to observe the moons of Jupiter and last year the James Webb Space Telescope began taking spectacular images of the cosmos.

The telescopes used today by professional astronomers tend to be large and bulky, which often puts limits on how and where they can be used. The size of these instruments is a result of their large apertures and often-complicated multi-element optical systems that are necessary to eliminate aberrations and to provide the desired high performance.

Engineered nanostructures

Optical metasurfaces offer a potential way to make telescopes and other optical systems smaller and simpler. These are engineered nanostructures that can be thought of as a series of artificial optical antennas (see figure). These antennas can manipulate light, changing, for example, its amplitude, phase, and polarization.

These metasurfaces can be engineered to focus light, thereby creating metalenses that can offer significant advantages over conventional optics. For example, the flat surfaces of metalenses are free of spherical aberrations and metalenses are ultrathin and low in weight when compared to conventional optics.

However, the production of metalenses is still in its infancy. Current fabrication methods are based on scanning systems such as electron-beam (e-beam) lithography and focused ion beam (FIB) techniques. These are slow, expensive, and restrict the size of metalenses to just a few millimetres. This makes large-volume production almost impossible and means that metalenses are currently expensive and too small for the large-aperture applications such as telescopes.

A meta-telescope

Now, researchers at Pennsylvania State University and the NASA-Goddard Space Flight Center have come up with a much better way of making metalenses. Their process can be scaled up for large-scale production and can be used to create metalenses with large aperture sizes that are suitable for telescope applications.

The team used deep-ultraviolet (DUV) lithography, a technique commonly used in the semiconductor industry. Their process involved patterning the top of a four-inch silica wafer. Their 80-mm-diameter metalens was divided into 16 parts, which were combined by exposing the same patterns on different quadrants of the wafer. Pattern stitching and wafer rotation eliminated the need for an expensive single large mask exposing the entire surface.

Intensity profile

The performance of the metalens was characterized by measuring the intensity profile of focused laser beams over a broad wavelength range spanning 1200–1600 nm. The tests showed that the metalens can tightly focus light close to the diffraction limit over the entire range, despite being designed to operate at 1450 nm. However, diffractive dispersion did vary the focal length throughout the wavelength range – a detrimental effect called chromatic aberration.
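
For a purely diffractive lens, the focal length scales roughly inversely with wavelength, f(λ) ≈ f0·λ0/λ, which is the origin of the chromatic focal shift described above. The short Python sketch below applies this textbook scaling; the design focal length f0 is a hypothetical value chosen for illustration, since the article does not quote one.

lambda0_nm = 1450.0   # design wavelength quoted in the article
f0_mm = 120.0         # hypothetical design focal length (assumption)

# Focal length of an ideal diffractive lens across the tested band.
for lam_nm in (1200.0, 1450.0, 1600.0):
    f_mm = f0_mm * lambda0_nm / lam_nm
    print(f"{lam_nm:.0f} nm -> focal length {f_mm:.1f} mm ({f_mm - f0_mm:+.1f} mm shift)")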

The resolving power of the metalens was tested by using it as an objective lens inside a telescope. The team used the telescope to successfully image various features of the Moon’s surface with a minimum resolving feature size of approximately 80 km. This is the best reported resolving power for this type of metalens so far.
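
A rough small-angle conversion links the reported feature size to an angular resolution, with the Rayleigh criterion giving the ideal limit of an 80 mm aperture for comparison. The mean Earth–Moon distance is assumed below, and the article does not say which practical factors set the achieved figure.

moon_distance_km = 384_400   # mean Earth-Moon distance (assumed)
feature_km = 80              # smallest resolved lunar feature, per the article
aperture_m = 0.08
wavelength_m = 1450e-9

achieved_rad = feature_km / moon_distance_km       # small-angle approximation
rayleigh_rad = 1.22 * wavelength_m / aperture_m    # ideal diffraction limit

print(f"achieved: {achieved_rad * 1e6:.0f} microradians, "
      f"diffraction limit: {rayleigh_rad * 1e6:.0f} microradians")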

Next-generation systems

Lead researcher Xingjie Ni at Pennsylvania State University believes that metasurfaces can be a game changer in optics, because their unprecedented capability for light manipulation makes them powerful candidates for next-generation optical systems. This, he says, is why his team is dedicated to advancing the capabilities of scalable, fabrication-friendly metasurfaces.

“We plan to improve our design techniques to achieve fabrication-imperfection-tolerant nanostructures. This will allow us to use high-volume manufacturing technology such as photolithography to make large scale metalenses working in the visible range and incorporate more complex nanoantenna designs, for example, freeform shaped nanoantennas, to compensate for chromatic aberration,” he tells Physics World.

Din Ping Tsai of the City University of Hong Kong was not involved in the research. He thinks that this work expands the working scenarios of metalenses and will inspire research on metalenses with large apertures. He says that DUV lithography could be used to achieve high-throughput manufacturing of low-cost metalenses with reasonable resolution. This would bring the components into commercialization and make them part of our daily life in the coming years.

Tsai believes that the chromatic aberration of the Penn State metalens limits its use to monochromatic applications. He also points out that the design of large-area broadband achromatic metalenses remains a big challenge and is in strong demand. In addition, he believes that a large mask is the preferred way to make metalenses, in order to avoid stitching errors and to simplify the fabrication process.

The research is described in Nano Letters.

MRI guidance reduces side effects of prostate cancer radiotherapy

Stereotactic body radiotherapy (SBRT) is an established treatment for prostate cancer. It involves delivering large daily doses of precisely targeted radiation in five or fewer fractions, traditionally using either planar X-ray or cone-beam CT images to guide the radiation delivery.

The prostate is a highly mobile target and it’s essential to account for its motion during irradiation to maximize treatment effectiveness. This is typically achieved by creating a planning target volume (PTV) that includes a margin around the prostate to ensure adequate target dosing. However, the high-dose regions of the PTV often overlap portions of the bladder, rectum and other nearby structures, which can cause side effects such as urinary, bowel and sexual dysfunction.

The recent introduction of MRI-guided linacs could help minimize the risk of such toxicities. MRI-linacs offer high soft-tissue contrast and the ability to track intra-fraction prostate motion directly (rather than relying on fiducial markers) and control the beam in real time during treatment. These advantages should enable the use of significantly smaller margins around the prostate. To date, however, the theoretical advantages of MRI-guided radiotherapy for prostate SBRT have not been demonstrated in a randomized clinical trial.

The MIRAGE (MRI-guided stereotactic body radiotherapy for prostate cancer) trial aims to address this shortfall and determine whether MRI-guided radiotherapy offers an evident benefit for patients. The phase III randomized clinical trial, led by Amar Kishan and Michael Steinberg at the University of California, Los Angeles (UCLA), enrolled men receiving SBRT for localized prostate cancer. Between May 2020 and October 2021, the trial randomized 156 patients to receive SBRT with either CT guidance (77 patients) or MRI guidance using the MRIdian system (79 patients).

Patients were treated with 40 Gy in five fractions, using planning margins of 4 mm in the CT arm and 2 mm in the MRI arm. The researchers note that this 2 mm margin is narrower than used in any previous large study. They hoped to show that this aggressive margin reduction could reduce toxic effects following SBRT.

“MRI guidance offers several advantages over standard CT guidance, most notably the ability to dramatically reduce planning margins, providing more focused treatment with less injury to nearby normal tissues and organs,” says Kishan in a press statement. “MRI technology is more costly than CT, both in terms of upfront equipment expenses and longer treatment times, which is one reason our study set out to determine if MRI-guided technology offers tangible benefits for patients.”

Improved outcomes

The results of the trial, described in JAMA Oncology, revealed that MRI guidance led to fewer toxicities and better quality-of-life, as judged by both patients and the doctors treating them.

In 154 patients available for follow-up, the incidence of acute grade 2 or greater genitourinary (GU) toxic effects was significantly lower following MRI- than CT-guided SBRT: 24.4% in the MRI group versus 43.4% in the CT group. Patients in the MRI group also had fewer acute grade 2 or greater gastrointestinal toxic effects: 0.0% versus 10.5%, respectively. In a multivariate analysis accounting for all candidate variables, the MRI-guided arm remained associated with a 60% reduction in odds of grade 2 or greater GU toxicity.
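
The “60% reduction in odds” can be sanity-checked from the raw rates. The quick Python calculation below uses the unadjusted percentages reported above; the trial’s multivariate estimate differs slightly because it adjusts for other variables.

p_mri, p_ct = 0.244, 0.434         # acute grade >=2 GU toxicity rates
odds_mri = p_mri / (1 - p_mri)     # odds = p / (1 - p)
odds_ct = p_ct / (1 - p_ct)
odds_ratio = odds_mri / odds_ct
print(f"odds ratio = {odds_ratio:.2f}, "
      f"i.e. roughly a {(1 - odds_ratio) * 100:.0f}% reduction in odds")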

After 100 patients reached 90 or more days post-treatment, the researchers conducted an interim analysis. At this time, the incidence of acute grade 2 or greater GU toxic effects was significantly reduced in men receiving MRI-guided SBRT compared with those receiving CT-guided SBRT (11 of 49 versus 24 of 51). They re-estimated the required sample size as 154 patients and, as 156 patients had already been treated, closed the trial to further accrual.

“This is the first large scale SBRT trial to use a dose of 40 Gy to the PTV, which we felt was an appropriate dose given the anticipated risk level of the cohort we would be treating. Because dose is associated closely with toxicity, we knew beforehand that the estimates of toxicity we used to power the trial might be underestimates,” Kishan tells Physics World. “Thus, we stipulated an interim analysis should occur after 100 patients were eligible for analysis in order for us to formally re-evaluate the power considerations for the trial.”

One unique aspect of the study was its inclusion of patient-reported outcomes. Significantly fewer patients receiving MRI-guided SBRT experienced large increases in urinary symptoms. Similarly, far more patients experienced a clinically notable decrease in bowel-related quality-of-life with CT guidance.

The researchers note that longer term follow-up is necessary to determine whether these benefits persist, whether differences in late urinary or bowel toxic effects occur and to evaluate differences in sexual outcomes. They plan to continue to monitor toxicity outcomes and perform an analysis of 2-year patient-reported outcomes.

“MRI-guided radiation has apparent theoretical benefits in this treatment scenario, and it was important to conduct a rigorous comparison,” says Steinberg. “Given the significance of the outcomes realized, we’ve evolved our prostate cancer treatment approach at UCLA to preferentially utilize MRI-guided SBRT.”

The surprising physics of babies: how we’re improving our understanding of human reproduction

Diverse group of ten babies playing

Becoming a parent or carer for the first time is a joyous, if fairly loud, occasion. When a baby enters the world covered in bodily fluids, they inflate their lungs to take a breath and let out an ear-piercing cry. It’s the first sign for bleary-eyed expectant parents that their life will never quite be the same – they will soon get to grips with constant feeds, dirty nappies and, of course, a lack of sleep. Part of the challenge for new parents is dealing with the many changes that lie ahead, not only in their own lives but in that of the newborn, as babies rapidly develop over the coming days, months and years.

“The first thousand days” is a common term used by paediatricians to describe the period from conception to a child’s second birthday – a time during which so many critical developments occur, right from the very moment of conception, as the embryo, and then fetus, undergoes rapid daily transformations. Some nine months later, at birth, the infant’s reliance on the placenta to sustain itself in utero comes to an end. The baby must get to grips with breathing on its own and feeding at the breast or from the bottle, while also adapting to its new environment. Months later, development takes on other dimensions as the infant rolls, crawls, stands on unsteady legs and then, ultimately, walks. If that wasn’t enough, there is also the not-so-small matter of communication: learning a language.

It is easy to take any of these individual milestones for granted – and many parents do, through no fault of their own. After all, infants are seemingly built to take on these challenges. But considering how crucial these two-and-a-half years are, many aspects of conception, pregnancy and babyhood remain woefully understudied. Pregnancy, for example, has commonly been seen as something to be endured rather than investigated. Research on the properties and workings of the placenta, uterus and cervix lags decades behind that of other organs such as the heart, lungs and brain. One reason for this is the ethical complexity of studying pregnant women and newborn babies; another is that research into healthcare for women has long been marginalized, and often overlooks key differences between men and women. Studies must be carefully designed, and various ethical procedures and guidelines need to be adhered to. That will remain the case; what is different today is that these topics are finally seen as worthy of investigation in the first place – a move that has also been helped by advances in imaging and theoretical techniques.

While some may think that only biology and neuroscience can shine a light on conception, pregnancy and babyhood, physics too has the tools to provide a fresh perspective on many of these issues. Physics plays a key role in everything from how sperm navigate the complex fluids of the female reproductive system to reach the egg (see “Conception – life begins at low Reynolds number”), to the forces that support the development of the embryo, to how the placenta controls the diffusion of a wide range of solutes to and from the fetus (see “Pregnancy and the placenta – the tree of life”). Physical processes are involved in the way contractions coordinate and travel across the uterus to expel a baby; how a newborn can effortlessly extract milk from the breast; what acoustic properties of babies’ cries make them so hard to ignore; and how toddlers learn grammar so effectively (see “Babyhood – it’s good to talk”).

Today, research into these matters from a physical-science perspective is not only throwing up surprises about what the human body is capable of, but is also highlighting potential treatments – from new methods to monitor fetal movements to innovative ways to help premature babies to breathe. Such endeavours are also deepening our appreciation of the processes that life has put in place to propagate itself. And there remains much more to discover.

Conception – life begins at low Reynolds number

“[Sperm] is an animalcule which mostly…swims with its head or front part in my direction. The tail, which, when swimming, it lashes like a snakelike movement, like eels in water.” So wrote the Dutch businessman and scientist Antonie van Leeuwenhoek to the Royal Society in the 1670s concerning his observations of sperm. Using his custom-built microscopes, which were more powerful than anything made before, van Leeuwenhoek was the first to peer into the microscopic realm. His devices, which were about the size of a hand, let him image objects with micrometre resolution, clearly resolving many different types of “animalcule” that reside on or in the body, including sperm.

Human egg and sperm

Despite van Leeuwenhoek’s acute observations, it took hundreds of years to get any firm idea of how sperm propel themselves through the complex fluids that exist within the female reproductive tract. The first clues came in the late 1880s from the Irish physicist Osborne Reynolds, who worked at Owens College in England (now the University of Manchester). Reynolds conducted a series of fluid-dynamics experiments, and from them obtained a relationship between the inertial forces that a body moving through a liquid creates and the viscous forces of the medium – the Reynolds number. Roughly speaking, a large object in a liquid such as water has a large Reynolds number, meaning the inertial forces it creates are dominant. But for a microscopic body, such as a sperm cell, it is the viscous forces of the liquid that have the most influence.
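
To get a feel for the scales involved, the Reynolds number can be estimated as Re = ρUL/μ, where ρ and μ are the fluid’s density and viscosity and U and L are the swimmer’s speed and size. The Python sketch below uses typical textbook figures, not measurements from the work described here.

def reynolds(rho, speed, length, viscosity):
    """Re = rho * U * L / mu: ratio of inertial to viscous forces."""
    return rho * speed * length / viscosity

rho_water = 1000.0   # kg/m^3
mu_water = 1e-3      # Pa s

print(f"human swimmer: Re ~ {reynolds(rho_water, 1.0, 2.0, mu_water):.0e}")
print(f"sperm cell:    Re ~ {reynolds(rho_water, 50e-6, 50e-6, mu_water):.0e}")

At a Reynolds number of order 10⁻³, inertia is essentially irrelevant: if the tail stops beating, the sperm stops almost instantly.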

The physics explaining this strange world where viscous forces dominate was worked out by several physicists in the 1950s, including Geoffrey Taylor from the University of Cambridge. Conducting experiments using glycerine, a high-viscosity medium, he showed that at a low Reynolds number, the physics of a swimming microorganism could be explained by “oblique motion”. If you take a thin cylinder, such as a straw, and let it fall upright in a high-viscosity fluid like syrup, it will do so vertically – as you may expect. If you put the straw on its side, it will still drop vertically, but half as fast as the upright case due to increased drag. However, when you put the straw diagonally and let it fall, it does not move vertically downwards but falls in a diagonal direction – what is known as oblique motion.

This occurs because the drag along the length of the body is lower than in the perpendicular direction – the straw moves along its length more easily than it does sideways, so it slips horizontally as well as falling vertically. In the early 1950s Taylor, together with Geoff Hancock of the University of Manchester, carried out detailed calculations of how a sperm travels. They showed that as the sperm whips its tail, it creates oblique movements at different sections, producing viscous propulsion.
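
The falling-straw picture can be put into a few lines of resistive-force theory. The Python sketch below uses the standard slender-body result that the perpendicular drag coefficient is roughly twice the parallel one; the force and drag coefficient are arbitrary illustrative units.

import numpy as np

def settling_velocity(theta_deg, force=1.0, zeta_par=1.0):
    """Velocity (vx, vy) of a slender rod sinking under a vertical force,
    its axis tilted theta_deg from vertical, with zeta_perp = 2 * zeta_par."""
    zeta_perp = 2.0 * zeta_par
    th = np.radians(theta_deg)
    axis = np.array([np.sin(th), -np.cos(th)])    # unit vector along the rod
    normal = np.array([np.cos(th), np.sin(th)])   # unit vector across the rod
    gravity = np.array([0.0, -force])
    return (gravity @ axis / zeta_par) * axis + (gravity @ normal / zeta_perp) * normal

for angle in (0, 45, 90):
    vx, vy = settling_velocity(angle)
    print(f"tilt {angle:2d} deg: sideways {vx:+.2f}, downward {vy:+.2f}")

The upright rod falls at full speed, the sideways rod at half speed, and the tilted rod drifts sideways – the oblique motion that a beating tail exploits for propulsion.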

Today, researchers are building ever more complex models of how sperm swim. These models are not just for theoretical insight, but also have applications in assisted reproduction techniques. Mathematician David Smith from the University of Birmingham, UK – who has worked on biological fluid dynamics for over two decades – and colleagues have developed a sperm-analysis technique. Dubbed Flagella Analysis and Sperm Tracking (FAST), it can image and analyse the tail of a sperm in exquisite detail. From the images, it uses mathematical models to calculate how much force the body applies to the fluid. The package also calculates the sperm’s swimming efficiency – how far it moves using a certain amount of energy.

The team began clinical trials with FAST in 2018, and if the technique is successful, it could help couples assess what type of assisted reproduction technique may work for them. The simulations may show, for example, that “intrauterine insemination” – in which sperm are washed and then injected into the uterus, bypassing the cervical canal – could be just as successful over several cycles as carrying out more expensive and invasive IVF procedures. Alternatively, their technique could be used to help to analyse the impact of male contraception. “This project is about harnessing 21st-century technologies to address male fertility problems,” says Smith.

Pregnancy and the placenta – the tree of life

Consisting of a network of thick purple vessels and resembling a flat cake, the placenta is the life-giving alien within. An organ unique to pregnancy, a healthy placenta at full term is around 22 centimetres in diameter, 2.5 centimetres thick, and with a mass of about 0.6 kg. It is a direct link between the mother and fetus, providing the fetus with oxygen and nutrients, and allowing it to send back waste products, such as carbon dioxide and urea, a major component of urine.

From just a collection of cells in early pregnancy, the placenta begins to form a basic structure once it intertwines with the lining of the uterus. This eventually leads to a network of fetal vessels that branch out to form villous trees – a bit like Japanese bonsais – that are bathed in maternal blood in the “intervillous space”. The placenta could be described as fifty connected bonsai trees upside down at the top of a fish tank that is full of blood, thanks to the pumping of several maternal arteries at the bottom.

The placenta

Estimated to contain around 550 km of fetal blood vessels – similar in length to the Grand Canyon – the placenta has a total surface area for gas exchange of around 13 m². Part of the difficulty in studying the placenta comes from these varying scales. The other issue is knowing how this huge network of fetal vessels, each about 200 μm across, ultimately affects the performance of a centimetre-scale organ.

The exchange of gases between maternal and fetal blood is via diffusion through the villous tree tissue – with the fetal vessels closest to the villous tissue thought to be doing the exchange. By combining experimental data with mathematical modelling of the intricate geometry of the fetal blood vessels, for the past decade mathematician Igor Chernyavsky from the University of Manchester and colleagues have been studying the transport of gases and other nutrients in the placenta.

The team found that despite the incredibly complex topology of the fetal vessels, there is a key dimensionless number that can explain the transport of different nutrients in the placenta. Determining the chemical state of a mixture is a complex problem – the only “reference” state being equilibrium, when all the reactions balance each other and end up in a stable composition.

In the 1920s, physical chemist Gerhard Damköhler attempted to work out a relationship for the rate of chemical reactions or diffusion in the presence of a flow. In this non-equilibrium scenario, he came up with a single number – the Damköhler number – that can be used to compare the time for the “chemistry to happen” with the flow rate in the same region.

The Damköhler number is useful when it comes to the placenta because the organ is diffusing solutes – such as oxygen, glucose and urea – in the presence of both a fetal and a maternal blood flow. Here, the Damköhler number is defined as the ratio of the rate of diffusion to the rate of blood flow. For a Damköhler number larger than one, diffusion is faster than the flow and transport is “flow limited”; for a number less than one, the flow outpaces diffusion and transport is “diffusion limited”. Chernyavsky and colleagues found that, despite the various complex arrangements of fetal capillaries in the terminal villus, the movement of different gases in and out of the fetal capillaries could be described by the Damköhler number – which he calls the “unifying principle” in the placenta.
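
As a rough illustration of how such a classification works, the Python sketch below assigns invented Damköhler numbers to the solutes discussed here and sorts them into regimes. The values are made up to reproduce the qualitative ordering reported by the team, not numbers from the study.

solutes = {
    "carbon monoxide": 0.2,   # hypothetical Da values, for illustration only
    "glucose": 0.5,
    "oxygen": 1.0,
    "carbon dioxide": 3.0,
    "urea": 5.0,
}

for name, da in solutes.items():
    if da > 1:
        regime = "flow limited (diffusion outpaces the flow)"
    elif da < 1:
        regime = "diffusion limited (flow outpaces diffusion)"
    else:
        regime = "close to both regimes"
    print(f"{name:16s} Da = {da:.1f} -> {regime}")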

The researchers found, for example, that carbon monoxide and glucose in the placenta are diffusion limited, while carbon dioxide and urea are more flow limited. Carbon monoxide is thought to be efficiently exchanged by the placenta, which is why maternal smoking and air pollution can be dangerous for the baby. Intriguingly, oxygen is close to being both flow and diffusion limited, suggesting a design that is perhaps optimized for the gas – which makes sense given how critical it is to life.

It is unknown why there is such a wide range of Damköhler numbers, but one possible explanation is that the placenta must be robust, given its many different roles, which include both nourishing and protecting the baby from harm. Given the difficulty of experimentally studying the placenta both in utero and when it is delivered in the third stage of labour, there is still a lot we don’t know about this ethereal organ.

Babyhood – it’s good to talk

Toddler deciding what to say

It is difficult to express how hard, in principle, it is for infants to pick up their language – but they seem remarkably good at doing so. When a child is two to three years old, its language becomes sophisticated incredibly quickly, with toddlers being able to construct complex – and grammatically correct – sentences. This development is so rapid that it is difficult to study, and is far from being fully understood. Indeed, how infants learn language is hotly contested, with many competing theories among linguists.

Almost all human languages can be described by what is known as a context-free grammar – a set of (recursive) rules that generates a tree-like structure. The three main ingredients of a context-free grammar are “non-terminal” symbols, “terminal” symbols and “production rules”. In a language, non-terminal symbols are elements such as noun phrases or verb phrases (i.e. parts of a sentence that can be broken down into smaller parts). Terminal symbols are what remain once all the rules have been applied – the individual words themselves. Finally, there are the hidden production rules, which determine how the symbols are expanded and where the terminal symbols end up, so that the result is a sentence that makes sense.

A diagram showing how language is learnt

A sentence in a context-free language can be visualized as a tree, with the branches being the “non-terminal” objects that the infant does not hear when learning language – such as verb phrases, and so on. The leaves of the tree, meanwhile, are the terminal symbols, or the actual words that are heard. For example, in the sentence “The bear walked into the cave”, “the bear” and “walked into the cave” can be split off to form a noun phrase (NP) and a verb phrase (VP), respectively. Those two parts can then be split further until the final result is individual words, including determiners (Det) and prepositional phrases (PP) (see figure). When infants listen to people talking in fully formed sentences (that are, hopefully, grammatically correct), they are only exposed to the leaves of the tree-like network – the words and their locations in a sentence. But somehow, they also have to extract the rules of the language from the mixture of words they are hearing.
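
To make the formalism concrete, here is a minimal sketch, in Python, of a context-free grammar for the example sentence. The rules follow the NP/VP/Det/PP breakdown described above; the toy grammar is invented for illustration and is not drawn from any particular linguistic model.

import random

grammar = {
    "S":   [["NP", "VP"]],   # production rules: symbol -> possible expansions
    "NP":  [["Det", "N"]],
    "VP":  [["V", "PP"]],
    "PP":  [["P", "NP"]],
    "Det": [["the"]],
    "N":   [["bear"], ["cave"]],
    "V":   [["walked"]],
    "P":   [["into"]],
}

def generate(symbol="S"):
    """Expand non-terminals recursively until only terminal words remain."""
    if symbol not in grammar:
        return [symbol]  # a terminal: an actual word the infant hears
    production = random.choice(grammar[symbol])
    return [word for part in production for word in generate(part)]

print(" ".join(generate()))  # e.g. "the bear walked into the cave"

Only the printed words – the leaves – are observable; the NP/VP structure that produced them stays hidden, which is exactly the infant’s inference problem.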

In 2019, Eric De Giuli from Ryerson University in Canada modelled this tree-like structure using the tools of statistical physics (Phys. Rev. Lett. 122 128301). In his model, as infants listen, they continually adjust the weights of the branches of possibilities according to the language they hear. Eventually, branches that produce nonsensical sentences acquire smaller weights – because they are never heard – compared with information-rich branches, which are given larger weights. By continuously performing this ritual of listening, the infant “prunes” the tree over time to discard random word arrangements while retaining those with meaningful structure. This pruning reduces both the number of branches near the tree’s surface and those deeper down.

The fascinating aspect of this idea from a physical point of view is that when the weights are equal, language is random – a situation comparable to the effect of heat on particles in thermodynamics. But once weights are added to the branches and adjusted to produce specific grammatical sentences, the “temperature” begins to decrease. De Giuli ran his model for 25,000 possible distinct “languages” (which included computer languages) and found universal behaviour when it came to “decreasing the temperature”. At a certain point, there is a sharp drop in what is analogous to thermodynamic entropy, or disorder, as the language goes from a body of random arrangements to one with high information content. Think of a bubbling pot of jumbled words that is taken off the stove to cool, until words and phrases begin to “crystallize” into a specific structure or grammar.
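
The cooling analogy can be glossed with a toy calculation: weight a handful of candidate branches with a Boltzmann-like factor and watch the Shannon entropy of the choice collapse as the “temperature” falls. The branch scores below are invented, and the Python snippet is a schematic gloss on the analogy rather than De Giuli’s actual model.

import math

scores = [3.0, 1.0, 0.5, 0.1]   # hypothetical evidence for four branches

def branch_entropy(temperature):
    """Shannon entropy of branch choices weighted by exp(score / T)."""
    weights = [math.exp(s / temperature) for s in scores]
    total = sum(weights)
    probs = [w / total for w in weights]
    return -sum(p * math.log(p) for p in probs)

for T in (10.0, 1.0, 0.1):
    print(f"T = {T:4.1f}: branch entropy = {branch_entropy(T):.2f} nats")

At high temperature the choices are nearly uniform (maximal disorder); as the temperature drops, the entropy collapses towards zero and one structured choice dominates.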

This abrupt switch is also akin to a phase transition in statistical mechanics – at a certain point, the language switches from a random jumble of words to a highly structured communication system that is rich in information, containing sentences with complex structures and meanings. De Giuli thinks that this model (which he stresses is only a model and not a definitive conclusion for how infants learn language) could explain why at a certain stage of development a child learns incredibly quickly to construct grammatical sentences. There comes a point when they have listened to enough for it all to make sense to them. Language, it seems, is just child’s play.

Alex Müller: Nobel-prize-winning condensed-matter physicist dies aged 95

The Swiss condensed-matter physicist and Nobel laureate Alex Müller died on 9 January at the age of 95. Müller shot to fame in 1986 when he and his colleague Georg Bednorz discovered a material with a superconducting transition temperature far above those of so-called conventional metal superconductors. The work earned the pair the Nobel Prize for Physics the following year.

Born on 20 April 1927 in Basel, Müller received a diploma in physics and mathematics from the Swiss Federal Institute of Technology (ETH) in Zurich in 1952. After graduating, he stayed on at ETH Zurich to study paramagnetic resonance of solid-state materials, obtaining his PhD in 1958.

Following a stint as head of the magnetic-resonance group at the Battelle Memorial Institute in Geneva, he took up a position at the University of Zurich in 1962. A year later he also joined IBM Research Zurich, heading the physics department from 1971. In 1985 he left IBM but remained at the University of Zurich before retiring in 1994.

During his life, Müller published several groundbreaking papers on magnetic resonance and phase transitions in ferroelectrics. At IBM and the University of Zurich he also began working on oxide materials, especially perovskites, which would later prove useful in his superconductivity work in the 1980s.

Fighting for acceptance

It has been known for more than a century that when certain metals are cooled to extremely low temperatures, they become superconductors, conducting electrical current without resistance. But because materials had to be cooled to just a few degrees above absolute zero for this phenomenon to occur, research into the field began to stagnate.

That all changed, however, in 1986, when Müller and Bednorz discovered that a material composed of copper oxide with lanthanum and barium became superconducting at around 35 K. This was about 50% higher than the previous record of 23 K, which had been achieved more than a decade earlier in niobium-germanium (Nb3Ge).

Given that oxide materials are ceramics and are known to conduct poorly at higher temperatures, their finding did not convince everybody, with some researchers believing the materials were not exhibiting superconductivity.

“It was disappointing seeing their reaction,” Bednorz later told Physics World in 1988. “We had the impression that they didn’t believe what we had found out and after that experience we were convinced that we would have to fight for one, two or even more years to get our results accepted in the scientific community.”

The results, however, were soon confirmed by other researchers, with Müller and Bednorz being credited with having discovered an entirely new class of superconductors. The buzz surrounding these so-called “cuprate superconductors” reached fever pitch at the American Physical Society’s March Meeting in New York in 1987. The meeting was later dubbed “the Woodstock of physics” due to the fervent nature of the talks and discussions that went on late into the night.

In 1987 Bednorz and Müller shared the Nobel Prize for Physics “for their important break-through in the discovery of superconductivity in ceramic materials”. The finding sparked decades of research into similar cuprate materials where superconducting transition temperatures over 100 K were later discovered, which Müller followed “with great interest and commitment”.
