CT-based biomarkers quantify radiation-induced lung damage

Radiation therapy of lung cancer can cause toxic side-effects to healthy lung tissue, known as radiation-induced lung damage (RILD). RILD can significantly impact patient quality of life post-treatment, but because the prognosis for lung cancer has historically been poor, long-term RILD reporting has been piecemeal, with only a few subjective and unreliable scoring systems available. Recently, however, improvements in lung cancer survival have triggered interest in more rigorous RILD assessment.

To establish a standardized methodology for RILD scoring, medical physicists and engineers from University College London (UCL) teamed up with clinical oncologists and thoracic radiologists from Guy’s and St Thomas’ NHS Foundation Trust and Royal Brompton Hospital.

The multidisciplinary team analysed CT scans of cancer patients before and after treatment, and developed software to semi-automate quantification of 12 identified biomarkers of RILD. This is the first time that RILD has been quantified using a broad spectrum of radiological findings (Int. J. Radiat. Oncol. Biol. Phys. 10.1016).

“As we start to get long-term survival, looking at the lasting damage to the patients becomes more important,” says first author Catarina Veiga, research associate at UCL. “We wanted to develop methods to quantify this in a very objective manner.”

Spot the difference

In the first stage of the study, clinicians carefully examined CT scans of 27 non-small cell lung cancer patients who had taken part in a phase I/II clinical trial of isotoxic chemoradiation. By aligning the baseline images with those taken 12 months post-treatment, they identified common changes.

“All patients at 12 months had some type of lung damage, but the degree of change could be different,” explains Veiga. “Some just have a bit of lung volume loss, while others have extensive parenchymal damage, pleural effusion and anatomical distortions.”

Prior to this study, the main recognised marker of RILD was parenchymal change – scarring or inflammation that is easily seen on CT images as “bright” regions within the lung. Here, the clinicians identified three main categories of RILD: parenchymal, pleural and anatomical.

In the pleura – the membrane-lined space between the lungs and the chest wall – inflammatory changes were observed. The clinicians also noted that as treatment scars contracted, they distorted the lung’s anatomy, thereby reducing lung volume. From these observations, the team settled on 12 measurable biomarkers of RILD.

Automating for objectivity

At this point, the engineers stepped in to quantify the biomarkers, building a modular, semi-automated image-analysis pipeline in MATLAB. The pipeline was validated by comparing its results with the original visual observations.
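
The paper defines the 12 biomarkers and the MATLAB pipeline in full; purely as a rough illustration, the Python sketch below shows how one of the simplest biomarkers mentioned above – lung-volume change between baseline and 12-month scans – could be computed from binary lung segmentations. The function names and synthetic masks are hypothetical, not taken from the UCL code.

```python
import numpy as np

def lung_volume_ml(mask, voxel_spacing_mm):
    """Volume of a binary lung mask in millilitres.

    mask: 3D array of 0s and 1s (1 = lung voxel), e.g. from a CT segmentation.
    voxel_spacing_mm: (dz, dy, dx) voxel dimensions in millimetres.
    """
    voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))
    return mask.astype(bool).sum() * voxel_volume_mm3 / 1000.0  # mm^3 -> ml

def relative_volume_change(baseline_mask, followup_mask, spacing):
    """Fractional lung-volume change between baseline and 12-month scans."""
    v0 = lung_volume_ml(baseline_mask, spacing)
    v1 = lung_volume_ml(followup_mask, spacing)
    return (v1 - v0) / v0

# Synthetic stand-in masks; real masks would come from the CT segmentation step
rng = np.random.default_rng(0)
baseline = rng.random((64, 128, 128)) > 0.50
followup = rng.random((64, 128, 128)) > 0.55   # mimics lung-volume loss
change = relative_volume_change(baseline, followup, spacing=(3.0, 1.0, 1.0))
print(f"Relative lung-volume change: {change:+.1%}")
```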

Of the 12 biomarkers, 10 showed a significant change, and statistical analysis highlighted that the biomarkers were independent of one another. This made the authors confident that they had a system that can accurately report RILD, and they plan to make it fully automated and freely available to the community in the future.

“By automating all of this, we can speedily process the results from trials and clinical practice, and have evidence of how the different treatments we are using may or may not impact the long-term outcome,” says Veiga.

Technical challenges

The scientists note that there are some technical limitations to their methodology. Imaging protocols are of key importance, with inconsistency easily introduced between scanning time points. For example, variations in a patient’s breathing during imaging could influence biomarker measurement.

“For this particular clinical trial, the majority of patients had consistent imaging, but in routine clinical practice it is likely that this will become more of a challenge,” says Veiga.

Another technical consideration that the team is trying to address is the use of manual segmentation methods. Although most stages of the pipeline are automated, images were sometimes manually segmented to calculate the biomarkers. This is a laborious process that could introduce subjectivity to the results, and so Veiga and colleagues are currently running a pilot study that fully automates segmentation.

Improving life quality for survivors

“Once we are able to fully automate the system then we want to start looking at how the damage evolves over time,” notes Veiga, who is planning to examine the three-, six- and 24-month images that they have from the same clinical trial. “We are also interested in looking at the relationship between the identified damage, radiation dose delivered and clinical outcomes.”

Ultimately, Veiga hopes that the multidisciplinary team has created a tool that can improve clinical practice and be used in future trials to find the critical radiation dosage that clears tumour cells with minimum toxicity.

Organic ferroelectrics finally stick in the memory

Inorganic ferroelectrics have promised to change the face of semiconductor electronics for almost a century, but high processing costs have so far limited development. Now, researchers at Southeast University in Nanjing, China, have paved the way for progress by fabricating the first metal-free perovskite ferroelectrics. They present a set of materials that can achieve the performance of inorganic ferroelectrics but with the versatility, low cost and low toxicity inherent in organics.

To induce the directional switching of polarization characteristic of ferroelectricity, a material must contain a spontaneous dipole that can respond to an electric field. In other words, the centres of positive and negative charge within the crystal must not coincide. For metal-free perovskites, this should theoretically happen when a highly symmetric non-ferroelectric state is ‘frozen’ into a state with polar symmetry.

The MDABCO molecule (bottom-left) and the perovskite structure with the central “A” site highlighted. Credit: Yu-Meng You and Xiong Ren-Gen

From database to device

With this in mind, Ren-Gen Xiong and Yu-Meng You instructed their students to scour the hundreds of thousands of entries in the Cambridge Structural Database for molecules of suitable size and symmetry. Such candidates could then be incorporated into the traditionally metallic “A” site of the perovskite structure, yielding an all-organic perovskite ferroelectric.

The result of their efforts is the discovery of 23 metal-free perovskites, including MDABCO-NH4I3 (MDABCO is N-methyl-N’-diazabicyclo[2.2.2]octonium). This particular crystal displayed a spontaneous polarization of 22 microcoulombs per square centimetre, close to that of the state-of-the-art perovskite ferroelectric BaTiO3 (BTO). In addition, the crystals form readily at room temperature, avoiding the excessive heat (>1000 °C) required to make inorganic ferroelectrics. This should lower fabrication costs and open the door to more delicate applications such as flexible electronics, soft robotics and biomedical devices.

The MDABCO molecule is crucial to the large spontaneous polarization that the researchers observed. At high temperatures, excessive thermal energy leaves the MDABCO molecule in a state of free rotation within the crystal. Here, the average centres of positive and negative charge at the molecule site are the same and ferroelectricity is forbidden. However, when cooled below the phase transition temperature of 448 K, the MDABCO molecule becomes locked in place revealing a significant dipole with eight possible polarization directions.

Beyond binary

Ferroelectric random access memory (Fe-RAM) works on the principle that individual cells are set to states “0” and “1”, represented by different polarization directions of the active material. As ferroelectric crystals tend to have two polarization states, this gives the familiar binary system. The eight possible polarization directions in MDABCO-NH4I3, then, will pique the interest of those looking to make next-generation memory devices.

“In principle, eight polarization directions could be used to make an octonary device with eight different logic states,” explains Yu-Meng You. “This is a potential strategy for increasing the density of future RAM devices.” While You expresses concern over the increased architectural complexity of such a device, the potential for cramming eight logic states – three bits – into a single cell could add to the commercial prospects of this set of materials.
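
As a back-of-the-envelope illustration only (not a description of any proposed device architecture), eight distinguishable polarization states correspond to three bits per memory cell:

```python
from math import log2

# Eight distinguishable polarization states correspond to three bits per cell,
# compared with one bit for a conventional binary ferroelectric cell.
N_STATES = 8
print(f"{N_STATES} states -> {log2(N_STATES):.0f} bits per cell")

def encode(bits):
    """Map a 3-bit tuple, e.g. (1, 0, 1), to a hypothetical state index 0-7."""
    return bits[0] * 4 + bits[1] * 2 + bits[2]

def decode(state):
    """Map a state index 0-7 back to its 3-bit tuple."""
    return ((state >> 2) & 1, (state >> 1) & 1, state & 1)

# Round-trip check over all eight states
assert all(encode(decode(i)) == i for i in range(8))
```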

Prospects for perovskites

But the opportunities for advancement don’t stop at memory applications. “We have demonstrated a new system of perovskites with compositional flexibility, adjustable functionalization and low toxicity,” says You. “We expect the metal-free perovskite system will attract great attention in the near future.”

Full details are reported in Science.

 

Reality of ‘pristine’ cloud forest revealed

The cloud forest of Ecuador today harbours more biodiversity than almost anywhere else in the world. But it has a long record of human exploitation, as a new study has detailed.

In the mid-19th century, Western visitors described the cloud forest as a region that “has remained unpeopled by the human race” harbouring “a dense forest, impenetrable save by trails”. The study shows, however, that this was more than 30 years after human alteration of the forest resumed; the explorers weren’t seeing unspoilt forest at all.

These results could feed into the modern conservation of this cloud forest and other at-risk habitats. Historical records of ecosystems before modern damage, often used as targets for conservation efforts, may instead show habitats already altered by centuries of human impact.

The study also revealed that although indigenous people in the past carried out more deforestation than occurs today, the forest was able to recover in only around 130 years.

Nicholas Loughlin of the Open University, UK, and colleagues from Ecuador, the Netherlands and Spain analysed pollen records preserved in lake sediment for the past 700 years to discover the plant species history of the cloud forest in unprecedented detail. The researchers distinguished four distinct plant communities over time, indicating changing human land use.

Loughlin and the team took cores from the bottom of Lake Huila, more than 2.5 km above sea level. They examined over 2 metres of sediment, providing a radiocarbon-dated record back to around the year 1300. Pollen arrived at the site of the lake from the local area and was preserved with the sediment, giving an excellent window into the species making up past communities.

The first phase of plant life recorded in the sediment contains maize, cultivated by the local Quijos people, alongside other plants that favour open spaces, indicating significant deforestation by humans.

The year 1588, according to the radiocarbon dating, saw an abrupt change in the pollen record, with maize disappearing and huge increases in grasses and forest plants. Agriculture had ceased and the forest had begun to encroach back. The lake sediment from this time also contains large amounts of charcoal; this coincides with the height of a rebellion by indigenous peoples against the Spanish conquistadors, who had arrived around 1560.

Following the rebellion, the population declined catastrophically. The cloud forest continued to re-establish through this period and into the next. After 1718, grass pollen is almost absent from the core and the pollen record is dominated by the plants you’d expect to find in a cloud forest. This period is the one most like the forest before humans first arrived in the region thousands of years ago, demonstrating the ecosystem’s ability to recover to a near-natural state once human impacts end. After 1819, the flora in the sediment changes again, with less forest pollen, more grasses and the appearance of fungi that live on dung; this marks the beginning of cattle farming and the resumption of deforestation.

Loughlin and colleagues published their findings in Nature Ecology & Evolution.

A quantum leap for industry

In the July edition of Physics World Stories, Andrew Glester looks at the latest developments in technologies based on quantum mechanics. While quantum computing often steals the headlines, there is a whole world of other quantum-based devices in the pipeline for a range of applications.

Glester speaks first with Raphael Clifford and Ashley Montanaro at the University of Bristol about quantum computing. They are interested in the prospects of achieving “quantum supremacy” – the point at which quantum computers can outperform classical computers at specific tasks.

Next, Glester hands the reins over to Physics World’s Margaret Harris, who recently attended the 2018 Photonics West conference in San Francisco. At that event, Harris caught up with Anke Lohmann, the director of ESP Central Ltd, which supports the transfer of technology from academic settings to the marketplace. Lohmann gives her opinion on the quantum innovations likely to have the most significant impacts in the coming years; among them is quantum key distribution for secure communication.

Finally, Glester heads to the University of Birmingham, the site of one of the UK Quantum Technology Hubs. He is given a tour of the lab by Kai Bongs, who explains how the goal is to transform scientific concepts into practical applications that are economically viable. The focus at the Birmingham hub is on developing sensors and metrology techniques. Targeted applications include gravity-mapping beneath the Earth’s surface and highly precise optical clocks.

Co-ordinated action required to boost ‘open-science’ initiatives

Making research papers, data and methodologies freely accessible for anyone to read and use has gained significant ground in recent years, but several challenges to widespread implementation remain. That’s the conclusion of a new report into “open science” by the National Academies of Sciences, Engineering, and Medicine (NASEM), which calls on universities and publishers to adopt new processes to improve how science can be freely accessed.

Open science aims to make research papers and data, as well as methodologies such as code or algorithms, freely available. NASEM’s report, Open Science by Design: Realizing a Vision for 21st Century Research, identifies several recent initiatives that have advanced open science – such as citizen-science projects – and highlights the increase in research funded by organizations that require the outputs to be open for everyone to read and use.

While the 190-page report notes that the use of open-access publishing and open data is the “norm” in some areas of physics, notably high-energy physics and astronomy, other areas lag behind. The report points out several issues stopping open science becoming more widespread, such as researchers refusing to share their data, as well as the high number of journals that are only available via subscription.

A critical point

To overcome such challenges, the report calls on universities to develop training programmes that focus on open science for their researchers. It also recommends that professional societies change their journal publication strategies from a subscription-based model to approaches that support open science.

“We are at a critical point where new information technology tools and services hold the potential to revolutionize scientific practice,” says Alexa McCray of Harvard Medical School, who chaired the 10-strong committee that produced the report. “Automated search and analysis of open publications and data can make the research process more efficient and more effective, working best within an open science ecosystem that spans the institutional, national, and disciplinary boundaries.”

The report has already gained some support, notably from the Texas Republican Lamar Smith, who heads the US House of Representatives science, space, and technology committee. He sees the report as supporting a proposed rule that, if implemented, would require the Environmental Protection Agency to base its decisions only on information that is publicly available to scientists and the general public. “Environmental regulations, which are ultimately funded by taxpayers, should be based on open and replicable studies,” Smith noted in a statement.

Critics of the move, however, argue that such a rule would prevent the use of confidential or proprietary information – particularly in medical research – in scientific decision-making.

Build it and they will have fun

Educational physics project

Bold, bright and easy to decipher, Build It! 25 Creative STEM Projects for Budding Engineers by award-winning educator, engineer and author Caroline Alliston is the perfect book for any burgeoning engineers, as well as their teachers and parents. The glossy book features 25 STEM projects for children, varying from constructing a marble maze to building a clock. The book’s main aims – to teach children to think scientifically and creatively, while also showing them the applications of science in the real world – are well achieved, thanks to the clear directions and explanations provided throughout. Alliston provides a list of tools and materials that most of the projects require, as well as a detailed section on how to put together a circuit board for some of the projects. While most of the materials should be pretty straightforward to source, the claim that they are “easy-to-find objects from around the home” is an overstatement – I don’t know about you, but I don’t have a spare toggle switch or 13 V motor lying around. Despite this, Alliston does provide a handy list of websites from where you can purchase the necessary electrical parts.

Putting aside this small grievance, Build It! is a great project guide. The projects cover the spectrum of physical forces and concepts including light, air, water and electricity, as you build models that fly, zip around the floor and light up. Each project has a specified difficulty level from one (the easiest) to three (the hardest), and includes a “How it works” box that explains the basic principles behind the build.

Another good feature of the book is that the projects vary a lot in terms of complexity, time required to make them and the skills necessary – not to mention the final product. For example, “the glider” (a more solid version of a paper plane) would be a quick project as it requires only printing out a template and pasting together sections from the polystyrene discs that come with supermarket pizza. The “motorized buggy” on the other hand, if executed perfectly, will give you a driverless vehicle that you can programme to move, turn and even park. With the detailed cut-outs and circuitry necessary, you could easily spend a few days working on this. For the same reasons, the book has projects that could be done with very young children as well as teenagers; just pick the right project. All in all, Build It! is an excellent companion for parents and teachers looking for fun and engrossing projects to keep young hands and minds busy.

  • QED Publishing, Quarto Kids £10.99 hb 120pp

Jovian giant

Jupiter

Ask anyone what their favourite planet in our solar system is, and it’s usually a 50:50 split between Saturn and Jupiter (all jokes about Uranus notwithstanding). But there seems to be something particularly captivating about the biggest planet in our system, with its swirling surface, Great Red Spot and battalion of satellites (67 to be precise). Having fascinated humans for millennia, it’s no surprise, then, that new books on the fifth planet from the Sun keep popping up, despite the many tomes already available on the subject.

Jupiter by William Sheehan and Thomas Hockey is the latest book in the Kosmos series by publisher Reaktion Books. Sheehan is a psychiatrist by training, but has long been an amateur astronomer and historian, and has written many books on the subject. Hockey is professor of astronomy at the University of Northern Iowa. Although A5 in size, this book is a glossy coffee-table title, packed with more than 100 images and illustrations. The opening chapters do a good job of tackling the birth of the solar system and all the Jovian planets, describing how they formed, before delving into Jupiter itself, layer by layer, from atmosphere to core. The book does contain a substantial amount of historical background, both observational and theoretical, but this is interspersed throughout the text rather than being clumped at the start, which might otherwise have slowed readers down.

Apart from talking about very early observations of the planet, the book covers all the many missions and probes that have visited Jupiter from Pioneer onwards, slowly peeling back the layers and mysteries of our favourite gas giant. Sheehan and Hockey’s language is clear and mostly lacking in jargon, if occasionally effusive, and the book is well paced, if a bit clogged with facts and figures. The final chapter, “Juno to Jupiter”, is particularly interesting as it details some of our most recent discoveries thanks to the NASA mission, ending the book on a good note. While not revelatory, Jupiter is a useful and practical planetary-science primer.

  • Reaktion Books £25 hb 192pp

Towards renewable life-support systems in space

The atmosphere on Mars doesn’t support human life, so if humanity is going to travel to – and survive on – the red planet, we’ll need to pack our space rockets with everything needed to support life. That will be a heavy load, so it’s no surprise that space agencies are looking to develop life-support technologies and renewable fuel production that are lightweight and regenerative.

This makes photoelectrochemical cells, which convert solar energy to produce oxygen while simultaneously releasing hydrogen for fuel, an attractive option. Now, solar-fuel researchers in California and Germany have demonstrated a semiconductor half-cell that, unlike conventional designs, efficiently releases hydrogen in microgravity conditions.
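
For context, such cells drive the familiar water-splitting reaction, conventionally written (in acidic form) as two half-reactions; the half-cell studied here is the hydrogen-evolving photocathode side:

```latex
% Standard water-splitting half-reactions, written in acidic form
\begin{align*}
\text{Photoanode (oxygen evolution):}\quad
  & 2\,\mathrm{H_2O} \;\rightarrow\; \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \\
\text{Photocathode (hydrogen evolution):}\quad
  & 4\,\mathrm{H^+} + 4\,e^- \;\rightarrow\; 2\,\mathrm{H_2}
\end{align*}
```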

“We wanted to figure out if we can actually do photochemistry in microgravity and produce fuels,” says first author Katharina Brinkert from the California Institute of Technology. But testing and designing a cell that’s capable of working under zero-gravity conditions proved quite the challenge.

Dropping into microgravity

Experiments in zero-gravity aren’t easily conducted here on Earth. But at the Center of Applied Space Technology and Microgravity (ZARM) in Bremen, Germany, there is a specially designed drop tower that enables short experiments under weightless conditions.

Photo of the ZARM drop tower

In the experiments at ZARM, all the equipment must be installed in a single capsule that is launched up the 120 m tower by a pneumatic piston, reaching 168 km/h. During its free flight – up and back down – the capsule experiences weightlessness for 9.3 seconds, and it is in that window that experiments must be conducted. “We weren’t sure the experiment would work because we only had 9.3 seconds of microgravity to perform difficult electrochemistry,” says Brinkert.
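
Those figures are mutually consistent: as a quick sanity check (not a calculation from the paper), a capsule launched vertically at 168 km/h should spend roughly nine and a half seconds in ballistic free flight and peak close to the top of a 120 m tube.

```python
# Back-of-the-envelope ballistics for the quoted drop-tower figures
g = 9.81                        # gravitational acceleration, m/s^2
v0 = 168 / 3.6                  # launch speed, m/s (~46.7 m/s)
flight_time = 2 * v0 / g        # up-and-down free flight: ~9.5 s
peak_height = v0**2 / (2 * g)   # apex of the trajectory: ~111 m
print(f"free flight ≈ {flight_time:.1f} s, apex ≈ {peak_height:.0f} m")
```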

Brinkert says that the research team relied on the expertise of the scientists at ZARM to set up and automate the experiment. And to make it easier, the team chose to test the simplest half of the photoelectrochemical cell, where hydrogen is released at the photocathode.

The researchers already knew that the efficiency of the electrodes in traditional solar-fuel cells drops in microgravity. “In microgravity conditions you have an absence of buoyancy and that makes the gas bubbles produced stick to the electrode surface,” says Brinkert. These hydrogen bubbles gather together to coat the electrodes in a “froth layer” that increases resistance and reduces the current and voltage through the electrode.

As expected, the team’s traditionally designed indium-phosphide photocathodes with flat rhodium deposits experienced up to a 70% reduction in circuit voltage, with froth formation evident in the camera footage. To prevent the froth layer forming, the California-based solar-fuel scientists sought help from Michael Giersig at the Freie Universität Berlin, an expert in creating nanostructured surfaces that alter the properties of component materials. While nanostructuring catalysts is nothing new in the solar-fuel community, Giersig specializes in shadow nanosphere lithography (SNL), which had not previously been applied to solar-fuel cells in microgravity.

The nanostructured fuel cells were composed of the same indium phosphide, but the rhodium was deposited using SNL. Employing latex spheres as a type of template, the scientists were able to form the rhodium into 3D hexagonal nanostructures.

Tests in the drop tower revealed that the nanostructured cells performed much better than the traditional cells in microgravity, with only a 25% drop in voltage. Experiments in terrestrial conditions yielded similar efficiencies for the two catalyst designs, confirming that the difference in voltage generated in microgravity was related to surface topology.

Images of bubble formation in microgravity combined with theoretical analyses helped the researchers to understand how nanostructuring improved performance. “Our theory is that the bubbles are produced along the tips (of the 3D structures) and that these tips are so small that this limits the gas bubble growth,” says Brinkert.

Completing the cell

The researchers’ findings suggest a new design principle that could help realize simple and lightweight life-support systems for future space travel. Shaowei Chen, a professor of electrochemistry at the University of California, Santa Cruz, thinks the technique could also improve the performance of terrestrial water-splitting technologies.

Brinkert and colleagues are now keen to advance their studies to look at the other half of the cell, which releases oxygen for life support. “We’d like to have two half cells working together, splitting water at the photoanode and feeding electrons to the photocathode to reduce the species you want to produce for fuel,” she explains. “Ultimately we’d like to take such a device up to the international space station and do experiments there.”

The research is described in Nature Communications.

Road map aims to cut environmental impact

A highly detailed map reveals global patterns of current and potential future road infrastructure. The Global Roads Inventory Project (GRIP) integrated many previous datasets with the hope of informing global policies to reduce the environmental impacts of road development.

“Roads are important for socio-economic development by providing access to resources, jobs, and markets, but they also bring about various environmental impacts,” says Johan Meijer of the PBL Netherlands Environmental Assessment Agency. “Ecosystems are affected mainly because roads provide access to otherwise undisturbed areas. This results in habitat fragmentation, deforestation, and reduced wildlife abundance through disturbance, road kills and overhunting.”

Along with these issues, road construction also increases emissions of greenhouse gases and air pollution, driving global climate change and posing significant health risks.

Many previous efforts have mapped global road networks using georeferenced data from groups including governments, commercial and non-profit organizations, and through crowdsourcing. Typically, however, these maps are outdated and biased in coverage towards more developed nations, particularly in Europe and North America.

Meijer’s team aimed to solve these issues by unifying information from almost 60 previous datasets. The georeferenced data covered 222 countries and 21 million km of roads – over twice the total length of any existing dataset. By showing the position of every road they had data for – from local tracks to major highways – the researchers created a global road map with an unprecedented level of detail.

GRIP global road density map on 5 arcminute resolution (approximately 8 × 8 km at the equator), representing the densities summed across the five road types. (Courtesy: Johan R Meijer et al 2018 Environ. Res. Lett. 13 064006)

The team also created a regression model that incorporated variables including each country’s area, population density, and GDP. The researchers concluded that high densities of roads are most likely to be found in wealthy, densely populated countries.

“To derive potential future infrastructure developments, we applied our regression model to future population densities and GDP estimates,” Meijer explains. “We obtained a tentative estimate of 3 to 4.7 million km additional road length for the year 2050, a 20% increase compared to the current situation.”
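
The exact specification of the GRIP regression model is given in the paper; purely as an illustration of the general approach (synthetic data, assumed log-linear form), a country-level fit of road density against the kinds of variables named above might look like the sketch below, with the fitted coefficients then applied to projected 2050 population and GDP.

```python
import numpy as np

# Synthetic country-level data standing in for the real GRIP inputs
rng = np.random.default_rng(1)
n = 222                                     # countries covered by GRIP
log_pop_density = rng.normal(4.0, 1.5, n)   # log(people per km^2), synthetic
log_gdp_per_cap = rng.normal(9.0, 1.0, n)   # log(GDP per capita), synthetic
log_road_density = (0.6 * log_pop_density + 0.3 * log_gdp_per_cap
                    + rng.normal(0.0, 0.4, n))

# Ordinary least-squares fit of log road density on the two predictors
X = np.column_stack([np.ones(n), log_pop_density, log_gdp_per_cap])
coef, *_ = np.linalg.lstsq(X, log_road_density, rcond=None)
print("intercept, population-density and GDP coefficients:", np.round(coef, 2))
```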

The team concluded that many roads are likely to be built within globally important ecosystems. “Large increases in road length were projected for developing nations in some of the world’s last remaining wilderness areas, such as the Amazon, the Congo basin and New Guinea,” continues Meijer. “This highlights the need for accurate spatial road datasets to underpin strategic spatial planning in order to reduce the impacts of roads in remaining pristine ecosystems.”

The team’s focus is on supporting and improving global policy assessments and outlooks, according to Meijer. “In order to adequately quantify the benefits as well as the impacts of roads, using global assessment models, accurate and up-to-date georeferenced information…is essential.”

Ultrasound can trigger and enhance cancer drug delivery

Scientists in the UK have shown for the first time that focussed ultrasound from outside the body can improve the delivery of cancer drugs to tumours in humans. In the clinical trial, the team injected 10 patients with heat-sensitive capsules filled with a chemotherapy agent and then heated their tumours with ultrasound. The technique could reduce the dose of toxic drugs needed to treat cancers and lead to new ways of dealing with tumours that are hard to treat with conventional chemotherapy, according to the researchers.

Delivering an effective dose of drugs to a tumour while minimizing toxicity elsewhere in the body is a major challenge in cancer treatment. One promising idea involves using drug-filled nano-capsules. These capsules increase the half-life of chemotherapy agents in the body and are designed to accumulate – either passively or through active targeting – in tumours. But they do not always release their payload effectively.

In the latest study, described in The Lancet Oncology, Paul Lyon and colleagues at the University of Oxford conducted a 10-patient phase 1 clinical trial to test the safety and feasibility of using focussed ultrasound to heat liver tumours and trigger the release of a chemotherapy drug from a heat-sensitive, lipid-based carrier.

Temperature-sensitive carrier

All 10 patients had inoperable liver tumours. Under general anaesthetic, they each received a single intravenous dose of the chemotherapy agent doxorubicin encased in a temperature-sensitive liposomal carrier. A focussed ultrasound device, operating at a frequency of 0.96 MHz, was then used to heat the target liver tumour to over 39.5°C – the temperature at which the carrier is designed to release the drug.

In six patients the temperature of the tumour was monitored using a temporarily implanted probe, and tumour biopsies were taken before and after drug infusion, and again after ultrasound exposure, to estimate the drug concentration within the tumour at different treatment stages. In the remaining four patients, biopsies were taken only after ultrasound exposure, and the researchers used predictive models to calculate the ultrasound parameters needed to heat the tumours to 39.5–43°C – a procedure they say better reflects how the technique might be used in clinical practice.
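
The trial’s own predictive model is described in the paper; for context only, thermal predictions for focused-ultrasound heating of tissue are commonly built on the Pennes bioheat equation, which balances heat conduction, cooling by blood perfusion and the absorbed acoustic power:

```latex
% Pennes bioheat equation (standard form, shown for context only)
\rho c \,\frac{\partial T}{\partial t}
  = \nabla \cdot \bigl( k \,\nabla T \bigr)
  - \rho_b c_b \, \omega_b \bigl( T - T_a \bigr)
  + Q_{\mathrm{us}}
```

Here ρ, c and k are the tissue density, specific heat capacity and thermal conductivity, the ω_b term describes cooling by blood perfusion at arterial temperature T_a, and Q_us is the locally absorbed ultrasound power density.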

Following focussed ultrasound exposure, doxorubicin concentrations within the tumours increased by an average of 3.7 times. In seven out of the 10 patients there was at least a doubling of drug concentration within the tumour, and one patient showed an estimated ninefold increase after ultrasound heating.

Safe trigger

The researchers say that their results build on decades of promising preclinical research to demonstrate that it is possible to safely trigger the release of cancer drugs deep within the body using focussed ultrasound. They add that the several-fold average increase in drug concentration seen in the trial highlights the clinical potential of such techniques.

“Only low levels of chemotherapy entered the tumour passively. The combined thermal and mechanical effects of ultrasound not only significantly enhanced the amount of doxorubicin that enters the tumour, but also greatly improved its distribution, enabling increased intercalation of the drug with the DNA of cancer cells,” explains Lyon.

“A key finding of the trial is that the tumour response to the same drug was different in regions treated with ultrasound compared to those treated without, including in tumours that do not conventionally respond to doxorubicin,” adds Oxford’s Mark Middleton. “The ability of ultrasound to increase the dose and distribution of drug within those regions raises the possibility of eliciting a response in several difficult-to-treat solid tumours. This opens the way not only to making more of current drugs but also targeting new agents where they need to be most effective.”

Precise anatomical location

Jeff Karp, professor of medicine at Brigham and Women’s Hospital in Boston, US, told Physics World that “this is a very well performed study in human patients”. He adds: “Although some limitations exist, this study demonstrates that by using thermo-sensitive liposomes in combination with ultrasound, it’s feasible to safely enhance the intratumoral delivery of therapeutic molecules to a precise anatomical location in human patients and improve the therapeutic outcome.”

Karp says that the next step in this line of research is to test this approach in other solid tumours and with other drugs. He adds that the development of ultrasound devices that can precisely target different types and sizes of tumours, and ultrasound-sensitive delivery vehicles may further improve drug delivery and reduce side effects.

 
