Dose painting enhances preclinical radiotherapy studies

Preclinical studies play a valuable role in developing novel treatment strategies for transfer into the clinic. The introduction of image-guided preclinical radiotherapy platforms has enabled radiobiological studies using complex dose distributions, with photon energies and beam sizes suitable for irradiating small animals. But there remains a drive to create platforms with ever more complex irradiation capabilities.

Beam shape modulation could improve the conformality of preclinical irradiation, but systems based on fixed aperture collimators cannot easily deliver beam sizes down to 1 mm. Jaw techniques such as the rotatable variable aperture collimator (RVAC) offer another approach, but these are challenging to implement robustly for millimetre-sized fields and are, as yet, unvalidated.

A team at MAASTRO Clinic has now proposed another option: synchronizing 3D stage translation with gantry rotation to deliver irradiations from multiple beam directions. They demonstrated that this dose-painting technique can improve the precision of radiation delivery to complex-shaped target volumes (Br. J. Radiol. 10.1259/bjr.20180744).

The increased dose conformality that dose painting offers could benefit a range of preclinical studies. “The vast majority of radiobiology studies administered non-conformal dose distributions, which is not realistic compared to clinical practice,” explains senior author Frank Verhaegen. “Also, for studies on normal tissue response, being able to exactly target certain structures can lead to more relevant research results.” He adds that the technique can also be employed to deliver non-uniform doses, to study boost methods, for example, where hypoxic tumour regions receive higher dose.

Paint the target

Verhaegen and colleagues used the SmART-ATP planning system to design plans for delivery on the X-RAD 225Cx image-guided preclinical radiotherapy research platform, and simulated the plans using a Monte Carlo (MC) model of the platform.

Dose painting uses heterogeneous irradiations from multiple directions. For each beam direction, a 2D area is defined — based on the projection of the target volume — and covered by a grid of positions for a small beam, each modelled with its own single-beam MC simulation. The team validated their MC model using radiochromic film measurements of the field shape and dose output of several complex fields, including a 225 kVp, 2.4 mm diameter beam.

The researchers considered two scenarios based on a CT image of a mouse with an orthotopic lung tumour. In case 1, the target was the tumour, while case 2 targeted a length of spinal cord. They created dose-painting plans to deliver 10 Gy to the target using the 2.4 mm beam, resulting in 256 beams for case 1 and 280 beams for case 2. Beam-on times were optimized to achieve a D95% and V95% of 100%.
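
The beam-on time optimization can be viewed as a simple inverse problem. The sketch below shows that step in minimal form, assuming a precomputed per-beam dose matrix from the MC simulations; the function names and the non-negative least-squares solver are illustrative choices, not the SmART-ATP implementation.

```python
import numpy as np
from scipy.optimize import nnls

def optimise_beam_on_times(dose_per_second, prescription_gy=10.0):
    """Fit non-negative beam-on times (s) so that the summed dose from all
    pre-simulated beams approaches the prescription in every target voxel.

    dose_per_second : (n_voxels, n_beams) array of MC-computed dose rates,
                      one column per single-beam simulation (Gy per second).
    """
    target = np.full(dose_per_second.shape[0], prescription_gy)
    times, _ = nnls(dose_per_second, target)  # non-negative least squares
    return times

def coverage_metrics(dose, prescription_gy=10.0):
    """D95 (dose received by at least 95% of target voxels) and V95
    (fraction of target voxels receiving at least 95% of the prescription)."""
    d95 = np.percentile(dose, 5)
    v95 = np.mean(dose >= 0.95 * prescription_gy)
    return d95, v95

# Toy example: 500 target voxels, 256 candidate beam positions (invented numbers)
rng = np.random.default_rng(0)
dose_rate = rng.uniform(0.0, 0.05, size=(500, 256))  # Gy/s per unit beam-on time
t = optimise_beam_on_times(dose_rate)
d95, v95 = coverage_metrics(dose_rate @ t)
print(f"total beam-on time: {t.sum():.0f} s, D95: {d95:.2f} Gy, V95: {v95:.0%}")
```

In the real planning problem, dose limits for organs at risk and penalties on total delivery time would enter the optimization alongside target coverage.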

For comparison, the researchers also created plans using a fixed aperture collimator and an RVAC. They selected four gantry angles for case 1 (anterior, posterior, lateral and medial directions) and two opposed lateral angles for case 2. Fixed aperture plans used the smallest available collimator that achieved complete target coverage: a 5 mm diameter beam for case 1 and a 20 × 20 mm beam for case 2. RVAC plans were created using the optimal angle and beam aperture size.

Balancing benefits

All irradiation methods achieved good target coverage and sharp cumulative dose–volume histograms for both cases. For the lung tumour irradiation, the increased conformality of the RVAC and dose-painting methods resulted in considerably lower doses to the left lung, trachea and heart compared with fixed aperture irradiation. Dose painting led to slightly lower dose homogeneity in the target.

For the spinal cord case, dose painting resulted in slightly better dose homogeneity in the target than achieved by the other two methods. Dose painting gave the best conformality, with a slightly larger volume of low doses, but considerably lower volume of higher dose, to both lungs. Fixed aperture irradiation resulted in far higher doses to all avoidance volumes.

These results suggest that for targets that match available fixed aperture field shapes and sizes, dose painting adds limited value. But where no fixed aperture collimators match the required field size, dose painting may surpass the plan quality achievable with RVAC. A major benefit of dose painting is that it can achieve conformal irradiation of concave target volumes. It also provides increased versatility to avoid organs-at-risk and deliver heterogeneous dose distributions to targets.

One disadvantage of dose painting is the greater radiation delivery duration, which can increase the risk of motion errors. Case 1, for example, required a total beam-on time of 1587 s with dose painting, compared with roughly 225 s for the other approaches. This results from the unavoidable trade-off between larger beam sizes with higher dose rates and smaller beams with higher spatial resolution but increased beam-on times. The team note that beam size should ideally be determined by the planning system based on time and plan quality constraints.

Dose painting also requires considerably more time for radiation planning and calculation. The authors suggest that with tighter process integration and further software optimization, data processing issues should not limit practical implementation of this approach.

The team note that since this irradiation strategy only requires more advanced software and no hardware modifications, it can be used to increase the versatility of current-generation image-guided preclinical irradiation platforms. “The plan now is to implement this new method in our treatment planning system and combine it with beam-optimization methods,” says Verhaegen.

Optical microfibres illuminate brain dynamics in detail

Optical fibres can be used to monitor neural activity by detecting the presence of a fluorescent protein. Researchers in the US have shown that bundles of hundreds or thousands of optical microfibres separate and spread after penetrating the brain, occupying a 3D volume of tissue. They also demonstrated that, when sampling density is high enough, source-separation techniques can be used to isolate the behaviour of individual neurons. The work will further the field of systems neuroscience by revealing how neural circuits relate to animal behaviour (Neurophotonics 10.1117/1.NPh.5.4.045009).

A relatively new way to monitor and influence brain activity, optical interfacing is used in transgenic animals whose neurons have been modified — either before birth or by an engineered virus introduced into the brain — to express a specific protein.

The protein used by Nathan Perkins and colleagues, at Boston University and the University of California, San Diego, changes shape when bound to calcium and, in this state, fluoresces green in response to blue light. Calcium-ion kinetics underlie one type of action potential in brain cells, so observing how these ions accumulate and dissipate gives an indication of neural activity.

Because of the scattering properties of brain tissue, direct illumination of target neurons deeper than about a millimetre is impossible to achieve with precision. Instead, excitation light and the resulting fluorescent signal are typically delivered and collected using implanted optical fibres.

The group previously demonstrated that bundles of several thousand optical microfibres — each just 8 μm across, or about the size of a neuron cell body — can be introduced to a depth of 4 mm without badly damaging the brain or provoking an immune response. As the tissue resists penetration by the microfibres, tiny mechanical forces cause their trajectories to diverge, spreading the individual tips over an area of about 1.5 mm².

“The method can be readily extended down to 5 mm without any modification. Beyond that, a new challenge emerges, which is the flexibility of the fibres,” says Perkins. “This is not deep at all by human brain standards, but for mice and songbirds this allows us access to much of the brain.”

Simulated signals

Using stochastic simulations of the set-up, Perkins and colleagues modelled the fluorescence signal obtained for different neuron densities and microfibre counts, finding that the latter had the greatest influence on how many neurons could be detected simultaneously. At the lower end of the range, simulating a bundle composed of fewer than 200 microfibres, there was little overlap between excited regions, and each tip could detect the activity of a separate population of just two or three neurons.

As the microfibre count increased, the excitation light was delivered more uniformly throughout the modelled volume, causing more and more neurons to fluoresce at a level above the detection threshold. With a sufficient density of microfibres — and therefore illumination strength — the field-of-view of each tip expanded to encompass so many fluorescing neurons that any single cell would likely be recorded by more than one microfibre.

Under such a regime, the researchers showed, fluorescence signals picked up by separate microfibres can be correlated, enabling source separation techniques to be used to isolate the activity of individual neurons. The method is easiest to apply when signals are sparse (so the signature of a given neuron is not drowned out by background activity), and is especially effective with fluorescent indicators that peak sharply and fade quickly thereafter.
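
The paper does not prescribe a particular algorithm for this step, so the sketch below uses non-negative matrix factorization purely as a stand-in source-separation technique; the fibre count, neuron count and mixing model are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)

# Invented dimensions: 400 microfibres recording 2000 time samples from
# 30 neurons whose calcium transients overlap across neighbouring fibres.
n_fibres, n_neurons, n_samples = 400, 30, 2000

# Each neuron contributes to a handful of nearby fibre tips (sparse mixing).
mixing = rng.random((n_fibres, n_neurons)) * (rng.random((n_fibres, n_neurons)) < 0.05)

# Sparse, fast-decaying calcium-like transients for each neuron.
spikes = (rng.random((n_neurons, n_samples)) < 0.005).astype(float)
kernel = np.exp(-np.arange(50) / 10.0)            # sharp rise, quick decay
traces = np.array([np.convolve(s, kernel)[:n_samples] for s in spikes])

# Recorded fluorescence: mixture of neuronal traces plus sensor noise.
recordings = mixing @ traces + 0.01 * rng.random((n_fibres, n_samples))

# Factorise the recordings back into per-neuron traces and per-fibre weights.
model = NMF(n_components=n_neurons, init="nndsvda", max_iter=500)
fibre_weights = model.fit_transform(recordings)   # (n_fibres, n_neurons)
recovered_traces = model.components_              # (n_neurons, n_samples)
print("reconstruction error:", model.reconstruction_err_)
```

In practice the number of sources is not known in advance, and — as the authors note — sparse activity and fast-decaying indicators make this kind of factorization far better conditioned.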

Until now, this degree of detail had been available only for superficial areas of the brain that can be imaged directly. At greater depths, researchers had observed individual neurons in isolation, or broad patterns of activity at the regional scale, but the long-term behaviour of a population of neurons was inaccessible. Understanding dynamics at this level is crucial to learning how the brain encodes information.

Music on the brain

An example of the sort of phenomenon that can be investigated with this approach has fascinated Perkins since the early days of his research: “Zebra finches are remarkable birds, where each male bird learns a unique song, inspired by that of their tutor (usually their father),” he explains. “They can then produce this song with amazing precision hundreds or even thousands of times per day for the rest of their life.”

The mystery, says Perkins, is that “neurons are able to produce this precise, stable behaviour even though they, like much of biology, are noisy and inconsistent”.

Previous experiments, using optical recording at the brain’s surface and surgically implanted electrodes at depth, showed that the patterns of activity behind the bird’s vocal performance remain stable even as the population of participating neurons varies. Although similar in form to the splayed optical microfibres used in the most recent research, the deep electrodes could not track the activity of individual neurons for longer than a day.

“Electrophysiological and optical interfacing offer very distinct trade-offs,” says Perkins. “Electrophysiological recording has much better time resolution and requires no fluorescent protein, but electrodes rarely can record from a specific cell for a long period of time. Optical recording has more longevity, allowing specific neurons to be tracked over many days, and the ability to target particular neural sub-populations.”

The optical method has another advantage in that it can control neural activity as well as monitor it. This is achieved by engineering neurons so that they express the proteins necessary to form light-gated ion channels — light-sensitive versions of the structures that regulate ion transport across cell membranes.

Because these ion channels and the fluorescent indicator proteins respond to different frequencies of light, the two processes could be made to occur simultaneously. “By being able to record what the neurons are doing in one context and re-activating the same neurons in a different context, it is possible to further understand their role in controlling behaviour,” says Perkins.

Physicists propose huge European neutrino facility

An international team of researchers has proposed an ambitious new experiment that would involve firing neutrinos from a particle accelerator in Russia to a detector 2500 km away in the Mediterranean Sea. The researchers claim that the facility would provide unparalleled insights into the properties of neutrinos and elucidate the mystery of why matter dominates over antimatter in the universe.

Neutrinos are fundamental particles that are created in huge numbers by cosmic sources but can also be produced by nuclear reactors and particle accelerators. As they interact only weakly with matter they are difficult to detect. There are currently three known types of neutrino, which can oscillate between their different “flavours” as they travel. It was long believed that neutrinos have no mass, but we now know that they have one of three tiny, discrete masses. Yet scientists have not yet been able to determine the relative ordering of the three masses, nor the extent to which neutrinos violate charge-parity symmetry — a finding that could help explain why the universe is dominated by matter rather than antimatter.

The experiment proposal has substantial support within the particle physics community in Europe as well as in Russia

Dmitry Zaborov

There are already several “long-baseline” accelerator neutrino experiments in operation or under development that are attempting to shed further light on the nature of neutrinos. The T2K experiment in Japan sends neutrinos with energies around 600 MeV from the Japan Proton Accelerator Research Complex in Tokai to the Super-Kamiokande detector some 295 km away. The US-based NOvA experiment, meanwhile, operates at 2 GeV over the 810 km distance between Fermilab near Chicago and a detector in Minnesota. The Deep Underground Neutrino Experiment (DUNE), currently under construction, will produce a 3 GeV beam of neutrinos at Fermilab and send it 1300 km to an underground detector in South Dakota.

Maximum oscillation

Researchers in Europe have now proposed their own long-baseline facility. A collaboration of 90 researchers from nearly 30 research institutes have published a letter of interest to build the Protvino-ORCA (P2O) experiment. In the letter, they explain how they would upgrade a 70 GeV synchrotron particle accelerator at Protvino — 100 km south of Moscow — to generate a neutrino beam. According to the plans, this would then be sent to the Oscillation Research with Cosmics in the Abyss (ORCA) detector, which is currently being built off the coast of Toulon, France by the KM3NeT collaboration.

Neutrinos reach maximum oscillation at different distances depending on their energy. With its 2595 km baseline, P2O would achieve maximum oscillation at neutrino energies of around 4–5 GeV. Astroparticle physicist Paschal Coyle, who belongs to the KM3NeT collaboration, says that these parameters make P2O ideal for disentangling the effects of mass ordering and charge-parity violation. “In other long baseline experiments there are ambiguities that make it harder to decouple the two contributions,” he adds.
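
As a rough, back-of-the-envelope check — not taken from the letter of interest — the standard two-flavour vacuum oscillation formula shows why that baseline picks out those energies:

```latex
P(\nu_\mu \to \nu_\mu) \;\approx\; 1 - \sin^2(2\theta_{23})\,
  \sin^2\!\left(\frac{1.27\,\Delta m^2_{31}\,[\mathrm{eV}^2]\;L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right),
\qquad
E_{\mathrm{max}} \;\approx\; \frac{2 \times 1.27\,\Delta m^2_{31}\,L}{\pi}
  \;\approx\; \frac{2 \times 1.27 \times 2.5\times10^{-3} \times 2595}{\pi}\;\mathrm{GeV}
  \;\approx\; 5\;\mathrm{GeV}.
```

This simple vacuum estimate lands at the upper end of the quoted 4–5 GeV range; matter effects inside the Earth, which the real analysis must account for, modify the numbers somewhat.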

The realisation of the P2O experiment will, however, not come soon. It would require the funding and construction of a new neutrino beamline at the accelerator in Protvino, aligned towards the ORCA site, as well as an increase in the accelerator’s beam power from 15 kW to at least 90 kW. It may also require an upgrade to the ORCA detector, which has been designed to detect atmospheric neutrinos and is still under construction.

Dmitry Zaborov from the Kurchatov Institute in Moscow, who is one of the authors of the letter, says that P2O has been proposed for inclusion in the upcoming update to the European strategy for particle physics and also has the support of the KM3NeT/ORCA community. “The experiment proposal has substantial support within the particle physics community in Europe as well as in Russia,” he adds. The strategy update is due to be concluded in May 2020.

Superconductivity wins dancing contest, scientists master the cheese fondue, and the first ever Web browser returns

Explaining your research, especially as a PhD student, can be a struggle. But communicating it via dance – that’s a challenge. Last week, the 11th annual “Dance your PhD” contest, sponsored by Science Magazine and the American Association for the Advancement of Science (AAAS), selected its winner: a physicist working on superconductivity.

Out of 50 submissions, the judges chose the work of Pramodh Senarath Yapa, who’s pursuing his doctorate at the University of Alberta, Canada. The topic of his research, superconductivity, relies on electrons pairing up when cooled below a certain temperature, and imagining these electrons as dancers was a natural choice for Yapa.

You can watch his winning submission here:

 

If you’ve got any special occasions to celebrate with friends, a good idea would be to treat yourselves to a cheese fondue. Wildly popular in the 1970s, this Swiss dish is now making a comeback.

It’s timely, then, that scientists have uncovered the secret of the perfect cheese fondue – which they say will optimize both texture and flavour. They found that you can prevent your fondue mixture from separating by including a minimum concentration of starch, which is 3 g for every 100 g of fondue. Also, according to this research, adding a bit of wine can help the mixture flow and taste better.

The results were published in ACS Omega.

Next month marks the 30th anniversary of WorldWideWeb, the world’s first ever Web browser. Back in 1989 Sir Tim Berners-Lee proposed a global hypertext system to solve the growing problem of information loss at CERN. This system, which he later named the “World Wide Web”, has since evolved into something we all use in our everyday lives.

In honour of this anniversary, a CERN-based team has rebuilt WorldWideWeb, so it can now be simulated and viewed in any modern browser. For those of you who are feeling nostalgic or curious, CERN has released links not only to the rebuilt browser, but also to some helpful instructions and explanations on how to use it.

The top five films about science or scientists 

Four years have passed since Physics World proclaimed “Science cleans up at the Oscars” — and things have only got better since then. Last year, The Shape of Water became the first sci-fi film to win Best Picture; this year, Black Panther might make it back-to-back wins for the genre (although Roma — a drama by Alfonso Cuarón, who won Best Director for Gravity in 2014 — is the bookmakers’ favourite).

The Shape of Water and Black Panther are both wonderful science-themed films. In particular, the Black Panther character Princess Shuri has been hailed as a role model for encouraging girls to study STEM subjects. But what are the best films about science or scientists? It’s a question you might like to discuss as you watch this year’s Oscars ceremony, which begins on Sunday 24 February at 5 p.m. Los Angeles time.

To help get your discussion under way, below is my suggestion for the top five films about science or scientists – in no particular order. They span 51 years and include two Stanley Kubricks, one Ridley Scott, one Steven Spielberg, and one film by the not-so-famous Shane Carruth.

None of these films were big Oscar winners: they only have four wins between them, and none of those were in the “big five” categories of Best Picture, Best Director, Best Actor, Best Actress or Best Screenplay. But all of them capture a special something about science or scientists.

Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb

1964 | Director: Stanley Kubrick

Dr. Strangelove is best known as a black comedy, with the actor and comedian Peter Sellers playing three of the lead roles. (Sellers was paid one million dollars for the film, leading Stanley Kubrick to remark: “I got three for the price of six.”) It is also a brilliant pre-echo of the kinds of issues discussed by Physics World‘s Anna Demming in her recent review of Hello World: How to be Human in the Age of the Machine: “when it comes to the stars of the numerous hapless human–machine encounters recounted throughout the book, their ill-advised approach was far from obvious”.

Fun Oscars fact: Dr. Strangelove was the first sci-fi film to be nominated for Best Picture. It lost to My Fair Lady.

2001: A Space Odyssey

1968 | Director: Stanley Kubrick

Let’s be honest: the day-to-day work that leads to scientific discovery can sometimes be mind-numbingly boring. In the original 161-minute version of 2001, Kubrick tried to capture this monotony with a lengthy scene in which astronaut Dave Bowman simply jogs around and around (and around) the interior of the Discovery One spacecraft. It worked a bit too well: at the film’s premiere, the audience began booing, hissing, and saying “Let’s move it along” and “Next scene”. Following that disastrous premiere — 241 people walked out of the theatre, including executives from the production company MGM — Kubrick cut 19 minutes from the film.

Fun Oscars fact: Kubrick was beaten to Best Director for 2001 by Carol Reed, who directed the musical Oliver! (“I think I better think it out again,” as Fagin might have said about one of the biggest snubs in Oscars history.)

Jurassic Park

1993 | Director: Steven Spielberg

Steven Spielberg’s third sci-fi blockbuster – after Close Encounters of the Third Kind (1977) and E.T. the Extra-Terrestrial (1982) – is the film most cited by scientists and science journalists when explaining new findings to the public. Between 1998 and 2017, Jurassic Park was referenced 21 times in Nature News, from “Jurassic Park got it right” in an article about how theropod dinosaurs such as velociraptors used their tails, to “Say goodbye to Jurassic Park” in a discussion about the difficulty of restoring species that have been extinct for more than a few thousand years. You’ll even find the film’s best quote being used to start an article in Physics World.

Fun Oscars fact: Of the films in this top five list, Jurassic Park is the only one to have had a good night at the Oscars — it won all three categories in which it was nominated (Best Sound Editing, Best Sound Mixing and Best Visual Effects). The only other Oscar winner in this list is 2001 (Best Visual Effects).

Primer

2004 | Director: Shane Carruth

As Niels Bohr once said: “How wonderful that we have met with a paradox. Now we have some hope of making progress.” He would have enjoyed Primer, a cult independent film in which two scientists — along with us, the audience — are brutally confronted with the paradoxes of time travel. It is the ultimate “hard” sci-fi film. Where 2001 was fairly light on exposition, Primer is an exposition-free zone. For 2001, Arthur C Clarke’s novel helpfully filled in the gaps; for Primer, there are (mercifully) a number of fan websites that do a fabulous job of explaining how it all works.

Fun Oscars fact: Primer is definitely not the kind of film that gets nominated for Oscars. However, it did win the Grand Jury Prize at the Sundance Film Festival, which is a prestigious event for independent film-makers.

The Martian

2015 | Director: Ridley Scott

If this was a list of the top five sci-fi films, Ridley Scott would probably have two entries: Alien (1979) and Blade Runner (1982). But this is a list of the top five films about science or scientists — and there is no film better than The Martian at showing us a scientist using science (and, on occasion, duct tape) to solve problems. In the words of astronaut Mark Watney: “In the face of overwhelming odds, I’m left with only one option. I’m gonna have to science the shit out of this.”

Fun Oscars fact: The Martian, which was nominated for seven Oscars in 2016, and Interstellar, which received five nominations the year before (it won Best Visual Effects), both feature Matt Damon marooned on an alien planet.

Preparing for a post-LHC future

Almost seven years after the discovery of the Higgs boson at CERN, how would you sum up the current state of particle physics?

We are at a very exciting time in particle physics. On the one hand, the Standard Model – the theory that describes the elementary particles we know and their interactions – works very well. All the particles predicted by the Standard Model have been found, with the Higgs boson, discovered at the Large Hadron Collider (LHC) in 2012, being the last missing piece. In addition, over the past decades the predictions of the Standard Model have been verified experimentally with exquisite precision at CERN and other laboratories around the world. On the other hand, we know that the Standard Model is not the ultimate theory of particle physics because it cannot explain observations such as dark matter, the dominance of matter over antimatter in the universe and many other open questions, so there must be physics beyond the Standard Model.

Precise measurements of known particles and interactions are just as important as finding new particles

Is it concerning that the LHC has failed to spot evidence for particles beyond the Standard Model?

The discovery of the Higgs boson is a monumental discovery. It is one that has shaped our understanding of fundamental physics and has had an enormous impact not only on particle physics but also on other fields such as cosmology. We are now confronted with addressing other outstanding questions and this calls for new physics, for example, new particles and perhaps new interactions. We have found none so far, but we will continue to look.  To improve our current knowledge and to detect signs of new physics, precise measurements of known particles and interactions are just as important as finding new particles.

How are you tackling this at the LHC?

The Higgs boson is related to the most obscure sector of the Standard Model, as the part that deals with the Higgs boson and how it interacts with the other particles raises many questions. So we will need to study the Higgs boson in greater detail and with increasing precision at current and future facilities, which could be the door into new physics. At the LHC, we are addressing the search for new physics on the one hand by improving our understanding of the Higgs boson and on the other hand by looking for new particles and new phenomena.

CERN recently released plans for the Future Circular Collider (FCC) – a huge 100 km particle collider that would cost up to $25bn. Given the huge cost, is it a realistic prospect?

We are currently studying possible colliders for the future of particle physics beyond the LHC. We have two ideas on the table. One is the Compact Linear Collider (CLIC), which would produce electron–positron collisions at energies from 380 GeV – to study the Higgs boson and the top quark – up to 3 TeV. The other is the FCC, a 100 km ring that could host an electron–positron collider as well as a proton–proton machine operating at a collision energy of at least 100 TeV. We are currently at the stage of design studies, so neither of these projects is approved. At the beginning of next year, the European particle-physics community, which is updating the roadmap for the future of particle physics in Europe, will hopefully give a preference to one of them. Both CLIC and the FCC would be realized in several stages, so the cost would be spread over decades. As we push the technologies, which will also benefit society at large as the history of particle physics shows, we should also be able to reduce the cost of these projects.

Accelerators have been our main tool of exploration in particle physics for many decades and they will continue to play a crucial role also in the future

How might the FCC expand our knowledge of the Higgs?

The FCC as an electron–positron collider would allow us to measure many of the Higgs couplings – the strength of its interactions with other particles – with unprecedented precision.  The proton–proton machine would complement these studies by providing information on how the Higgs boson interacts with itself and how the mechanism of mass generation developed at a given time in the history of the universe. Both machines combined would give us the ultimate precision on the properties of this very special and still quite mysterious particle.

Japan is expected to give some indication about plans to build the International Linear Collider (ILC) in March. If Japan goes ahead, would CERN get behind the ILC as the next big machine in particle physics?

The fact that Japan is considering building a linear electron–positron collider demonstrates that there is great interest in the study of the Higgs boson as an essential tool for advancing our knowledge of fundamental physics. If Japan decides to go ahead with the ILC, it will undertake negotiations with the international community – Canada, Europe and CERN, the US and other possible partners – to build a strong collaboration. In this case, the most likely option for CERN would be to build a proton–proton circular collider that is complementary to the ILC.

If Japan goes for the ILC and China opts to build its own 100 km Circular Electron Positron Collider (CEPC), is there a danger CERN would get left behind?

The ILC and the CEPC are both electron–positron colliders. We know that we also need a proton–proton collider, which would allow us to make a big jump in energy and search for new physics by producing possible new, heavy particles. The FCC proton–proton collider would have an ultimate collision energy almost a factor of 10 larger than the LHC and would increase our discovery potential for new physics significantly.

You are halfway through your term as director-general of CERN; what are your plans for the second half?

One major goal in the months to come for our community, including myself, is to update the European strategy for particle physics – a process that will be concluded in May 2020. We will have to identify the right priorities for the field and start preparing the post-LHC future. Accelerators have been our main tool of exploration in particle physics for many decades and they will continue to play a crucial role also in the future.  The outstanding questions are compelling and difficult and there is no single instrument that can answer them all. For instance, we don’t know what the best tool is to discover dark matter. It could be an accelerator, an underground detector looking for dark matter particles from the intergalactic halo, a cosmic survey experiment or something else. Thus, we have to deploy all of the experimental approaches that the scientific community has developed over the decades and accelerators have to play their part.

Collaborations in particle physics are getting larger and larger, sometimes consisting of thousands of scientists. Do you think there is a danger of “group think”?

Collaboration is very much in the DNA of particle physics. CERN brings together 17,000 physicists, engineers and technicians from more than 110 different countries around the world. Collaboration is fundamental because the type of physics we do requires instruments such as accelerators, detectors and computing infrastructure that no single country could ever realise alone. So we need to pull together the strengths, the brains and the resources of many countries to be successful in this endeavour. Small projects can obviously also do great science, but the Higgs boson, gravitational waves and neutrino oscillations could not have been discovered by small experiments conducted by small groups. Both small and big projects are needed; it depends on the question you want to address.

What role can science diplomacy play in today’s turbulent times?

Science can play a leading role in today’s fractured world because it is universal and unifying. It is universal because it is based on objective facts and not on opinions – the laws of nature are the same in all countries. Science is unifying because the quest for knowledge is an aspiration that is common to all human beings. Thus, science has no passport, no race, no political party and no gender.  Clearly, places like CERN and other research institutions cannot solve geopolitical conflicts. However, they can break down barriers and help young generations grow in a respectful and tolerant environment where diversity is a great value. Such institutions are brilliant examples of what humanity can achieve when we put aside our differences and focus on the common good.

High-speed camera probes the peeling behaviour of sticky tape

Researchers in France have made new discoveries about the way in which sticky tape peels away from surfaces. Stéphane Santucci and colleagues at the École Normale Supérieure in Lyon used observations from a high-speed camera to explore the complex physics involved in the peeling process on microscopic scales. Their observations have allowed them to identify strict mathematical relationships underlying the process, but questions still remain over the causes of some of its observed properties.

Peeling tape away from a surface is a familiar, yet often frustrating experience; while it remains firmly stuck to the surface at some points, it can peel away too quickly at others. The physics underlying this behaviour has been poorly understood until recently, but studies in 2010 uncovered a characteristic pattern in which peeling repeatedly stops and starts on scales of millimetres. Santucci and colleagues explored the scenario in further detail in 2015, revealing that this macroscopic stick–slip behaviour arises from energy being released close to the front separating the stuck and peeled tape.

Now, Santucci’s team have studied the process in unprecedented detail using a high-speed camera mounted on a microscope, together with an electric motor that precisely controls the velocity and angle of the peel. Their measurements revealed that the longer the tape remains stuck, the larger the distance covered by the subsequent slip, with the two quantities following a strict cube-root relationship. Slip distances also increased linearly with the peeling angle and the stiffness of the tape.
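
Written out schematically — the symbols below are chosen for illustration and are not taken from the paper — the reported scalings are:

```latex
\ell_{\mathrm{slip}} \;\propto\; t_{\mathrm{stick}}^{1/3},
\qquad
\ell_{\mathrm{slip}} \;\propto\; \theta_{\mathrm{peel}},
\qquad
\ell_{\mathrm{slip}} \;\propto\; k_{\mathrm{tape}},
```

where ℓ_slip is the slip distance, t_stick the time spent stuck, θ_peel the peeling angle and k_tape the bending stiffness of the tape.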

Santucci and colleagues propose that the behaviour arises because both the tape’s adhesive, and the point at which it bends, build up elastic potential energy during sticking. Over the course of a slip, this potential is subsequently released at the tape’s separation front in the form of kinetic energy. From these insights, the researchers constructed a theoretical model incorporating the cube-root relationship underlying this energy budget. Their simulations accurately predicted the peeling behaviours of tapes with a variety of properties.

Intriguingly, the team’s models were able to recreate the waves that propagate across the separation front, perpendicular to the tape, at speeds of up to 900 m/s during a slip. The cause of this behaviour has yet to be explained, but the researchers compare it with the development of cracks in solid materials – since in both scenarios new surfaces are created along propagating lines. In future work, Santucci’s team hope to use their simulations to learn more about this mysterious behaviour.

Full details are reported in Physical Review Letters.

Plasmonics technologies take on global challenges

Naomi Halas from Rice University

In 1861 James Clerk Maxwell rewrote our understanding of light when he began publishing a description of it in terms of electric and magnetic fields leapfrogging each other as they propagate through space. The resulting “Maxwell equations” arguably rank among the most elegant equations in physics – something that took me a long time to appreciate as I yawned through undergraduate electromagnetism courses wondering whether the topic could ever get interesting. Nonetheless I went on to spend a further three years crunching through Maxwell equations to calculate electric field enhancements around silver nanoparticles while studying “plasmonics” under the guidance of David Richards, professor of physics and vice-dean at King’s College London. At that point I was forced to concede that electromagnetism had genuinely gotten interesting. So it was with more than a little excitement earlier this week that I walked past the lecture theatres and rooms where Maxwell had researched and taught at my old stomping ground, King’s College London, on my way to hear Naomi Halas, one of the pioneers of the field, talk at an event organised by Anatoly Zayats, now chair of experimental physics and head of the Photonics & Nanotechnology Group at King’s College London (and my former PhD examiner).

In her own words, Halas was working in plasmonics before it even had a name. Hailing from Rice University in the US, she is the Stanley C Moore Professor in electrical and computer engineering, professor of biomedical engineering, chemistry, physics and astronomy, and director of the Laboratory for Nanophotonics. While she laughs off my epithet of “plasmonics pioneer” as we talk over coffee before the lecture, Zayats is quick to add, “Naomi is not only a pioneer in plasmonics but brought the field to real applications.” Her lecture highlights just how far these applications have now reached.

Halas starts by taking us back to a time that even predates Maxwell, to the person Maxwell appealed to for a reference for his appointment at King’s: Michael Faraday. (Zayats has shown Halas and me this very letter, now on display in the physics department.) In “From Faraday to tomorrow” Halas describes how Faraday studied flasks of liquids containing nanoparticles, dye-less solutions with vibrant hues later explained in the theories of Gustav Mie. The colours arise from collective quantized oscillations of electrons in nanoscale structures in response to light, a behaviour of the so-called electron plasma that eventually acquired the name “plasmons”. On resonance, the scattering of light by a plasmonic nanoparticle shapes an incident light field in a way that greatly outsizes the particle itself, and this attribute – together with the ability to tune the resonance with nanoparticle size, composition and shape – has sparked a wide range of new technologies.

As early as 1951, Arthur Aden and Milton Kerker had proposed that more complex core-shell structures could provide additional “control knobs” for tuning these resonances. But theory is one thing, and when Halas began her career in plasmonics in 1990, nanotechnology was practically synonymous with sci-fi. “Our starting point in the field was working out how to create these particles,” she tells attendees. However, as nanosynthesis approaches advanced, ideas for manipulating light with nanoparticles really began to unfurl.

But what can you do with it?

As Halas explains, researchers were quick to see potential in using plasmonic structures to manipulate light for new types of lenses and for sensing chemicals through resonance shifts. However, one of the first characteristics that caught her attention was the potential to excite resonant responses in the near infrared, a wavelength range at which human tissue is relatively transparent. Halas and her colleagues showed that since light at these wavelengths can pass through human tissue, it can excite a nanoparticle at a cancer tumour so that it heats up and destroys the cancerous cells. This photothermal cancer therapy is now used in clinics, and thanks to developments in imaging there has been great success in using it to treat prostate cancer.

Halas goes on to explain that the same highly localized electromagnetic field enhancements that can kill a tumour can also be a powerful tool for tackling environmental resource issues. A billion people around the world lack access to clean water, but technology based on plasmonic nanoparticles can heat water to purify it without the high energy consumption that makes traditional reverse osmosis purification plants so expensive. Plasmons decay by releasing energetic “hot” electrons, which can catalyse chemical reactions at greatly reduced temperatures and pressures – again saving energy. Production of ammonia alone – a staple chemical widely used in agriculture – is responsible for 5% of the world’s energy consumption, so these nanoparticles have the potential to make a huge difference. Halas also suggests that photoassisted catalysis could provide solutions to hydrogen production at sufficiently affordable costs for a more environmentally friendly hydrogen energy economy to finally take off. And developments to exploit plasmon resonances in “commoner” metals like aluminium – as opposed to the gold and silver predominantly used so far – could make these technologies more sustainable still.

It’s easy to see what is attractive about plasmonics research now, but what was it that first motivated researchers to pursue the field when all there was to go on was Faraday’s flasks of attractively coloured liquids? Zayats, whose research over the years has also shaped the field, points to the unique properties of plasmonic systems: their very strong field confinement and light concentration. “Then you start thinking ‘where can I apply this, where can it be an advantage?’ – and this drives the research forward.”

Plasmonics developments now lie very much at the interface of several disciplines: quantum chemistry for precision production of nanoparticles with the right properties; physics to understand their behaviour; biology, medicine, electronics and catalysis to understand what this might mean for a particular application; and of course reams of regulatory and business knowhow if any of these technologies are to get to the market place. “It is inherently very multidisciplinary,” says Halas. “In every field you have the challenge of getting people from very different backgrounds, educations and orientations to learn to work together – I think that is the big science challenge, a big science human challenge.”

Melting polar ice sheets will alter weather

The global weather is about to get worse. The melting polar ice sheets will mean rainfall and windstorms could become more violent, and hot spells and ice storms could become more extreme.

This is because the ice sheets of Greenland and Antarctica are melting, affecting what were once stable ocean currents and airflow patterns around the globe.

Planetary surface temperatures could rise by 3°C or even 4°C by the end of the century. Global sea levels will rise in ways that would “enhance global temperature variability”, but the rise might not be as high as earlier studies have predicted. That is because the ice cliffs of Antarctica might not be at as much risk of the disastrous collapse that would send the glaciers accelerating towards the sea.

The latest revision of evidence from the melting ice sheets in two hemispheres – and there is plenty of evidence that melting is happening at ever greater rates – is based on two studies of what could happen to the world’s greatest reservoirs of frozen freshwater if nations pursue current policies, fossil fuel combustion continues to increase, and global average temperatures creep up to unprecedented levels.

“Under current global government policies, we are heading towards 3 or 4 degrees of warming above pre-industrial levels, causing a significant amount of melt water from the Greenland and Antarctic ice sheets to enter Earth’s oceans. According to our models, this melt water will cause significant disruptions to ocean currents and change levels of warming around the world,” said Nick Golledge, a south polar researcher at Victoria University, in New Zealand.

He and colleagues from Canada, the US, Germany and the UK report in Nature that they matched satellite observations of what is happening to the ice sheets with detailed simulations of the complex effects of melting over time, and according to the human response so far to warnings of climate change.

In Paris in 2015, leaders from 195 nations vowed to contain global warming to “well below” an average rise of 2°C by 2100. But promises have yet to become concerted and coherent action, and researchers warn that on present policies, a 3°C rise seems inevitable.

Sea levels have already risen by about 14 cm in the last century: the worst scenarios have proposed a devastating rise of 130 cm by 2100. The fastest rate of sea-level rise is likely to occur between 2065 and 2075.

Gulf Stream weakens

As warmer melt water gets into the North Atlantic, the Gulf Stream – the major ocean current – is likely to be weakened. Air temperatures are likely to rise over eastern Canada, Central America and the high Arctic. Northwestern Europe – scientists have been warning of this for years – will become cooler.

In the Antarctic, a lens of warm fresh water will form over the surface, allowing uprising warm ocean water to spread and cause what could be further Antarctic melting.

But how bad this could be is re-examined in a second, companion paper in Nature. Tamsin Edwards, now at King’s College London, Dr Golledge and others took a fresh look at an old scare: that the vast cliffs of ice – some of them 100 metres above sea level – around the Antarctic could become unstable and collapse, accelerating the retreat of the ice behind them.

They used geophysical techniques to analyse dramatic episodes of ice loss that must have happened 3 million years ago and 125,000 years ago, and they also examined present patterns of melt. In their calculations, ice-cliff collapse was not needed to explain those past losses, and it may not affect the future much either.

Instability less important

“We’ve shown that ice-cliff instability doesn’t appear to be an essential mechanism in reproducing past sea level changes and so this suggests ‘the jury’s still out’ when it comes to including it in future predictions,” said Dr Edwards.

“Even if we do include ice-cliff instability, our more thorough assessment shows the most likely contribution to sea level rise would be less than half a metre by 2100.”

At worst, there is a one in 20 chance that enough of Antarctica’s glacial burden will melt to raise sea levels by 39 cm. More likely, both studies conclude, under high levels of greenhouse gas concentrations, south polar ice will only melt to raise sea levels worldwide by about 15 cm.

Gravitational waves could resolve Hubble constant debate

Simulations by an international team of researchers have shown that new measurements of gravitational waves could finally resolve the discrepancy in Hubble’s constant reported using different measurement techniques. Accumulating gravitational-wave signals from the mergers of 50 binary neutron stars, the scientists found, will yield the most accurate value of the constant to date – which would not only settle the debate but also reveal whether there are issues with the current standard cosmological model.

The Hubble constant represents the rate at which the universe is currently expanding and is vital for calculating both its age and its size. The constant is also widely used in astronomy to help determine the masses and luminosities of stars, the size scales of galaxy clusters, and much more besides. However, two different techniques for estimating the value of Hubble’s constant have yielded very different results.

To measure Hubble’s constant directly, scientists need to know a galaxy’s outward radial velocity and its distance from the Earth. The first of these measurements can be obtained from the galaxy’s spectroscopic redshift, but the distance to the galaxy is more difficult to determine directly.

A common way of estimating distance is to exploit so-called “standard candles” – Cepheid variable stars or type Ia supernovae that have known absolute luminosities. In 2016 the best estimate for the Hubble constant obtained this way was 73.2 km s–1 Mpc–1 – significantly different from the value of 67.8 km s–1 Mpc–1 obtained in the same year by studying the radiation of the Cosmic Microwave Background (CMB). The discrepancy is yet to be explained, since the values should agree if the standard cosmological model is correct.

In this new study, researchers from Europe and the US attempted to reconcile these two results. The scientists exploited the concept of “posterior predictive distribution” (PPD), a methodology often used to determine the reproducibility of experimental results. PPD relies on a dynamic view of probability – in other words, one that changes as new information is obtained.

In this case the scientists implemented PPD to simulate measurements of the Hubble constant using these two different methods, and to check their consistency with the standard cosmological model. One interesting finding is that there’s at least a 6% chance that the current discrepancy in the Hubble constant is purely due to random error.

They then simulated how new independent data could help resolve the debate. Gravitational waves from merging neutron stars seemed a promising avenue to explore, since their signal yields constraints on the distance to the binary stars. Measurements of gravitational waves should therefore provide an estimate of the Hubble constant without making any assumptions about the cosmology of the Universe.

The researchers found that 50 detections of gravitational-wave signals from merging neutron stars would be needed to properly arbitrate between the two different values for the Hubble constant. Including such a dataset within their PPD simulations would, they claim, yield the most accurate value of the Hubble constant yet measured – with an error below 1.8%. Judging by current progress, observations of those 50 neutron-star mergers could well be achieved within the next decade.
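
As a rough consistency check — not the team’s PPD analysis — the sketch below shows how a single standard-siren estimate of the Hubble constant is formed and how the uncertainty shrinks when many independent events are combined; the per-event uncertainty and the simple 1/√N scaling are assumptions made for illustration.

```python
import numpy as np

C_KM_S = 299792.458  # speed of light in km/s

def h0_from_siren(redshift, luminosity_distance_mpc):
    """Low-redshift estimate of the Hubble constant from one standard siren:
    H0 ~ c * z / d_L, in km/s/Mpc."""
    return C_KM_S * redshift / luminosity_distance_mpc

def combined_uncertainty(sigma_single_percent, n_events):
    """Fractional H0 uncertainty after averaging n independent events,
    assuming simple 1/sqrt(N) statistical scaling."""
    return sigma_single_percent / np.sqrt(n_events)

# GW170817-like numbers (approximate, for illustration): z ~ 0.0098, d_L ~ 43 Mpc
print(f"single-event H0 estimate: {h0_from_siren(0.0098, 43.0):.1f} km/s/Mpc")

# Assuming roughly 13% uncertainty per binary-neutron-star event (illustrative):
for n in (1, 10, 50):
    print(f"{n:3d} events -> {combined_uncertainty(13.0, n):.1f}% uncertainty on H0")
# With 50 events the combined uncertainty drops to roughly 1.8%, comparable to
# the precision quoted in the study.
```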

Full results are published in Physical Review Letters.

Copyright © 2025 by IOP Publishing Ltd and individual contributors