Gd-loaded nanoparticles plus monochromatic X-rays can destroy tumours

Energy dependence of tumour spheroid destruction.

The combination of gadolinium-loaded nanoparticles and monochromatic X-rays completely destroyed tumour spheroids within three days after 20 to 60 minutes of irradiation in a laboratory setting in Japan. The technique, which selectively amplifies the effect of radiation delivered to a tumour site, could eventually pave the way for a new type of cancer radiotherapy, according to researchers from Kyoto University’s Institute for Integrated Cell-Material Sciences (Sci. Rep. 10.1038/s41598-019-49978-1).

The research was conducted at Kyoto University and SPring-8, the largest third-generation synchrotron radiation facility in the world. The facility creates synchrotron radiation consisting of narrow, powerful monochromatic X-ray beams. These X-ray beams can be precisely tuned to target the K-shell of high-Z atoms, such as gadolinium, which causes ejection of inner K-shell electrons (K-edge activation) and triggers a series of events that releases Auger electrons. This approach, called photon activation therapy, has been shown to enhance DNA damage that can kill cells.

Fuyuhiko Tamanoi

Principal investigator Fuyuhiko Tamanoi and colleagues hypothesized that nanoparticles loaded with high-Z atoms and located close to the nuclei of cancer cells could improve this photon activation therapy. They selected gadolinium as the high-Z material because it can generate Auger electrons and cause DNA damage. After loading gadolinium into mesoporous silica nanoparticles, they added the nanoparticles to a culture medium containing human ovarian cancer cells and confirmed that the particles could enter the cells without causing toxicity.

The researchers next prepared tumour spheroids from ovarian cancer cells that express green fluorescent protein. Fluorescence imaging confirmed that the nanoparticles were uniformly distributed in the spheroids. They then irradiated tumour spheroids containing varying amounts of gadolinium, as well as gadolinium-free controls, using monochromatic X-rays at 50.0, 50.25 and 50.4 keV.

Tumour spheroids containing 50 ng of gadolinium-loaded nanoparticles and irradiated with 50.25 keV X-rays broke into pieces 72 hours after a 10-minute exposure. After a 60-minute exposure, the spheroids were completely destroyed. Spheroids irradiated with 50.4 keV X-rays showed slightly lower levels of destruction, while 50.0 keV X-rays caused almost no spheroid damage.

Tumour spheroids containing 10 or 20 ng of nanoparticles were only partially destroyed, and there was no damage at all when the nanoparticles did not contain gadolinium.

“Destruction of the tumour spheroids was exposure time dependent and was also dependent on the amount of gadolinium loaded to spheroids,” the researchers write. “The dramatic difference between the effect of 50.25 and 50.0 keV X-rays is consistent with the idea that the Auger electrons are exerting cellular effect.”
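
The sharp energy dependence reflects a simple threshold: only photons at or above the gadolinium K-edge (about 50.24 keV, a tabulated value assumed here rather than quoted from the paper) can eject a K-shell electron and so trigger the Auger cascade. A minimal sketch:

```python
# Gadolinium K-edge energy in keV (tabulated value, assumed here)
GD_K_EDGE_KEV = 50.24

def can_eject_k_electron(photon_kev, k_edge_kev=GD_K_EDGE_KEV):
    """A photon can ionize the K shell (and so start the Auger
    cascade) only if its energy reaches the K-edge."""
    return photon_kev >= k_edge_kev

for energy_kev in (50.0, 50.25, 50.4):
    print(energy_kev, can_eject_k_electron(energy_kev))  # only 50.0 keV fails
```

This matches the observed pattern: the 50.25 and 50.4 keV beams destroyed spheroids, while the 50.0 keV beam, just below the edge, did almost nothing.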

As for the slight difference in destruction efficiency between 50.25 and 50.4 keV, noting that similar levels of energy are likely to be absorbed, the researchers speculate that the energy release processes may have differing degrees of efficiency and/or that the energies of electrons released from the inner shell differ. “It is also interesting that tumour spheroids were broken into pieces after irradiation, which may suggest that the treatment has some effect on cell adhesion,” they write.

The researchers are hopeful that a compact X-ray generator capable of producing monochromatic X-ray beams in a clinical treatment facility will be developed for experimental and clinical use. They are now planning studies using animal model systems, Tamanoi tells Physics World. After this research is successfully completed, the nanoparticles will need to be approved for use in human clinical trials.

“My guess is that it will take more than five years to be able to use this technology in a clinic,” Tamanoi says. “But I would like to emphasize that our work opens up a possibility to develop a new type of radiation therapy. This could have a major impact on how radiation therapy is carried out.”

Plants receive nitrogen boost in hotter climes

Scientists in the US have shown that plant growth under extreme-warming conditions could be boosted by extra nitrogen in the soil. While plant growth is limited by low levels of soil nitrogen under modest warming, the study shows that this is not the case at higher temperatures, due to a surge in the activity of soil microbes that boosts the nitrogen supply. The researchers add, however, that this increase could be curtailed by the greater amount of carbon dioxide in the atmosphere.

Previous studies have shown that elevated carbon dioxide can boost plant growth, whilst increased temperature may have the opposite effect. But few studies have looked at the combined effects of increased carbon dioxide and temperature. Current projections suggest that both atmospheric carbon-dioxide levels and average temperature will increase over the coming decades. To understand what kind of impact this will have on ecosystems, Genevieve Noyce from the Smithsonian Environmental Research Center and colleagues manipulated growing conditions at Kirkpatrick Marsh in Chesapeake Bay — a tidal marsh environment on the east coast of the USA.

Shoots and leaves

Using infra-red heaters, soil-heating pins and carbon-dioxide chambers, the researchers carefully controlled the conditions on several different plots and measured both root and shoot growth of sedge plants over two growing seasons. The ratio of root-to-shoot growth indicates how much nitrogen is available to the plant, as shoots put on more weight when nitrogen is readily available.

Under modest warming conditions — 1.7 °C above present day — they found that root growth outpaced shoot growth, indicating that plant demand for nitrogen outstripped supply. But under more extreme warming (5.1 °C above present day) shoots outpaced roots, indicating that there was surplus nitrogen available in the soil. “Microbes generally become more active under warmer conditions, so as the soil warms up, the rate of microbial mineralization increases, which leads to more plant-available nitrogen being added to the soil,” says Noyce.

However, when elevated levels of carbon dioxide were added to the warming treatment, the response changed, with a swing back towards greater root growth. “As the temperature rises, soil nitrogen supply increases, but as carbon dioxide rises, plant demand for nitrogen also increases, so the net result is going to be the balance between the two,” says Noyce, whose findings are published in Proceedings of the National Academy of Sciences. “It is likely that our results apply to other unmanaged ecosystems including grasslands and forests, provided the soils contain adequate soil organic matter to be broken down by microbes to yield plant-available nitrogen.”

This response to climate change demonstrates the complex interaction between plants, which increase growth even at low levels of warming, and soil microbes, which don’t increase their activity until it is significantly warmer. In recent decades rising atmospheric carbon dioxide has boosted plant growth and helped trap more carbon in land sinks. Levels of nitrogen in the soil have usually been the limiting factor to growth. But as temperatures continue to rise, microbial activity looks set to boost the nitrogen supply, such that it is no longer a limiting factor for plant growth.

Ice-water interface goes viscous

The liquid film that develops as an object glides across ice is as viscous as oil and much thinner than expected, say a team of researchers who have developed a way of probing the ice-water interface much more precisely than was previously possible.

Ice and snow have exceptionally low friction coefficients, making them good for skiing, skating and sledging, but dangerous for drivers on icy winter roads. Although these materials have been studied for more than 150 years, scientists still do not understand why they are so slippery.

Unanswered questions

Some have attributed the low friction coefficients to the formation of a thin layer of liquid water between the ice and the sliding object – caused, paradoxically, by frictional heating slightly melting the ice. However, this hypothesis raises many unanswered questions, says Lydéric Bocquet of the Physics Laboratory at the Ecole Normale Supérieure (ENS) in Paris. Water is a bad lubricant compared to oil, and the thickness and properties of the proposed interfacial water layer have not been measured. Indeed, its very existence has been under debate.

Now, however, a team led by Bocquet and his ENS colleague Alessandro Siria has used a new instrument – dubbed a stroke-probe tribometer – to measure the properties of this interfacial water layer. Their work shows that the liquid film does indeed exist, but it is just a few hundred nanometres to a micron thick, and its viscoelastic properties resemble those of polymers or polyelectrolytes rather than simple water.

Tuning fork technique

Bocquet, Siria and colleagues studied “interfacial water” using a modified double-mode tuning-fork atomic force microscope (TF-AFM). The instrument they developed comprises a millimetre-sized probe ball glued to a macroscopic tuning fork. Although the fork is very similar to a piano tuning fork, it is excited at a very low frequency, typically several hundred hertz. The system can be accurately modelled as a stiff mass-spring resonator with a quality factor of around 2500.

ice gliding experiment

When the researchers bring the vibrating ball at the end of the fork into contact with the surface of a centimetre-sized block of ice (using a piezo element with an integrated motion sensor of nanometric resolution), the ball strokes laterally across the ice with a fixed amplitude and velocity. The system’s resonance frequency then changes, as does its quality factor.

The ENS team use the frequency offset to measure the elastic properties of the contact surface, and the change in quality factor to evaluate the dissipation processes occurring there. The two measurements together give the layer’s interfacial viscosity.
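
The two measurements can be illustrated with a simple damped mass-spring model: the frequency shift yields the added elastic stiffness of the contact, the drop in quality factor yields the added damping, and a viscosity follows from the damping via a geometry factor. The sketch below is a simplified illustration under these assumptions, not the ENS team’s actual analysis, and every numerical value in it is invented:

```python
import math

def contact_properties(m, f_free, q_free, f_contact, q_contact, geom):
    """Toy mass-spring model of a stroke-probe tribometer contact.

    m         : effective mass of the resonator (kg) -- assumed value
    f_free    : free resonance frequency (Hz)
    q_free    : free quality factor
    f_contact : resonance frequency in contact with the ice (Hz)
    q_contact : quality factor in contact
    geom      : geometry factor linking damping to viscosity (m) -- assumed
    """
    w_free, w_contact = 2 * math.pi * f_free, 2 * math.pi * f_contact
    # Frequency shift -> conservative (elastic) contact stiffness
    k_added = m * (w_contact**2 - w_free**2)
    # Q degradation -> extra damping (dissipative part of the contact)
    c_added = m * w_contact / q_contact - m * w_free / q_free
    # Effective viscosity of the interfacial film via the geometry factor
    eta = c_added / geom
    return k_added, c_added, eta

# Invented numbers of the right order: a few hundred Hz, Q ~ 2500 when free
k, c, eta = contact_properties(1e-3, 500, 2500, 505, 1200, 0.01)
print(f"stiffness {k:.0f} N/m, viscosity {eta * 1e3:.0f} mPa·s")
```

With these made-up inputs the model returns a viscosity of order a hundred mPa·s, i.e. the oil-like range reported in the paper.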

“Listening” to forces

The researchers say the instrument allows them to “listen” to the forces between the probe and the ice with remarkable precision. Indeed, despite being centimetres in size, the instrument’s sensitivity is such that it is possible to probe ice contact and friction properties at the nanometre scale. “The system allows us to access several vibration frequencies, offering us the possibility to simultaneously probe the tribology of the contact (‘how it rubs’) by moving the ball in a lateral direction and its rheology (‘how it flows’) by moving the ball in a perpendicular direction,” Bocquet explains.

The experiments confirm the super-slippery nature of the ice surface, but they also – for the first time – show that friction generates a film of liquid water when the probe ball is set in motion. This film is, however, much thinner than previous theoretical calculations had suggested, and it is as viscous as oil, with a viscosity of up to hundreds of mPa·s – two orders of magnitude larger than the viscosity of water. The researchers also showed that the film’s viscosity depends on the shear velocity – a behaviour known as shear thinning.

Crushed ice and water state

According to Bocquet, one interpretation for this unexpected behaviour is that surface ice does not completely transform into liquid water when an object glides across it. Instead, it may enter a mixed “granité-like” (crushed ice and water) state. This mixed film, he suggests, lubricates the contact between the solid ice and the ball and prevents any direct contact between the two surfaces.

Separate experiments by the ENS team show that making the probe hydrophobic reduces friction even further by modifying the interfacial viscosity. This “waxing” process has long been practised empirically by skiers, but why it makes skis glide better was not previously understood.

Towards a new theory for interfacial ice

The team’s result means that existing theoretical descriptions for interfacial ice need an overhaul, Bocquet tells Physics World. A new theory would provide a better understanding of sliding on ice, which would come in useful in developing winter sports equipment or self-healing, ultra-low-friction lubricants for industrial applications. It might also help, conversely, to find ways of increasing friction, which is essential to avoid slipping on icy roads.

Angelos Michaelides of University College London, UK, who was not involved in the research, says that the ENS study is very exciting. “I am not aware of such a nice and elegant set of measurements on the friction of the quasi-liquid layer and think it is an extremely interesting new perspective on this age-old story,” he comments.

The research is described in Physical Review X.

Why fireworks are so important to science

Fireworks

Fireworks are essential to many of today’s celebrations – from national holidays and sporting events to musical concerts and the gatherings held on Bonfire Night (5 November) in Britain each year. Once upon a time, though, fireworks were serious scientific business. Designing the rocket and preparing the propellant and coloured fire required, after all, a detailed knowledge of chemistry and physics (see “Whizz-bang science” by Pierre Thebault, December 2018). Mounting effective firework displays required other skills too, including architecture, artillery, ballistics and even poetry.

Fireworks, it turns out, also played a critical role in the complex and evolving relations between science, the public and the state. That, at least, is the intriguing argument in a book by Simon Werrett, a science historian at University College London, entitled Fireworks: Pyrotechnic Arts and Sciences in European History (University of Chicago Press 2010). Previous histories of fireworks had ignored this connection. As Werrett put it, the true history of fireworks has “gone up in smoke”. His book brings it back.

Up in smoke

Fireworks originated in China, where by the 12th century they were routinely used in public spectacles. Werrett, though, focuses on the European story, which started in around the 14th century, when gunners began to develop a new genre of spectacle – “artificial fireworks” – for a general audience. The spectacles were called “artificial” because they were specially crafted for non-military purposes, and “fireworks” because they used gunpowder to produce fiery effects. The people who made the fireworks, meanwhile, were known as “artificers” and worked in spaces called “laboratories” (a name also used by alchemists) well before the modern scientific use of the term.

The first grand firework display over the Thames took place in 1613. Indeed, in his novel New Atlantis, published in 1626, one of the crucial tasks that the philosopher and statesman Francis Bacon assigned the scientists in his utopian world was to produce fireworks. Fireworks were on the way to becoming an important undertaking of nations, not so much because they demonstrated knowledge of military capital such as explosives and rockets, but because they symbolized power and authority.

“In a world without electric light,” Werrett writes, “fire was a powerful medium, a source of light and heat whose divine and magical connotations were strong”. Indeed, the ability to control, tame and exploit fire in spectacular and artistic displays seemed to demonstrate an ability to bring the divine and celestial down to Earth and under human control.

By the end of the 17th century, fireworks had become an important element in public displays and extravaganzas in several European states. Monarchs gave resources to those who could manufacture them and stage their displays, and supported the institutions where they worked. Fireworks makers were encouraged to invent new and more dramatic effects, fostering a culture of innovation. Power and prestige came to those who could successfully innovate.

One of Werrett’s unusual stories involves the quest to create green fireworks. While artificers could produce most colours, green was difficult and in the early 18th century the ability to produce it became the subject of quests at Imperial courts – rather like the modern hunt for blue light-emitting diodes. Scientists at the St Petersburg Academy of Sciences eventually succeeded, and for a time were able to keep their knowledge a trade secret. The Russians, typically, attributed the discovery of green fireworks to Peter the Great himself. But the key breakthrough occurred at the St Petersburg Academy, when its scientists began treating fireworks as based on a chemical rather than a mechanical process.

In the 17th and 18th centuries, Werrett writes, Britain, France, Italy, Russia and other nations sought to outdo each other in the grandeur and scale of the fireworks displays they staged, with the manufacture of fireworks serving to promote science. How precisely this occurred depended on local conditions. At the time of the restoration of the monarchy in England, for instance, fireworks were sometimes associated with Catholic plotting and religious zeal, provoking a counter-reaction – but English philosophers and natural scientists also debated the significance of fireworks for understanding nature. In Russia fireworks appealed chiefly to the Imperial Court’s thirst for spectacle, which fostered its support for the country’s first generation of Western-style scientists.

“With no scientific tradition in Russia,” Werrett writes, “academicians found that experimental lectures failed to interest the Russian nobility, whose support was critical to the survival of the academy. Simultaneously, academicians learned that the design or ‘invention’ of allegorical fireworks could improve their fortunes as spectacles appealing to the Russian court.” Werrett’s book opens, for instance, with a description of a firework display intended to symbolize the incremental but inexorable growth of the power and prosperity of the Russian state.

In the 1750s, seeking to exploit competition among its academicians, the St Petersburg Academy commissioned two of its prominent scientists – Mikhail Lomonosov and Jacob Stählin – to work separately on fireworks displays, with the intention of choosing whichever was better. Lomonosov was offended when Stählin’s design was chosen, and announced that he was giving up firework-making. Fireworks were not only an important activity of the young academy; they also elevated its position and prestige, as well as that of Russian science itself.

The critical point

The lesson I draw from Werrett’s book is that producing fireworks was not a hobby or side occupation that scientists tacked on to their “real” work. Scientists who produced fireworks were simply carrying on the practice of science, not trying to promote themselves or curry favour. A modern-day equivalent would be researchers consulting on governmental projects. Such activity is not only an integral part of the work of science, but it also bolsters the confidence of legislators and the public in science and their awareness of its value.

Science today needs more fireworks.

Voyager 2 spacecraft goes interstellar as it leaves the solar bubble

The spacecraft Voyager 2 left the heliosphere and travelled into interstellar space over the course of a day in November 2018, according to a suite of papers published today by scientists working on the mission.

The spacecraft was launched in 1977 along with its twin Voyager 1, which crossed over into interstellar space seven years ago. Scientists analysing data from Voyager 2 have found both similarities and differences compared with the Voyager 1 crossing.

The Sun is surrounded by a huge bubble called the heliosphere that is inflated by the supersonic solar wind of charged particles emitted by the Sun. The edge of this bubble is called the heliopause, which is where the outgoing solar wind is halted by the interstellar wind of charged particles.

Different crossings

Both Voyager missions crossed the heliopause on the windward side of the bubble but at different locations. Voyager 1 left through the northern hemisphere of the heliosphere and Voyager 2 through the southern hemisphere, at locations separated by about 160 au (1 au is the distance from the Earth to the Sun).

Voyager 1’s departure point was about 122 au from the Sun, while Voyager 2 exited at 119 au. According to scientists working on the Voyager 2 mission, these slightly different distances could be a result of the exit events occurring at different times in the 11-year solar cycle. This cycle involves changes in the intensity of the solar wind that could make the size of the heliosphere fluctuate.

One big difference between the two spacecraft is that all five instruments onboard Voyager 2 are still functioning, whereas the plasma instrument that measures the solar (and then interstellar) wind was damaged on Voyager 1 in 1980. This meant that Voyager 1 was unable to measure the transition from the hot, low-density solar wind to the cold, high-density interstellar wind.

Thinner and smoother

Analysis of the Voyager 2 data suggests that the heliopause it encountered was thinner and smoother than the boundary crossed by Voyager 1. Indeed, Voyager 2 made the crossing in less than one day. The Voyager 2 data also suggest that the interstellar medium the spacecraft first encountered is hotter than had been expected.

Voyager 2 also discovered a region between the heliopause and interstellar space where the solar and interstellar winds interact. This layer was not detected by Voyager 1.

Both spacecraft found little change in the direction and magnitude of magnetic fields across the heliopause. This is surprising because scientists had expected an abrupt transition between solar and interstellar magnetic fields to occur at the interface.

Gaining a better picture of the heliopause and heliosphere could provide important clues about how life emerged on Earth – and how it could emerge on exoplanets orbiting distant stars that would also be surrounded by bubbles. That is because the heliosphere shields Earth from many cosmic rays impinging on it – radiation that is harmful to life.

The Voyager 2 papers appear in Nature Astronomy.

UK research network to advance radiotherapy developments

A £56 million research network announced today by Cancer Research UK will transform the UK into a global hub for radiotherapy research. The network – Cancer Research UK RadNet – will accelerate the development of advanced radiotherapy techniques, including FLASH radiotherapy, MR-Linac treatments, proton therapy, stereotactic radiotherapy and artificial intelligence.

RadNet will unite seven centres of excellence across the country. The University of Manchester, the University of Cambridge and the CRUK City of London Centre (a partnership between UCL, Queen Mary University of London, King’s College London and the Francis Crick Institute) will receive funding for infrastructure and research programmes, including the formation of new research groups. The Universities of Glasgow, Leeds and Oxford and the Institute of Cancer Research, London/Royal Marsden will receive funding for infrastructure.

“Radiotherapy is a cornerstone of cancer medicine, with around three in 10 patients receiving it as part of their primary treatment,” says Michelle Mitchell, chief executive of Cancer Research UK. “The launch of our network marks a new era of radiotherapy research in the UK. Scientists will combine advances in our understanding of cancer biology with cutting-edge technology to make this treatment more precise and effective than ever before”.

Cancer Research UK RadNet aims to improve cancer survival by optimizing and personalizing radiotherapy. The centres will develop new techniques for delivering radiotherapy and investigate new radiotherapy–drug combinations, with a focus on reducing long-term side effects and improving patients’ quality of life. Projects will include innovative research into:

  • FLASH radiotherapy, in which high doses of radiation are delivered in a fraction of a second. Early research suggests that FLASH has the potential to cause less damage to healthy tissue near the tumour than traditional radiotherapy.
  • Proton therapy. The Christie NHS Foundation Trust in Manchester is the first NHS hospital to provide high-energy proton therapy; the second centre will open at University College London Hospitals NHS Foundation Trust next year. RadNet will support researchers across the country to optimize this new technology.
  • Overcoming hypoxia. Hypoxic tumours are far less susceptible to radiotherapy. Scientists will develop better ways to identify hypoxic tumours and new treatments to oxygenate them, making radiotherapy much more effective.
  • Cancer recurrence. Researchers will investigate why some cancers come back after radiotherapy by studying the role of cancer stem cells. These cells are remarkably resistant to radiation, and just a few remaining after treatment can cause a recurrence. For some patients, targeting stem cells could be the key to unlocking radiotherapy’s full potential.
  • Drug development. Scientists will develop and test drugs, including immunotherapies, for use in combination with radiotherapy. They will also study how tumours can repair DNA damage caused by radiotherapy and use the latest gene-editing technology to develop drugs that interfere with this process.
  • Artificial intelligence. RadNet researchers will use AI to design personalized treatment plans based on data from patients’ scans. This could improve radiotherapy accuracy and provide treatment options for patients whose tumours were previously too risky to target with radiation.

“I’ve seen first-hand how successful radiotherapy can be for patients that I treat, but it’s been frustrating to see the UK lagging behind other countries when it comes to prioritizing research into this vital treatment,” says Adrian Crellin, Cancer Research UK trustee and former vice-president of the Royal College of Radiologists. “Cancer Research UK’s investment will overhaul radiotherapy research in the UK to bring the next generation of treatments to patients sooner.”

ASTRO showcase: RaySearch highlights machine learning innovations

In this short video, filmed at ASTRO 2019, Frederik Löfman of RaySearch Laboratories explains how machine learning can improve consistency and efficiency in clinical practice.

Magic-angle graphene reveals a host of new states

Last year, researchers at MIT led by Pablo Jarillo-Herrero observed superconductivity in a pair of graphene layers engineered to be slightly misaligned. Now, a team at ICFO in Barcelona, Spain, says it has seen a host of additional correlated states in the same “magic angle” system, providing a much more detailed view of how twisted bilayer graphene behaves and opening up new ways of studying strongly correlated physics.

According to Dmitri Efetov, the study’s lead author, magic-angle twisted bilayer graphene represents a simple system in which to investigate novel phenomena that arise due to interactions between electrons in a material. The electron density in this platform can be tuned by applying an electric field, which allows the strength of the electron-electron interactions to be varied. It also allows the material to be tuned between different phases – for example, between the superconductor and the correlated state. Being able to do this could shed light on the underlying mechanisms at play in superconductors – especially high-temperature ones based on cuprates, for which a fundamental understanding is still lacking.

To create their testbed, Efetov and colleagues followed the “tear and stack” method previously developed by Emanuel Tutuc and colleagues at the University of Texas, stacking two atom-thick sheets of carbon (graphene) on top of each other with a small angular misalignment. At a misalignment of exactly 1.1° — the theoretically predicted “magic angle” — the MIT researchers had found that the material becomes a superconductor (that is, able to conduct electricity without resistance) at 1.7 K. The effect disappears at slightly larger or smaller twist angles.

Fundamentally new approach to device engineering

The MIT result kick-started a flurry of activity in “twistronics”. In this fundamentally new approach to device engineering, the weak coupling between different layers of 2D materials (such as graphene) can be used to manipulate the materials’ electronic properties in ways that are impossible with more conventional structures, simply by varying the angle between the layers.

Xiaobo Lu and Dmitri Efetov

The crystal structure of a single layer of graphene can be described as a simple repetition of carbon atoms, which is known as its unit cell. When two graphene layers are stacked on top of each other at a slight angle, they form a moiré pattern or “superlattice” in which the unit cell expands to a huge extent, as if the 2D crystal was artificially stretched 100 times in all directions. This stretching dramatically changes the material’s interactions and properties, and simply varying the angle between 2D material layers changes its electronic band structure. At small twists, the moiré graphene superlattices can even be switched from fully insulating to superconducting, as Jarillo-Herrero’s team discovered.
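
The degree of expansion follows from the standard moiré-period formula for two identical lattices twisted by an angle θ, λ = a/(2 sin(θ/2)), where a ≈ 0.246 nm is the graphene lattice constant (a textbook relation, not taken from the paper):

```python
import math

def moire_wavelength_nm(twist_deg, a_nm=0.246):
    """Moiré superlattice period for two identical hexagonal lattices
    twisted by twist_deg degrees (standard small-angle moiré formula)."""
    theta = math.radians(twist_deg)
    return a_nm / (2 * math.sin(theta / 2))

lam = moire_wavelength_nm(1.1)            # the magic angle
print(round(lam, 1), round(lam / 0.246))  # period in nm, linear expansion factor
```

At the 1.1° magic angle the period comes out at about 13 nm, a roughly fiftyfold linear expansion of the unit cell, of the same order as the stretching described above.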

Improving material homogeneity

In the new work, Xiaobo Lu, a postdoctoral researcher in Efetov’s group, improved the structural homogeneity of the twisted bilayer graphene by mechanically cleaning it to remove trapped impurities and release local strain between the layers.

When he subsequently varied the charge-carrier density within a device made from the material by applying a varying voltage to it, he observed that the device could be tuned from behaving as a Chern insulator (a state in which the material’s electron bands are either completely filled or completely empty, and in which the filled bands have a net total Berry curvature, or Chern number) to a superconductor. It could also be made to form an exotic magnetic state in which magnetism arises from the orbital motion of the electrons rather than, as in typical ferromagnets, from their spin. Such a state, Lu says, has never been seen before.

Competition between many novel states

Lu explains that magic-angle bilayer graphene seems to be competing between many novel states. “By tuning the carrier density within the lowest two flat moiré bands, it alternately shows correlated states and superconductivity, together with exotic magnetism and band topology.”

The researchers say that the different states they observed are very sensitive to the quality of the device. However, they do not fully understand why the material behaves this way. “For the time being, we only know that all the correlated states come from the electron-electron interaction,” Lu says. “Their ground states and the interaction mechanisms between these quantum phases remains a mystery for now.”

Another finding, which Lu describes as “astounding”, is that the device enters a superconducting state at the lowest carrier densities ever reported for any superconductor. This result may have implications for applications such as quantum sensing, since it makes the material more sensitive to most kinds of radiation. The team has already tried to integrate magic-angle bilayer graphene into single-photon detectors to make devices that might be employed in quantum imaging, bio-photonics and encryption systems, to name just three examples.

They were also able to increase the superconducting transition temperature of the material to above 3 K, a value twice that previously reported for magic-angle graphene devices.

Emergent quantum effects

“I think this is a very interesting experiment,” comments Jarillo-Herrero, who was not involved in the ICFO team’s work. “The authors have found an interesting set of correlated insulator and superconducting states, some of which had not been seen before. This shows that the phenomenology of magic-angle graphene devices is even richer than previously thought.”

“While the origin of the new states and the differences with the results obtained by other groups remains to be understood, I believe this work will generate great interest and more experimental and theoretical work on this very exciting subject.”

Efetov’s team includes scientists from the University of Texas at Austin, the National Institute for Materials Science in Tsukuba and the Chinese Academy of Sciences in Beijing, and Efetov says they will now be focusing on investigating the superconducting mechanism in twisted bilayer graphene. “We will also be developing entirely new experimental techniques to study these emergent quantum effects in twisted low dimensional quantum materials, including graphene,” he adds.

The research is detailed in Nature.

PET imaging sheds light on immunotherapy response

Whole-body PET images

Immunotherapy with checkpoint inhibitors is becoming an increasingly important tool in the treatment of several cancers. These checkpoint inhibitors, which block the proteins that stop the immune system from attacking cancer, can reactivate immune cells to enhance tumour killing. However, only a subset of patients respond.

A key determinant of successful checkpoint blockade therapy is the presence of CD8+ T cells, which play a central role in anti-tumour immune responses. As such, visualizing CD8+ T cells in vivo before and during treatment could provide insight into the mechanisms of immunotherapy and potentially predict a patient’s treatment response.

In a clinical trial sponsored by LA-based biotech company ImaginAb, a research team headed up at Memorial Sloan Kettering Cancer Center has now performed the first-in-human imaging of a tracer designed to non-invasively visualize the immune system (J. Nucl. Med. 10.2967/jnumed.119.229781).

The researchers used 89Zr-IAB22M2C, a minibody (antibody fragment) designed to target CD8+ T cells and radiolabelled for PET imaging. They tested the tracer, produced by ImaginAb, in a dose-escalation study of six patients with solid tumours (melanoma, lung cancer or hepatocellular carcinoma) who were undergoing or likely to receive immunotherapy.

Patients were injected with approximately 110 MBq of 89Zr-IAB22M2C, at minibody mass doses ranging from 0.2 to 10 mg. The researchers then performed whole-body PET/CT at various post-injection time points. They note that the infusion was well tolerated, with no immediate or delayed side-effects observed at any mass dose.

Evaluating the biodistribution of 89Zr-IAB22M2C showed that the minibody targeted CD8+ T cell-enriched tissues, with high uptake seen in the spleen, bone marrow and lymph nodes. The biodistribution was dependent on the administered minibody mass, with uptake almost completely confined to the spleen at the lowest mass dose (0.2–0.5 mg).

The highest uptake was always seen in the spleen, followed by the bone marrow. As the mass dose increased, however, blood pool retention increased and higher liver uptake was seen at later times. The team notes that this may be a saturation effect, due to competitive binding from the increasing amount of cold (non-radiolabelled) minibody.

The researchers also took multiple blood samples from the patients to assess serum clearance. Clearance was typically bi-exponential and also depended upon minibody mass, with rapid extraction of lower minibody masses from circulation and slower serum clearance for higher masses (5–10 mg).
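Bi-exponential clearance of this kind is often characterized by classical curve stripping: fit the slow phase to the late time points in log space, subtract it, then fit the fast phase to the residual. A minimal numpy sketch on synthetic data (the values below are invented for illustration, not taken from the study):

```python
import numpy as np

def strip_biexponential(t, c):
    """Estimate (A1, k1, A2, k2) for c(t) = A1*exp(-k1*t) + A2*exp(-k2*t),
    with k1 > k2, by classical curve stripping: fit the slow phase to the
    late time points in log space, subtract it, then fit the fast phase
    to the early residual."""
    late = t > t.max() / 2                   # assume only the slow phase survives here
    slope, intercept = np.polyfit(t[late], np.log(c[late]), 1)
    A2, k2 = np.exp(intercept), -slope
    resid = c - A2 * np.exp(-k2 * t)
    early = resid > 1e-3                     # keep points where the fast phase remains
    slope, intercept = np.polyfit(t[early], np.log(resid[early]), 1)
    return np.exp(intercept), -slope, A2, k2

# Synthetic serum curve: fast phase (half-life ~1 h) plus slow phase (~20 h)
t = np.linspace(0.25, 48, 40)
c = 80 * np.exp(-0.7 * t) + 20 * np.exp(-0.035 * t)
A1, k1, A2, k2 = strip_biexponential(t, c)
```

On noiseless data like this, the stripped fits recover the generating rate constants closely; real serum samples would need noise-aware fitting.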

Tumour lesions showed variable 89Zr-IAB22M2C uptake among patients, possibly due to differing treatment profiles or the variable presence of CD8+ T cells. In two patients, PET revealed prominent uptake in lesions imaged at lower masses. Both patients were receiving immunotherapy and may thus have had a higher concentration of CD8+ T cells. Conversely, three patients with metastatic lung cancer did not show prominent uptake, possibly related to a lack of ongoing treatment with immunotherapy and therefore lower levels of T cells.

The researchers conclude that PET with 89Zr-IAB22M2C is safe and feasible, and that the minibody successfully targets CD8+ T cell-rich tissues. Initial results suggest favourable kinetics for early imaging within 6–24 hr, with uptake seen in normal lymph nodes and tumour lesions as early as 2 hr post-injection.

Due to the small number of patients in the study, the researchers could not establish differences in lesion uptake among minibody mass doses. However, lower masses (less than 5 mg) seemed to provide a more favourable balance of normal tissue and lesion visualization.

“This novel imaging agent has the potential to non-invasively assess the presence of CD8 T cells in patients’ tumours and results of this initial assessment are encouraging,” says lead author Neeta Pandit-Taskar. “With more research, this technology may ultimately serve a critical role as a biomarker of immunotherapy outcome and inform clinical trials of novel immunotherapies that act mechanistically through the presence of CD8 T cells.”

The ImaginAb team is now performing further evaluations in more patients and a study incorporating parallel biopsies is also accruing patients.

VFX in movies: from weightlessness to curly hair

Gravity

It’s almost ironic that, decades after failing to attend many of his own undergraduate physics lectures, Tim Webber found himself teaching his colleagues the physics they needed to do their job. As chief creative officer at London-based Framestore – one of the world’s leading visual effects (VFX) studios – he’d worked on blockbusters such as Harry Potter and the Goblet of Fire (2005) and The Dark Knight (2008). But it was his Oscar-winning work leading the visual effects on the Alfonso Cuarón movie Gravity (2013) that forced him to share his physics insights.

Gravity featured Sandra Bullock and George Clooney as space-shuttle astronauts fighting for their lives in a zero-gravity environment after their craft gets hit by space debris. According to Webber, the problem was that animators spend years developing the skill of creating virtual beings that don’t just look good, but also move in a way that suggests they have weight. “Suddenly,” he says, “they had to animate things that didn’t have weight, but still had mass.” It was a concept that Webber’s team struggled to get their heads around. “So I got them into a room and gave them physics lectures.”

His tutorials paid off. Webber – plus his colleagues Chris Lawrence, Dave Shirk and Neil Corbould – won the 2014 Academy Award for Best Visual Effects for their work on the movie. But then Webber has always had a creative bent. As an undergraduate at the University of Oxford in the early 1980s, he’d spend more time in arts studios than physics labs. Indeed, he’s one of many similarly inclined people who use their training in physics, engineering and maths in the VFX industry. And no wonder. When it comes to recreating a believable world on screen, physics is everything. “Maths and physics feature very heavily,” says Webber’s Framestore colleague Eugénie von Tunzelmann.

Before working at Framestore, von Tunzelmann – an engineer and computer scientist by training – was a visual-effects designer at another London VFX firm called Double Negative. While there, she worked on Christopher Nolan’s epic sci-fi movie Interstellar (2014) and ended up co-authoring a scientific research paper about Gargantua – the black hole that’s the focus of the film (Class. Quant. Grav. 32 065001). She wrote the paper with Paul Franklin, who had co-founded Double Negative, and another colleague from the firm, Oliver James. The trio had collaborated with Caltech physicist Kip Thorne (the fourth author on the paper) to create as realistic a simulation of a supermassive black hole as possible. The simulation won plaudits from physicists and Hollywood critics alike – and led to Franklin sharing the VFX Oscar in 2015.

From humble beginnings

Things have certainly come a long way since the iconic – but less-than-realistic – VFX of King Kong (1933) or Jason and the Argonauts (1963). The transition to computer-generated imagery (CGI) in films such as TRON (1982) and Jurassic Park (1993) was a game-changer, but there was still plenty that was unrealistic about the way light behaved, or creatures moved. This, though, is an industry that never stands still. “New techniques are constantly being developed,” says Sheila Wickens, who originally studied computer visualization and is now a VFX supervisor at Double Negative. “It is very much a continually evolving industry.”

These days, the industry has embraced what is known as “physically based rendering” whereby physics is “hard-wired” into the CGI. Industry-standard software now includes physics-based phenomena, such as accurately computed paths for rays of light. “The complex maths used in ray-tracing is in part based on maths developed for nuclear physics decades ago,” says Mike Seymour, a VFX producer and researcher at the University of Sydney whose background is in mathematics and computer science.

Other phenomena captured by today’s CGI include life-like specular reflection, which means that materials such as cotton and cardboard – which in the past did not reflect light in CGI scenes – are now modelled more realistically. A similar thing has happened with the inclusion of Fresnel reflection so that image-makers can account for the fact that the amount and wavelengths of light reflected depend on the angle at which the light hits a surface. Indeed, it’s no longer acceptable to make things up, or break the laws of physics, says Andrew Whitehurst, VFX supervisor at Double Negative, who won an Oscar in 2016 for his work on Alex Garland’s artificial-intelligence-focused thriller Ex Machina.
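One common way renderers capture this angle dependence is Schlick’s approximation, a cheap stand-in for the exact Fresnel equations that reproduces the rise in reflectance towards grazing angles. A minimal sketch (the refractive indices are illustrative, roughly air to glass):

```python
def schlick_reflectance(cos_theta, n1=1.0, n2=1.5):
    """Schlick's approximation to the Fresnel reflectance for unpolarized
    light crossing from refractive index n1 into n2: cheap to evaluate and
    close to the exact Fresnel equations for dielectrics."""
    r0 = ((n1 - n2) / (n1 + n2)) ** 2        # reflectance at normal incidence
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

# Air to glass: about 4% of light reflects head-on, nearly all at grazing angles
head_on = schlick_reflectance(1.0)           # cos 0 = 1, looking straight at the surface
grazing = schlick_reflectance(0.0)           # cos 90 = 0, skimming the surface
```

The strong grazing-angle reflection this formula produces is exactly the effect that makes surfaces such as water, varnish and even cardboard read correctly on screen.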

“When I began in the industry a little over 20 years ago, we cheated at almost everything,” Whitehurst admits. “Now, surfaces are more accurately simulated, with reflectometer research being implemented into code that describes the behaviour of a variety of materials. Our metals now behave like metals and, by default, obey the laws of energy conservation. Fire and water are generally simulated using implementations of the Navier–Stokes equations: the tools we use in VFX are not dissimilar to those used by researchers needing to compute fluid simulations.”

In fact, Whitehurst says, it’s hard to see how many things can be made any more realistic. “We can blow anything up we want, we can make anything fall down that we want, we can flood anything we want, and we can make things as hairy as we would like.”

Much of this accuracy comes out of deliberate research programmes, either in academia or within the studios themselves. The fact that filmmakers can now accurately model curly hair, for example, owes a debt to researchers at the renowned US animation studio Pixar, who, in the early 2010s, developed a physics-based model for the way it moves. Their model is described in a Pixar technical memo (12-03a) entitled “Artistic simulation of curly hair” by a team led by Hayley Iben, a software engineer who originally did a PhD at the University of California, Berkeley, on modelling how cracks grow in mud, glass and ceramics.

Brave
The Lion King

Modelling the movement of curly hair for animations, it turns out, is best done by representing hair as a system of masses on springs. The technique Iben and her team developed was used to great effect in the animation of the curly-haired hero Merida of Brave (2012), and later in films such as Finding Dory (2016) and The Incredibles 2 (2018). Admittedly, it’s more accurate to model hair as infinitesimally thin elastic rods, but this, the Pixar group says, straightens out the hair too much when in motion. Increase the stiffness to avoid this, and the hair takes on an unrealistic wiriness as the character moves their head.
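The masses-on-springs idea can be sketched in a few lines: point masses joined by Hookean springs with linear drag, the root pinned to the scalp, advanced with semi-implicit Euler. This is a toy illustration of the general technique, not Pixar’s production model, and every parameter below is invented:

```python
import numpy as np

def step_strand(pos, vel, dt=0.004, k=100.0, c=0.5, m=0.1, rest=0.1):
    """Advance a hair strand one time step. The strand is a chain of point
    masses joined by springs; semi-implicit Euler keeps the stiff springs
    stable at this step size. pos, vel: (n, 2) arrays."""
    grav = np.array([0.0, -9.81])
    force = m * grav - c * vel                   # gravity plus linear drag
    for i in range(len(pos) - 1):
        d = pos[i + 1] - pos[i]
        length = np.linalg.norm(d)
        f = k * (length - rest) * (d / length)   # Hooke's law along the segment
        force[i] += f
        force[i + 1] -= f
    vel = vel + (dt / m) * force
    pos = pos + dt * vel
    pos[0] = vel[0] = 0.0                        # root stays pinned to the scalp
    return pos, vel

# Ten masses start out horizontal and swing down to hang under gravity
pos = np.stack([np.linspace(0.0, 0.9, 10), np.zeros(10)], axis=1)
vel = np.zeros_like(pos)
for _ in range(2000):
    pos, vel = step_strand(pos, vel)
```

After a few simulated seconds the strand settles into hanging vertically, slightly stretched by its own weight, which is the kind of behaviour (and the kind of tuning trade-off between stiffness and stretch) the Pixar memo describes.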

Such compromises are important. After all, it’s the movie director who gets the final say in whether a visual effect works. “Being able to make something 100% real is actually just a stepping stone to making something cinematic,” says Seymour at the University of Sydney. “It is often critical to be able to create something real, and then depart from it in a believable way.”

Sometimes the departure doesn’t even have to be believable. Back at Framestore in London, von Tunzelmann recalls being asked to create fantasy fire in which the flames curled in spirals. “There was no software that could do that, so we wrote a new fluids solver that measured the curl of the field and exaggerated it,” she explains. Everyone in the VFX industry, it seems, wants someone with a background in physics or engineering on their side (see box below).
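The curl trick von Tunzelmann describes can be sketched with the standard vorticity-confinement construction: measure the vorticity ω = ∂v/∂x − ∂u/∂y on a grid, then push the flow along a force proportional to the cross product of ω with the normalized gradient of |ω|, amplifying existing swirl. The studio’s actual solver is not public, so this numpy version only illustrates the general idea:

```python
import numpy as np

def curl_2d(u, v, h=1.0):
    """Scalar curl (vorticity) of a 2D velocity field on a regular grid,
    via central differences: curl = dv/dx - du/dy."""
    dv_dx = np.gradient(v, h, axis=1)
    du_dy = np.gradient(u, h, axis=0)
    return dv_dx - du_dy

def confinement_force(u, v, strength=0.5, h=1.0):
    """Vorticity-confinement-style force that amplifies existing swirl:
    f = strength * (N x omega), where N is the normalized gradient of
    |omega| and the cross product is taken with the out-of-plane axis."""
    w = curl_2d(u, v, h)
    gx = np.gradient(np.abs(w), h, axis=1)
    gy = np.gradient(np.abs(w), h, axis=0)
    norm = np.sqrt(gx**2 + gy**2) + 1e-12        # avoid division by zero
    nx, ny = gx / norm, gy / norm
    return strength * ny * w, -strength * nx * w

# Solid-body rotation about the grid centre has uniform curl = 2 * angular speed
n = 32
y, x = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
u, v = -0.1 * y, 0.1 * x
w = curl_2d(u, v)
fx, fy = confinement_force(u, v)
```

In a production solver a term like this is added inside each Navier-Stokes step; cranking up `strength` is one way to get the exaggerated, art-directed swirl the quote alludes to.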


The same conflict between aesthetics and reality occurred in Interstellar. Whitehurst, who helped develop some of the VFX techniques used in the film, suggests that movie was both realistic and not. The team had to rewrite the ray-tracing software to account for the intense curvature of space around a black hole – and also had to dial down Gargantua’s brightness for the audience. “You can see exciting detail in it, and you probably wouldn’t be able to [in reality] because it would be so staggeringly bright,” Whitehurst points out. “Even in things where we are trying to play by the rules, we are going to bend them here and there.”

That human factor

For all the progress in making movie animations look as realistic as possible, one challenge still looms large: how best to represent human beings. Getting human features to look right for movie-goers is about more than just simple physics. Designers can now make a human face that is photographically perfect – the problems of light transport through skin and of modelling how wrinkles work have largely been solved. The difficulty lies in the subtleties of how a face changes from moment to moment. The problem, Framestore’s Webber reckons, is that evolution has trained the human eye to analyse faces and work out if someone is lying or telling the truth, is healthy or unhealthy.

“Being able to recreate faces and fool the human eye is exceptionally tricky,” he says. The truth is that we don’t even know what it is in a face that tells us something is awry: it’s a subconscious reaction. That means VFX designers don’t know how to fix a wrong-looking face – and can’t simply generate a convincing one from scratch, let alone recreate the particular emotion that a character’s face should portray at a given moment. “The last thing you want is a character saying ‘I love you’ when the eyes are saying the opposite,” Webber says.

And if you want to make life for a visual-effects designer even harder, try asking them to put a human face underwater, Seymour suggests. “The skin and mass of the face moves with gravity mitigated by buoyancy,” he says. “If they then quickly move a limb underwater near their face, the current produced by the simulated flesh of their hand needs to inform a water simulation that will affect their hair, their face-flesh simulation and any tiny bubbles of air in the water. These multiple things all interact and have to be simulated together.”

For now, animators compromise by combining CGI with motion capture, whereby an actor does their performance with dozens of dots glued to their face so that the image-processing software can track all the muscle movements from the recorded scene. VFX designers then use this information to create a virtual character who might need larger-than-life qualities (quite literally, in the case of the Incredible Hulk). Finally, they overlay some of the original footage to re-introduce facial movements. “This brings back subtleties that you just can’t animate by hand,” Webber says.

The Dark Knight
Harry Potter Goblet of Fire

It turns out that our eye is more forgiving when it comes to CGI representations of animals. That’s why we have seen a slew of movies led by computer-generated “live-action” animals, from Paddington (2014) to the recent remakes of Disney’s The Jungle Book (2016) and The Lion King (2019). Framestore has recently been working on Disney’s upcoming remake of Lady and the Tramp. Due out later this month, it mixes footage of real and CGI dogs – and the VFX are so realistic that many in-house animators can’t tell which is which, according to Webber. “People in the company have asked why a particular shot is in our showreel when it’s a real dog, and they have to be told – and convinced – that it isn’t!”


Some things in movies, however, will never be truly realistic. Directors in particular want their monsters to move quickly because that’s more exciting. However, the laws of physics dictate that massive creatures move slowly – think how lumbering an elephant is compared to a horse – and our subconscious knows it. So when we watch a giant monster scurry across the screen, it can feel wrong – as if the creature has no mass.
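That intuition has a simple scaling argument behind it: pendulum-like limb motion has a characteristic period that grows as the square root of limb length, so scaling a creature up by a linear factor s makes its natural movements look √s times slower. A back-of-envelope sketch:

```python
import math

def slowdown_factor(scale):
    """Characteristic period of pendulum-like motion goes as sqrt(L/g),
    so a creature scaled up by `scale` in linear size appears to move
    sqrt(scale) times slower. A back-of-envelope rule only: it ignores
    muscle power and material strength, which make matters even worse."""
    return math.sqrt(scale)

# A 50 m monster built like a 2 m human is 25x the size...
factor = slowdown_factor(50 / 2)
# ...so its strides should look roughly 5x slower than a human's
```

This is exactly why a giant creature animated at human tempo reads as massless: our eyes expect the √s slowdown even if we could never name it.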

“If Godzilla or a Transformer were actually to try to move at the speed they do in the movies, they would likely tear themselves apart, as F = ma last time I checked,” Whitehurst says. “This is a fight that I always have, and that everyone always has. But ultimately a director wants something exciting, and a Pacific Rim robot moving in something that looks like ultra-ultra-slow motion doesn’t cut it.” It’s a point echoed by Wickens, who studied computer visualization and animation at Bournemouth University and is also VFX supervisor on the BBC’s flagship Doctor Who series. “We usually start out trying to make something scientifically correct – and then we end up with whatever looks good in the shot,” she says.

That fight between directors wanting visual excitement and animators wanting visual accuracy is what made working on Gravity so special for Webber. He says the film was the highlight of his career to date – but also “by far the most challenging movie” he’s worked on. “All we were filming was the actors’ faces, everything else was made within the computer,” he says. The team had to write computer simulations of what would happen in microgravity when one character is towing another on a jet pack, and the result became a plot point. “We found that they bounced around in a very chaotic and uncontrollable way,” Webber says. “It’s literally down to F = ma, but Alfonso, who was working with us, really loved it and folded that into the script.”

That was quite a moment, Webber says. Suddenly, all his physics lectures – given and received – had been worth it.

Paths to success in the visual-effects industry

LightBox Gravity

If you’re a physicist who wants to work in the visual-effects (VFX) industry, what opportunities are available and what skills do you need – beyond a willingness to lecture your colleagues about the finer points of F = ma?

Yen Yau, a Birmingham-based project manager who trains newcomers in the world of film

Having worked on careers publications for ScreenSkills – the industry-led skills body for the UK’s screen-based creative industries – Yau says there is huge diversity in the paths people can take. “Certainly, physics is going to be more important in some roles, but there are numerous routes in for all types of backgrounds and experiences of applicants.”

Eugénie von Tunzelmann, a visual-effects designer at London VFX firm Framestore

While most workers in the VFX industry don’t have a background in science, technology, engineering and mathematics (STEM) subjects, she says there is a need for people with skills in those areas. Anyone working in a job that involves programming a software plugin that, say, defines how light bounces off a surface will almost inevitably have a background in physics or optics. If you were trying to model fluid flow, “you’d need to have an understanding of thermodynamics”, she says.

Andrew Whitehurst, Oscar-winning VFX designer

Describing himself as “an artist with an interest in physics and engineering”, Whitehurst says that many people in the VFX industry are physicists or engineers with a creative itch that won’t go away. “I work with people with physics doctorates or engineering doctorates and art-school dropouts but we all meet in the middle. I have no formal background in physics, but I have a reasonable passing knowledge of a lot of physics and engineering principles. I need to know why camera lenses do what they do, for example, so that we can mimic their behaviour.” But scientists working in VFX will have to learn how to compromise, he notes. “We use a lot of science and engineering, but we are not in the business of scientific visualization. I am an enormous respecter of science, but if I can make a more beautiful picture that tells the story better, I’m going to do it.”

Tim Webber, physicist who is now chief creative officer at Framestore in London

Even if you don’t need to understand the maths hidden in the software that the VFX industry uses, Webber feels it helps to understand the principles of what the equations are doing, pointing to his experience early in his career working on a 1996 Channel 4 TV mini-series dramatizing Gulliver’s Travels and starring Ted Danson. “There are lots of small people and big people and we had to work out the angles, where to put the camera, so that the perspective at one scale would match the perspective at the other. I was using bits of paper and rulers and protractors and calculators.”
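The sums Webber describes boil down to similar triangles: an object of height h at distance d subtends an angle of 2·atan(h/2d), so a stand-in scaled down by some factor, shot from a distance scaled down by the same factor with the same lens, subtends exactly the same angle. A toy sketch with invented numbers (real setups also match camera height and tilt):

```python
import math

def angular_size(height, distance):
    """Angle (in radians) subtended at the camera by an object of the
    given height at the given distance, by similar triangles."""
    return 2 * math.atan(height / (2 * distance))

# A full-size 1.8 m actor filmed from 6 m...
full = angular_size(1.8, 6.0)
# ...subtends the same angle as a 1/12-scale 0.15 m stand-in filmed
# from 0.5 m, so the two plates line up when composited
small = angular_size(1.8 / 12, 6.0 / 12)
```

Scale every camera dimension by the same factor and the perspective matches; scale only some of them and the composite gives the trick away.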

Copyright © 2025 by IOP Publishing Ltd and individual contributors