
Overlooked for the Nobel: Lise Meitner

The discovery of nuclear fission in 1938 is among the most momentous events in 20th-century physics. Within seven years, this experimental and theoretical breakthrough – made jointly by Otto Hahn and Fritz Strassmann, who obtained the data, and by Lise Meitner and Otto Frisch, who interpreted it – led to the first atomic weapons. Less than a decade later, it led to the first nuclear power plants. If ever there was a discovery that should have won its instigators a Nobel Prize in Physics, nuclear fission is surely it.

But that’s not what happened. Though the terms of Alfred Nobel’s bequest would have allowed three of Hahn, Frisch, Meitner and Strassmann to share a prize, only Hahn got the nod, becoming the sole recipient of the 1944 Nobel Prize in Chemistry “for his discovery of the fission of heavy nuclei”. The contributions of Frisch, Meitner and Strassmann were relegated to a few lines in the official Nobel presentation speech, which took place in December 1945. That same speech, incidentally, claims that Hahn “never dreamed of giving Man control over atomic energy” – which is a bit rich, given that Hahn worked on the Nazi atomic weapons programme and was in fact still incarcerated by the British authorities at the time.

The 1944 Nobel Prize in Chemistry represents a strong challenge to anyone who claims that the Nobels are fair or reflective of how collaborative science works. Strassmann, whom the presentation speech patronizingly called “one of [Hahn’s] young colleagues”, had in fact been his assistant for the best part of a decade. For much of that period, Strassmann worked for half wages or none; his opposition to the Nazi regime meant that he was blacklisted from other jobs, leaving him dependent on Hahn and unable to develop a solo career. Meitner fared slightly better in the speech, since it did at least acknowledge her as Hahn’s collaborator for more than 30 years. Not mentioned, though, is the reason she was absent during the crucial 1938 experiments: Meitner, like her nephew Frisch, was an ethnic Jew, and her conversion to Protestantism 30 years earlier did not protect her from Nazi predation. In the summer of 1938, both Meitner and Frisch were forced to flee Germany. They made their seminal contributions to fission in exile, communicating with the Berlin-based Hahn and Strassmann by letter and telephone.

Of the three researchers left out of the 1944 Nobel Prize in Chemistry, the injustice done to Meitner is the most severe. Unlike the other “overlooked” physicists in this series, the records of her Nobel nominations are now public. They show that Meitner’s male colleagues (the scientists in the Nobel nomination pools were all male then, notwithstanding the existence of contemporary female luminaries like Ida Noddack and Irène Joliot-Curie) nominated her for the physics Nobel 29 times, and for the chemistry Nobel 19 times. Her earliest nomination came from the Norwegian chemist Heinrich Goldschmidt in 1924. Her last was in 1965, three years before her death, when Max Born made her his fourth choice after Pyotr Kapitsa (who went on to win in 1978), Cornelis Gorter (who never won) and Walter Heitler (ditto).

The records do not entirely explain why none of these nominations were successful. However, they do suggest that Meitner has something other than gender in common with two other entries in this series. The chemistry Nobel committee in 1944 was as divided about the relative importance of theory and experiment as the physics committee was in 1957, when Chien-Shiung Wu was denied a share of the parity-violation prize that went to Chen Ning Yang and Tsung-Dao Lee. The 1944 committee also failed to appreciate the major role that Meitner, Frisch and Strassmann played in the fission collaboration, much as a later committee failed to understand that Jocelyn Bell Burnell was not merely Antony Hewish’s assistant in discovering pulsars. On top of that, a whole suite of prejudices – racial, sexual, political and disciplinary – seems to have made it impossible for the parochial chemists of neutral Sweden to see the contributions of a refugee Jewish woman physicist in their proper light.

Early in her career, Meitner faced and overcame a considerable degree of personal prejudice. In one laughable example, the Nobel laureate Emil Fischer refused to let her work in his lab because he thought women’s long hair was a fire hazard (apparently Fischer’s massive beard was perfectly fine). Meitner’s subsequent achievements – as well as nuclear fission, she also discovered the element protactinium – won her legions of admirers, 26 of whom went on to nominate her for a Nobel Prize at least once. In the end, though, neither her efforts nor theirs were enough to counterbalance the subtle, structural forces that helped (and still do help) to keep the Nobel prizes overwhelmingly white and male, 82 years after Lise Meitner’s earth-shattering discovery.

New microwave bolometers could boost quantum computers

Two graphene-based bolometers that are sensitive enough to detect single microwave photons have been built by independent teams of physicists. The devices could find a range of applications in quantum technologies, radio astronomy and even in the search for dark matter.

One bolometer was created in Finland by Mikko Möttönen and colleagues at Aalto University and VTT Technical Research Centre of Finland, while the other was created by an international team led by Kin Chung Fong at Raytheon BBN Technologies in the US.

A bolometer measures the energy of incoming radiation by determining how much the radiation heats up a material. Bolometers capable of detecting single microwave photons would be very useful in creating quantum computers and other technologies that use superconducting quantum bits (qubits). This is because superconducting qubits interact via microwaves and single photons provide a very efficient way of transferring quantum information between qubits.

Too slow

So far, however, creating single-photon microwave detectors has been difficult because of the relatively low energies of microwave photons. The Finnish team had previously addressed the low-energy problem by creating a bolometer that used a gold-palladium alloy to absorb photons. While this device operates at very low noise levels, it is not fast enough to be useful when it comes to measuring the state of a superconducting qubit.

Now, Möttönen and colleagues have replaced the gold-palladium absorber with one made from graphene, which has a very low heat capacity. This makes graphene ideal for making a bolometer because heat capacity is a measure of the energy required to raise the temperature of a material by one degree.
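To get a feel for the numbers involved, here is a rough back-of-the-envelope sketch of the temperature jump a single microwave photon can produce in a tiny absorber. The photon frequency and, in particular, the heat capacity used below are illustrative assumptions, not measured values for either team’s device.

```python
# Back-of-the-envelope estimate of the temperature rise a single microwave
# photon produces in a small absorber. The heat capacity below is an
# illustrative assumption, not a measured value for either team's device.

h = 6.626e-34          # Planck constant (J s)

def photon_energy(freq_hz):
    """Energy of a single photon at the given frequency."""
    return h * freq_hz

def temperature_rise(photon_e, heat_capacity):
    """Idealized temperature jump: all photon energy heats the absorber."""
    return photon_e / heat_capacity

E = photon_energy(10e9)        # a 10 GHz microwave photon, roughly 6.6e-24 J
C_assumed = 1e-21              # assumed absorber heat capacity (J/K)
print(f"Photon energy: {E:.2e} J")
print(f"Temperature rise: {temperature_rise(E, C_assumed)*1e3:.1f} mK")
```

Even this crude estimate shows why a low heat capacity matters: halving the heat capacity doubles the temperature signal produced by the same photon.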

“Changing to graphene increased the detector speed by 100 times, while the noise level remained the same,” explains team member Pertti Hakonen. Indeed, the new device can make a measurement in less than a microsecond, which is on par with technology that is currently used to measure the state of qubits. Hakonen adds, “After these initial results, there is still a lot of optimization we can do to make the device even better”.

Josephson junction

Meanwhile, Fong and colleagues created a bolometer in which graphene is integrated within a superconducting device called a Josephson junction. When the graphene warms up by absorbing a microwave photon, it affects the electrical current flowing through the Josephson junction – thus creating the detection signal. This device is a whopping 100,000 times faster than microwave bolometers based on other materials.

Team member Dmitri Efetov at ICFO in Barcelona comments: “Such achievements were thought impossible with traditional materials, and graphene did the trick again. This opens entirely new avenues for quantum sensors for quantum computation and quantum communication.”

Fong and colleagues and Hakonen and colleagues describe their bolometers in separate papers in Nature.

Climate change threatens future astronomical observations

Rising global temperatures could worsen seasonal El Niño events and cause telescope images to lose their quality. That is the finding of an interdisciplinary team of researchers in Germany, who predict that climate-change-induced alterations in temperature, wind speed, humidity and “seeing” – atmospheric turbulence that causes blurring in images – will reduce the clarity of observations at ground-based telescopes around the world. The researchers, led by Faustine Cantalloube of the Max Planck Institute for Astronomy, argue that climate change therefore needs to be taken into account when building new observatories such as the Extremely Large Telescope (ELT), which is currently under construction at Cerro Armazones in Chile’s Atacama desert.

In their study, the researchers focused on the European Southern Observatory’s facility at Cerro Paranal, which is about 20 km from Cerro Armazones. There are three critical parameters for operating astronomical instruments: integrated water vapour (IWV), relative humidity and cloud coverage. They found that high IWV levels at Paranal, which affect astronomical observations in the infrared, were associated with high central equatorial sea surface temperatures during El Niño events. Although such events are a natural, periodic phenomenon, increases in atmospheric CO2 concentration could make them more severe, thanks to the associated global rise in humidity. A rise in humidity could also cause previously rare catastrophes such as the March 2015 flooding of the Atacama region to become more frequent.

Atmospheric turbulence affecting images

The study also examined the direct effects of increasing temperatures on the four telescopes that make up Paranal’s Very Large Telescope (VLT) array. The dome enclosures that house these telescopes are kept cool during the day so that when the dome opens at sunset, the telescopes are at a similar temperature to the outside air. This cooling is necessary because any temperature difference between the inside of the dome, where the telescopes are found, and its environment can cause air turbulence. The air turbulence distorts the starlight that reaches the telescopes, much like how stars appear to twinkle to the naked eye thanks to turbulence in the higher atmosphere.

The VLT’s current active thermal control system, however, cannot be set to target temperatures above 16 °C. Since the build-up of atmospheric CO2 is already linked to an increase of 1.5 °C in average temperatures at Paranal over the past 40 years – 0.5 °C more than the global average – additional local increases in surface temperature could further impair observations.

The researchers used worst-case climate change scenarios on models such as the Coupled Model Intercomparison Project Phase 6 (CMIP6) multi-model ensemble to determine climate projections for the region. The results suggest that by the end of the 21st century, Paranal could experience a further 4 °C increase in average temperatures. This would cause a greater difference between the temperatures inside and outside of telescope domes, and generate more air turbulence.

Adaptive optics can help correct for atmospheric turbulence. However, the time lag associated with these corrections creates an imaging artefact known as a wind-driven halo. Such halos, which are caused by winds from the southern subtropical jet stream, appear in roughly 30–40% of images and dramatically reduce image contrast. Many wind-driven halos occur during El Niño events, which are likely to become more frequent as well as more intense as global temperatures rise.

Mitigating the impact of climate change on research

On a global level, Cantalloube notes that increased humidity and worse seeing are far from the most pressing threats to astronomy. “Fires in the US and in Australia already destroyed or were about to destroy observatories,” she explains. “This is the most imminent threat that is a consequence of climate change.” Cantalloube adds that hurricanes in Hawaii, which have become stronger in recent years “could also threaten the Hawaiian-based observatories.” As for Chilean observatories, she says, “the effects are there”.

According to Cantalloube, interdisciplinary teams like hers will become more prevalent over time as scientists seek to understand the many ways that climate change could impact research capabilities. She notes that her team’s paper, which appears in Nature Astronomy, is a preliminary study, focusing solely on the effects of climate change on observations conducted in Chile. Since climate change will likely affect different regions of the globe in different ways, the team hopes to extend its analysis to cover additional sites. In the meantime, Cantalloube urges members of the astronomy community to “investigate whether climate change and/or environmental signals are also detectable in the observations of other observatories/other fields of research” and “convince political leaders to act now against climate change”.

Nobel prize sizzle: building excitement in the run-up to the physics award 

Nobel topics infographic

Tomorrow, the winner(s) of this year’s Nobel Prize for Physics will be announced. This will be the 14th Nobel prize that I have covered for Physics World, and I can honestly say that the excitement has never waned.

A big challenge for journalists on Physics World, however, is coming up with new ways of creating a bit of “sizzle” in the run-up to the announcement. This is difficult because, unlike some other prizes, there is no public shortlist for the Nobel prize to whet the appetite. Indeed, the identities of the nominees and the deliberations of the Nobel committee are kept secret for at least 50 years – so it is impossible to know who is in the running and very difficult to understand how the committee currently makes its decisions.

However, I am not complaining about this information deficit. I think it is one of the many things that keeps the Nobel prize fresh and exciting every year.

Last year I was very fortunate to gain some insights into the selection process when I interviewed the Swedish physicist Lars Brink, who served on the Nobel Committee for Physics on eight separate occasions. So if you are keen to learn about the finer points of how winners are selected, check out my article “Inside the Nobels: Lars Brink reveals how the world’s top physics prize is awarded”.

Predicting winners

One way of generating a bit of excitement in the run-up to the Nobel is to predict who will bag the prize. The problem with predictions is that mine are almost always wrong. And after a few years, I also run out of new people to predict as winners.

To see how good our Physics World predictions have been, I looked back at a blog I wrote in late September 2009 (“Nobel predictions”). The predictions were made by Physics World editors and the Nobel laureate Albert Fert. Here are the 13 predictions we made (all men, I’m afraid) and the years that some of them actually won a Nobel prize:  Alain Aspect, David Wineland (2012), Peter Zoller, Juan Ignacio Cirac, Anton Zeilinger, Michel Mayor (2019), Didier Queloz (2019), Andre Geim (2010), Konstantin Novoselov (2010), Yakir Aharonov, Michael Berry, Saul Perlmutter (2011) and Brian Schmidt (2011). That is better than 50%, so maybe I am underselling our ability to predict winners (I say “our”, because I only got one right).

Not content to rely on intuition to pick winners, in 2014 I took a close look at whether there is a temporal pattern in how prizes are awarded. I divided physics into seven disciplines and created an infographic that shows when prizes were awarded in those fields.

Quantum information this year?

The figure “Nobel Prize disciplines timeline” is a version of the infographic updated to 2019, and if you squint at it for long enough you may convince yourself that there is a pattern in the prizes. For example, there has not been an award in the field of quantum physics for eight years – so my only prediction for this year will be a prize to physicists working in the field of quantum information.

Migration world map

Another reasonable bet is that at least one of the 2020 laureates will be an immigrant. That prediction comes courtesy of our 2015 Nobel sizzle, when I created an infographic that reveals that more than 25% of physics laureates can be classed as immigrants. That was updated in 2019 and you can read more in “More than one-quarter of physics Nobel laureates are immigrants, reveal updated infographics”.

I think that our best bit of Nobel sizzle so far was last year’s series of pieces about our favourite Nobel prizes. My pick was the 1982 prize to Kenneth Wilson “for his theory for critical phenomena in connection with phase transitions”. I have always been fascinated by phase transitions, but I have to admit that it was a bit daunting to try to describe Wilson’s work in a few hundred words: (“My favourite Nobel prize: a universal theory for phase transitions”).

This year we are focussing on people who we think missed out on a Nobel. You can find those pieces on the Physics World website in a panel called “Overlooked for the Nobel”. I have contributed two pieces to this series: one championing the physicists at CERN who discovered the Higgs boson, and the other asking why the Chinese-American physicist Chien-Shiung Wu did not win a Nobel for her pioneering measurement of parity violation.

Compton imaging opens up new avenues for diagnostic imaging

Compton imaging, originally developed by astronomers for detecting gamma ray sources, is now also under investigation for clinical imaging. In particular, a high-performance Compton camera could prove invaluable for applications within nuclear medicine and molecular imaging.

Unlike the established medical imaging technique of PET, which can only visualize positron emitters, Compton imaging has the potential to visualize a variety of gamma ray sources. To date, however, Compton image quality has not matched up to that of typical PET scans. To investigate its potential further, researchers at the National Institute of Radiological Sciences (NIRS) in Japan have created a combined whole gamma imaging (WGI) platform to directly compare the two modalities.

“Compton imaging has potential to provide better images than conventional SPECT and PET, in particular for radionuclides emitting high-energy gamma rays,” explains first author Hideaki Tashima. “We expect to explore new radionuclides that have never been used for nuclear medicine.”

A Compton camera incorporates two detectors that work in synchrony. For an individual gamma emission, Compton scattering occurs in the first detector (the scatterer) and photoelectric absorption in the second (the absorber). Both detectors record the interaction positions and corresponding deposited energies, enabling reconstruction of a Compton cone that indicates the emission point.
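The cone’s opening angle follows from standard Compton-scattering kinematics. The minimal sketch below shows how it can be computed from the two deposited energies, assuming the photon is fully absorbed in the second detector; the function name and the example energies are purely illustrative.

```python
import math

M_E_C2 = 511.0  # electron rest energy (keV)

def cone_half_angle(e_scatter_kev, e_absorb_kev):
    """
    Half-angle of the Compton cone, in degrees, from the energies (keV)
    deposited in the scatterer and the absorber. Assumes the photon
    deposits all of its remaining energy in the absorber.
    """
    e0 = e_scatter_kev + e_absorb_kev              # initial photon energy
    cos_theta = 1.0 - M_E_C2 * (1.0 / e_absorb_kev - 1.0 / e0)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("energies inconsistent with Compton kinematics")
    return math.degrees(math.acos(cos_theta))

# Example: a 909 keV gamma (as emitted following 89Zr decay) that deposits
# 200 keV in the scatterer and 709 keV in the absorber
print(f"Cone half-angle: {cone_half_angle(200.0, 709.0):.1f} deg")
```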

To create a WGI system that can perform both PET and Compton imaging, the researchers inserted a scatterer ring inside a whole-body depth-of-interaction PET scanner (the absorber ring). To enable small-animal imaging experiments, they remodelled a previous WGI prototype by halving the diameter of the scatterer ring. As the spatial resolution of Compton imaging worsens in proportion to the source–detector distance, this modification should also improve the resulting images.

WGI prototype

System evaluation

Tashima and colleagues tested the WGI platform using 89Zr as the imaging source, as it decays via emission of a positron and a 909 keV gamma ray, enabling direct comparison of PET and Compton imaging. Alongside, they developed a 3D image reconstruction method incorporating detector response function (DRF) modelling, random correction and normalization. They report their findings in Physics in Medicine & Biology.

The researchers first evaluated the uniformity of the WGI prototype, by imaging a cylindrical phantom filled with 89Zr solution. When reconstructed without normalization, PET images exhibited ring artefacts and stripe patterns, while Compton images had higher coefficients of variation (COV) in the central region. The normalized PET and Compton images were of comparable quality, showing uniform radioactivity intensity throughout the phantom, and with COVs of 3.7% and 4.8% for PET and Compton images, respectively.
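For readers unfamiliar with the metric, the coefficient of variation is simply the standard deviation of the voxel values in a uniform region divided by their mean. A minimal illustration on synthetic data (not the study’s actual images) might look like this:

```python
import numpy as np

def coefficient_of_variation(roi):
    """COV of voxel values in a region of interest: std / mean."""
    roi = np.asarray(roi, dtype=float)
    return roi.std() / roi.mean()

# Illustrative only: a synthetic "uniform phantom" region with 4% noise
rng = np.random.default_rng(0)
roi = rng.normal(loc=1.0, scale=0.04, size=(50, 50, 50))
print(f"COV = {100 * coefficient_of_variation(roi):.1f}%")
```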

To assess the spatial resolution of the modalities, the researchers imaged a small rod phantom with clusters of cylindrical holes filled with 89Zr solution. The PET image clearly resolved the 2.2-mm diameter holes. The Compton image resolved the 3.0-mm holes in the peripheral region, with lower spatial resolution in the central region.

Finally, the team injected a mouse with 9.8 MBq 89Zr-oxalate and, one day later, used the WGI prototype to image the animal for 1 hr. The absorber ring (containing the PET detectors) had an axial length of 214 mm – enough to cover the whole animal. The PET image thus showed the entire mouse, revealing that 89Zr localized in its bones.

PET and Compton images

The scatterer ring was only 52 mm long, so the researchers positioned the animal’s torso inside the ring and its head and tail outside. Nevertheless, in Compton imaging mode, the WGI system could reconstruct the distribution outside the ring and create an image of almost the entire body. The Compton images agreed well with the PET images, clearly showing 89Zr in the mouse bones, although the image quality was better for regions inside the scatterer ring than those outside.

The researchers conclude that the WGI prototype could achieve Compton imaging with a quality approaching that of PET. They attribute this success to four key factors: the high energy of the gamma ray emitted from 89Zr, which improves the spatial resolution of Compton imaging; the DRF model used for image reconstruction, which further improved spatial resolution; the normalization step, essential for image uniformity; and the full-ring geometry of the WGI system.

Future work will focus on improving Compton imaging such that it outperforms PET. But the team’s ultimate goal is to unite the two techniques and implement a combined image reconstruction method. “Simultaneous measurement of different tracers with PET and Compton imaging can improve the efficiency of diagnosis,” explains Tashima. “In addition, reconstruction of a single tracer image by combining two types of signals can improve image quality through high sensitivity.”

“We are now exploring detector technologies for better energy resolution,” adds project leader Taiga Yamaya. “We are looking ahead to the realization of a clinical WGI system.”

D-Wave launches 5000 qubit system, solar-powered nanobeads could clean up oil sands waste

D-Wave Advantage

The Canadian quantum computer maker D-Wave Systems has unveiled its latest platform, which contains a whopping 5000 qubits. Called Advantage, the system can be accessed via the company’s Leap 2 cloud service, which was launched earlier this year. The system is designed for use by businesses and D-Wave already counts several companies as customers, including the carmaker Volkswagen.

You can read more about the D-Wave system in “D-Wave’s 5,000-qubit quantum computing platform handles 1 million variables”.

Staying in Canada, petroleum production from Alberta’s oil sands produces effluent that is stored in tailings ponds – which are notoriously toxic. Now, researchers have used glass bubbles coated in titanium dioxide nanocrystals to deal with naphthenic acids, which are a particularly nasty group of chemicals found in the ponds.

The bubbles were developed at the University of Waterloo and their efficacy has been tested by a team led by Diane Orihel at Queen’s University in Kingston, Ontario. The team found that as the bubbles float on a pond, they use energy from the Sun to create radicals that destroy naphthenic acids. The coated bubbles are nontoxic and because they float, they can be gathered up and used again.

The research is described in the journal Facets and you can listen to Orihel explain the results to the CBC’s Bob McDonald on the science programme Quirks and Quarks.

Overlooked for the Nobel: Chien-Shiung Wu

The 1957 Nobel Prize for Physics was shared by Chen Ning Yang and Tsung-Dao Lee “for their penetrating investigation of the so-called parity laws which has led to important discoveries regarding the elementary particles”. However, some physicists argue that the Chinese-American physicist Chien-Shiung Wu should have shared the prize for providing the experimental evidence for Lee and Yang’s theoretical prediction of parity violation. Furthermore, some believe that Wu was denied the prize because she was a woman.

Based at Columbia University in the late 1950s, Wu designed an experiment that tested parity laws by observing beta decay at ultracold temperatures. While Wu was an expert in measuring beta decay, she collaborated with scientists at the National Bureau of Standards (NBS) in Washington DC (now NIST) to meet the cryogenic requirements of what is now known as the “Wu experiment”.

In a 2012 article for Physics World (“Credit where credit’s due?”) the Hungarian chemist and historian of science Magdolna Hargittai dug deep into the claims and counter-claims of who, if anybody, should have shared the 1957 prize with Lee and Yang.

Three competing groups

In a nutshell, it is complicated. According to Hargittai’s article, Wu and colleagues saw the first hints of parity violation on 27 December 1956. Shortly after news of this preliminary result got out, an independent group working at the Columbia cyclotron did a quick measurement in early January and observed parity violation in muons – at about the same time as Wu and colleagues were making their definitive measurements. What is more, a third group at the University of Chicago had been looking for parity violation since the summer of 1956 and had also found preliminary evidence by December.

The Wu–NBS and Columbia cyclotron teams published their results in the February 1957 issue of the Physical Review, and the Chicago paper was published in the March issue of the same journal.

And therein lies a major barrier to Wu sharing the 1957 prize. According to the Nobel rules, the 1957 prize cannot be awarded for work published in 1957. Indeed, the Nobel Prize Nomination Archive shows that neither Wu, nor anyone else who had measured parity violation in 1956-57, had been nominated for the 1957 prize.

So what is Hargittai’s conclusion on whether Wu was overlooked for a Nobel?

“My view is that Wu made an outstanding contribution to bringing down the axiom of parity conservation in weak interactions,” writes Hargittai.  “But to say it was an injustice that she did not win a Nobel prize is an oversimplification of a complex story.”

Eminent supporters

It is worth pointing out that some of the most eminent physicists of the day did champion Wu’s case for a prize – including the Nobel laureates Willis Lamb, Polykarp Kusch and Emilio Segrè. Starting in 1958, she was nominated for the Nobel prize at least seven times until her death in 1997 (only nominations made before 1966 are currently available online to the public, which means there could, in fact, have been more).

So regardless of whether it was the rules or sexism that denied Wu a Nobel prize, there is no doubt in my mind that she deserved one – if not in 1957, then certainly thereafter. Indeed, writing in Physics World after Hargittai’s article was published, Yang said that Wu’s contribution went well beyond her considerable experimental prowess and included her deep perception of why parity must be tested. In that same issue, Herwig Schopper, who was one of those to have nominated Wu for a Nobel prize, expressed the view that Wu and indeed others who made the experimental measurements were denied a prize because of the rule that only three winners can be named.

Boron neutron capture therapy is back on the agenda

Boron neutron capture therapy (BNCT), a technique that deposits highly targeted radiation into tumour cells, was first investigated as a cancer treatment back in the 1950s. But the field remains small, with only 1700 to 1800 patients treated to date worldwide. This may be about to change.

“The field of BNCT seems to be progressing rapidly at the moment,” said Stuart Green, director of medical physics at University Hospital Birmingham. “The big difference compared with five or ten years ago is that the commercial interest from a variety of companies is strong now and this is driving the field.”

BNCT is a two-stage treatment. First, a 10B-containing drug (most commonly boronophenylalanine) is infused into the patient’s bloodstream. After a couple of hours, the drug accumulates preferentially in tumour cells and starts to wash out of healthy tissues. At this point, the tumour target is irradiated with low-energy neutrons, which split the 10B atoms into alpha particles and 7Li ions – highly ionizing particles that are delivered directly into the cancer cells.
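For reference, the underlying capture reaction can be written as follows; the branching ratio and energies quoted are the commonly cited textbook values rather than figures from Green’s talk.

```latex
% Dominant branch (~94%): 7Li is left in an excited state and emits a 478 keV gamma
^{10}\mathrm{B} + n_{\mathrm{th}} \;\rightarrow\; {}^{7}\mathrm{Li}^{*} + \alpha + 2.31\ \mathrm{MeV},
\qquad {}^{7}\mathrm{Li}^{*} \rightarrow {}^{7}\mathrm{Li} + \gamma\,(0.48\ \mathrm{MeV})

% Minor branch (~6%): decay directly to the 7Li ground state
^{10}\mathrm{B} + n_{\mathrm{th}} \;\rightarrow\; {}^{7}\mathrm{Li} + \alpha + 2.79\ \mathrm{MeV}
```

Because the alpha particle and the 7Li ion travel only a few micrometres in tissue – roughly one cell diameter – the dose is deposited almost entirely within the boron-loaded cell.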

“BNCT is high-LET [linear energy transfer] radiotherapy targeted at the cellular level,” Green explained. As neither the neutron beam nor the drug are toxic by themselves, damage to healthy tissue is minimized. “BNCT is relatively safe compared with administering radioactive drugs or chemotherapy, where toxicity may be experienced all around the body.”

Speaking at the Medical Physics & Engineering Conference (MPEC), Green gave an update on the status of BNCT programmes worldwide, noting that clinical experience is continually increasing. The US Food and Drug Administration has now approved two boron drugs for clinical use. But by far the majority of treatments, over 1150 to date, have taken place in Japan, initially using the Kyoto University Reactor in the early 2000s, and more recently using three Sumitomo accelerator systems in Kyoto, Fukushima and Osaka.

“Very importantly, earlier this year we had the first ever medical device approval for BNCT, for treatment in Japan of recurrent head-and-neck cancer,” said Green. “This is a significant marker for the entire field.”

The Sumitomo BNCT system

Several BNCT clinical trials are now underway in Japan using these accelerator-based facilities. A work-in-progress study examining one-year survival in patients with recurrent malignant glioma is showing promising initial results. Another trial, which formed the basis of the approval by the Japanese Medical Agency, examined three-month response rates of patients with unresectable local recurrent head-and-neck cancer. This approval should enable the team to start treating patients on a routine basis.

In Finland, meanwhile, around 250 patients have been treated using neutrons from a research reactor. This reactor closed in 2012, but Helsinki University Hospital is currently working with Neutron Therapeutics (which originated from MIT) to commission an accelerator-based BNCT facility. Clinical trials at this facility were planned to begin late in 2020 (now delayed due to the pandemic), initially examining recurrent head-and-neck cancer followed by glioblastoma and other indications.

The previous treatments in Finland included many patients with locally recurrent head-and-neck squamous cell carcinoma – a difficult-to-treat cancer that recurs in almost half of cases. Green described a recent US-based study looking at conventional retreatments of this disease. Patients retreated with intensity-modulated radiotherapy had an overall survival of 13.3 months, with 1.8% experiencing grade 5 toxicity (death); those receiving stereotactic body radiation therapy had 7.8 months overall survival with 0.5% grade 5 toxicity. In contrast, the (so far unpublished) Helsinki data showed that BNCT of similar cases conferred an overall survival of 25 months, with zero occurrences of grade 5 toxicity.

Elsewhere, Chinese company Neuboron is constructing a BNCT Centre at the Xiamen Humanity Hospital in China, with plans for other facilities in the future. These facilities will be based around a relatively compact accelerator-based neutron source designed by US firm TAE Life Sciences.

Green also described recent developments in the UK, where the University of Birmingham has received a £9m award from the EPSRC funding agency to develop a high-power neutron source. The source is slated mainly for testing and research into nuclear materials, but will also act as a user facility for other communities needing high-intensity neutron irradiation.

The source, which will be situated next to the university’s medical physics building, could be used to test new boron compounds in cells, for example, as well as to perform dosimetry, beam characterization and imaging studies. Green notes that builders arrived on site in the last few weeks, with accelerator delivery planned for August 2021 and full operation scheduled for February 2022.

The other big news in the BNCT field, said Green, is the collaboration between oncology software specialist RaySearch and various BNCT companies, including Neutron Therapeutics, Sumitomo and TAE Life Sciences. “RaySearch has taken on BNCT and is producing treatment planning tools to help us bring it into our clinical work,” he said.

“For the first time, there’s a substantial and sustained effort in the commercial sector to drive this field forward,” Green concluded. “We should keep an eye on BNCT over the next few years, there’s a lot happening, and hopefully our community can play a key role.”

Conjuring solitons in optical moiré lattices

Moiré lattices – wherever they form – are full of surprises. Superimpose two or more 2D periodic patterns on top of each other with a slight twist, and exotic properties will emerge.

A single sheet of graphene is a decent electrical conductor, but when two graphene sheets are stacked into a moiré lattice, the bilayer morphs into a superconductor, a Mott insulator or a magnet, depending on the twist angle. Analogously, when two light beams “patterned” by masks interact, the resulting moiré lattice can transform the signal beam into a diffuse smear or a single localized spot. Recently, a group of researchers led by Fangwei Ye at Shanghai Jiao Tong University in China discovered that optical moiré lattices can also produce solitons – self-trapped solitary waves – at extremely low power levels.
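To visualize what “two periodic patterns with a slight twist” produces, the toy script below overlays two square lattices rotated relative to one another and plots the resulting intensity. It is a generic illustration of a moiré pattern, not a reproduction of the Shanghai group’s photorefractive set-up.

```python
import numpy as np
import matplotlib.pyplot as plt

def square_lattice(x, y, theta, k=2 * np.pi):
    """Intensity of a square lattice rotated by angle theta (radians)."""
    xr = x * np.cos(theta) - y * np.sin(theta)
    yr = x * np.sin(theta) + y * np.cos(theta)
    return np.cos(k * xr) ** 2 + np.cos(k * yr) ** 2

# Two identical lattices, one twisted by a small angle, added together
x, y = np.meshgrid(np.linspace(-10, 10, 800), np.linspace(-10, 10, 800))
twist = np.deg2rad(5)                      # illustrative twist angle
moire = square_lattice(x, y, 0.0) + square_lattice(x, y, twist)

plt.imshow(moire, cmap="inferno", origin="lower", extent=(-10, 10, -10, 10))
plt.title("Toy moiré pattern: two square lattices twisted by 5°")
plt.axis("off")
plt.show()
```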

Solitons in the spotlight

Light tends to disperse as it propagates. For example, a ray of light from a torch gradually spreads out. Earlier this year, Ye’s team discovered a way to stop the spreading and localize a laser into a tight spot using moiré lattices. Now, the same group has taken their findings a step further by exciting the light in moiré lattices into self-sustaining pulses known as solitons. Solitons retain their shape as they propagate over long distances, so they are important in telecommunications as steadfast information carriers.

The enemy of light localization is diffraction. Ye’s optical solitons fend off diffraction by relying on a nonlinear optical effect, a self-reinforcing phenomenon whereby the medium through which light shines modifies the light’s behaviour. Ye’s medium is a photorefractive strontium barium niobate crystal with nonlinear holographic properties. The researchers imprinted a moiré pattern onto the crystal by shining a light beam stencilled by two twisted lattice masks. Then, the researchers shone a second light beam and observed how the beam profile evolved while they changed the masks’ twist angle and the laser powers.

The researchers discovered that their moiré lattices can produce solitons above a certain laser power threshold, depending on the twist angles in the patterning masks. Nonlinearity is a weak phenomenon that usually only manifests at high laser powers. But Ye and his team found that their power threshold is remarkably low: only nanowatts of power are required – a million times weaker than a laser pointer.

“First observation”

“Our work is the first observation of solitons in moiré lattices,” says Ye. “It turns out, it’s quite easy to create solitons this way.”

The key to the low power requirement is the flat energy band in the moiré lattice. The photons in optical moiré lattices are squeezed into a narrow range of energies at certain twist angles. This energy range only supports certain self-trapped modes of light. The light diffraction is inherently much weaker in such flat energy bands, so only a small nonlinear effect is necessary to generate solitons.

“Thanks to the almost flat bands in moiré lattices,” says Ye, “this experiment brings the power threshold down to an extremely low level, representing a big step in soliton research”.

Moiré surprises?

Optical moiré lattices present a rich playground to look for other elusive nonlinear phenomena, such as four-wave mixing and second harmonic generation. According to Ye, solitons may be just the beginning.

Electrocaloric devices show potential for greener air conditioning

Air conditioning systems are ever more widely used, and the refrigerants they rely on are powerful greenhouse gases. But independent teams in Europe and the US reckon they may have found a more environmentally friendly way to keep cool: using electricity to control the entropy of ceramic “electrocaloric” materials and thereby soak up heat. They have shown how to increase the cooling power of the technique and say it could become competitive with conventional vapour-compression cooling systems.

Air conditioning currently consumes about 10% of the world’s electricity and could use far more in the future – with cooling units projected to grow from 1.2 billion in 2018 to about 4.5 billion in 2050, according to the Rocky Mountain Institute. The hydrofluorocarbons often employed as the refrigerant in these systems are efficient and nonflammable, but they are also very potent greenhouse gases – trapping far more heat when released to the atmosphere than carbon dioxide.

Caloric materials can in principle do the same job as these refrigerants while emitting no pollution. The idea is to pump heat from a cool room to the hot outdoors, not by alternately compressing and expanding a fluid but instead by raising and lowering the entropy of a material by controlling its elastic, magnetic or electrical properties. In the latter case, this means using electric fields to control the polarization of dipole moments within a dielectric material.

Promising start

Research on the electrocaloric effect in ceramics got off to a roaring start in the early 1990s when scientists at the Moscow Power Engineering Institute in Russia claimed they could support a temperature difference as high as 12.7 °C between heat source and heat bath. Because the approach does not rely on large compressors, pumps or magnets, the work held the promise of efficient, cheap and environmentally friendly air conditioners. But transforming those results into practical devices has proved hard going, with material properties quite different in bulk components compared to the thin films used in labs.

New research from David Schwartz, Yunda Wang, and colleagues at PARC, part of Xerox in California, does not break any temperature records but does, they say, show how lab-scale devices could be scaled up. They have used a large-volume fabrication technique often employed in the electronics industry to produce a solid-state device from multi-layer ceramic capacitors. The capacitors, each just a few millimetres across and made from lead scandium tantalate, were supplied by Japanese company Murata Manufacturing.

The heart of the PARC device has two layers of multi-layer capacitors lined up between copper rails and separated by insulators. The upper layer contains five capacitors, while the lower one has four and is capped by an aluminium heat sink at each end. An actuator moves the top layer left and right so that four of its capacitors are always aligned with those below, while the extra one at either end comes into and out of thermal contact with the heat sink below it.

The Brayton cycle

Schwartz and colleagues used their device to carry out many rounds of a thermodynamic Brayton cycle, with one of the heat sinks being progressively cooled while the other served as the external heat bath. Cooling takes place in the first stage, with heat flowing from the four capacitors and the cold sink below to the five capacitors above. Then in the second stage the top layer is moved, and the electric field applied, which lines up the dipoles and thereby reduces their entropy. In compensation, however, the vibrational entropy of the material’s molecules goes up – resulting in an adiabatic temperature rise.

With the temperature of the upper layer now higher than that of the hot bath, the third stage sees the capacitor on the end dumping some of its heat into that sink. Finally, the electric field is turned off and the temperature of the upper layer drops below that of the lower, again adiabatically. The cycle then repeats.
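The toy model below steps through the four stages just described, using the 2.5 °C adiabatic temperature change reported by the PARC team; the heat capacities, contact effectiveness and bath temperature are made-up illustrative parameters. Note that a single stage like this saturates at a span equal to the adiabatic step, whereas the real device regenerates heat between its two capacitor layers to go further.

```python
# Toy model of the four-stage electrocaloric Brayton cycle described above.
# All numbers except the 2.5 degC adiabatic step are illustrative assumptions.

DT_ADIABATIC = 2.5          # adiabatic temperature change per field step (degC)
T_HOT = 20.0                # hot-side heat bath, held fixed (degC, assumed)
C_WORK, C_SINK = 1.0, 5.0   # relative heat capacities (arbitrary units)

def contact(t_a, c_a, t_b, c_b, effectiveness=0.8):
    """Partial thermal equilibration of two bodies brought into contact."""
    t_eq = (c_a * t_a + c_b * t_b) / (c_a + c_b)
    return (t_a + effectiveness * (t_eq - t_a),
            t_b + effectiveness * (t_eq - t_b))

t_work, t_cold = T_HOT, T_HOT
for cycle in range(100):
    t_work, t_cold = contact(t_work, C_WORK, t_cold, C_SINK)  # 1: absorb heat from cold sink
    t_work += DT_ADIABATIC                                    # 2: field on, adiabatic heating
    t_work, _ = contact(t_work, C_WORK, T_HOT, 1e9)           # 3: dump heat to hot bath
    t_work -= DT_ADIABATIC                                    # 4: field off, adiabatic cooling

print(f"Cold-sink temperature after 100 cycles: {t_cold:.2f} degC "
      f"(span {T_HOT - t_cold:.2f} degC)")
```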

Schwartz and co-workers found that when they applied an electric field of just over 10 MV/m, the capacitors underwent an adiabatic temperature rise (and fall) of 2.5 °C per cycle. With the cold sink slowly but steadily cooling over the course of about 100 cycles, they found that its temperature dropped by up to 5.2 °C compared with the hot sink. They also measured a heat flux of 135 mW cm⁻², which they say is more than four times higher than other electrocaloric cooling systems.

The researchers reckon that by adjusting the size and shape of their capacitors and making other tweaks to their system, they should be able to raise the heat pumping efficiency to over 50%. And that, they say, would make it “competitive with vapor compression cooling”.

Much higher temperature differential

In fact, Emmanuel Defay, Alvar Torelló and colleagues at the Luxembourg Institute of Science and Technology in Luxembourg have achieved a much higher temperature differential of up to 13 °C in a slightly different system. They also used multi-layer capacitors made from lead scandium tantalate supplied by Murata but achieved greater cooling by sending a dielectric fluid back and forth through the (porous) caloric solid.

Defay and colleagues argue that their temperature span “breaks a crucial barrier and confirms that electrocaloric materials are promising candidates for cooling applications”. They also reckon that by reducing the thickness of their capacitors (to 0.2 mm) and using water rather than a dielectric fluid they might achieve a span as high as 47.5 °C. But their technology is still at a relatively early stage, and they say that practical applications would require capacitors with higher breakdown fields as well as better electrical insulation.

The PARC results and the Luxembourg results are both reported in Science.
