Chiral logic gates create ultrafast data processors

A nonlinear optical material generates an output signal that depends on the chirality of two input beams

Light-based optical logic gates operate much faster than their electronic counterparts and could be crucial for meeting the ever-growing demand for more efficient and ultrafast data processing and transfer. A new type of “optical chirality” logic gate developed by researchers at Aalto University works about a million times faster than existing technologies.

Like electrons and molecules, photons have an intrinsic degree of freedom known as chirality (or handedness). Optical chirality, which is defined by left- and right-handed circularly polarized light, shows great promise for fundamental research and for applications such as quantum technologies, chiral nonlinear optics, sensing, imaging and the emerging field of “valleytronics”.

Nonlinear optical material

The new device works by using two circularly polarized light beams of different wavelengths as the logic input signals (0 or 1, according to their specific optical chirality). The researchers, led by Yi Zhang, shone these beams onto atomically thin slabs of the crystalline semiconductor material MoS2 and onto bulk silica crystals. These nonlinear optical materials can generate light at a different frequency to that of the input beams.

Zhang and colleagues observed the generation of light at a new wavelength – the logic output signal. By adjusting the chirality of the two input beams, four input combinations – corresponding to (0,0), (0,1), (1,1) and (1,0) – are possible. The output is read as logic 1 or logic 0 according to whether or not the nonlinear process generates this output signal.

Chiral selection rules

The system works because the crystalline materials are sensitive to the chirality of the input beams and obey certain chiral selection rules (related, in the case of the MoS2 monolayer, to its threefold rotational symmetry). These rules determine whether or not the nonlinear output signal is generated.

Using this approach, the researchers were able to make ultrafast (less than 100 fs operating time) all-optical XNOR, NOR, AND, XOR, OR and NAND logic gates, as well as a half-adder.
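
As an illustrative sketch of this read-out scheme, each gate can be modelled as the set of input chirality combinations for which the crystal emits the new wavelength. The Python below is a minimal toy model: the sets of “emitting” input pairs are the textbook gate definitions, not the physical chirality mappings reported by the team.

```python
# Toy model of chirality logic gates: each input beam's chirality encodes a
# bit (0 = left-handed, 1 = right-handed) and the output bit is the presence
# (1) or absence (0) of the nonlinear output signal.
GATES = {
    "AND":  {(1, 1)},                 # signal generated only for (1,1)
    "OR":   {(0, 1), (1, 0), (1, 1)},
    "XOR":  {(0, 1), (1, 0)},
    "NAND": {(0, 0), (0, 1), (1, 0)},
    "NOR":  {(0, 0)},
    "XNOR": {(0, 0), (1, 1)},
}

def output(gate, a, b):
    """Return 1 if the nonlinear signal is generated for inputs (a, b)."""
    return int((a, b) in GATES[gate])

def half_adder(a, b):
    """Sum and carry bits from an XOR and an AND gate run in parallel."""
    return output("XOR", a, b), output("AND", a, b)

for inputs in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(inputs, {g: output(g, *inputs) for g in GATES}, half_adder(*inputs))
```

The half-adder comes essentially for free: its sum and carry bits are just the XOR and AND gates evaluated on the same pair of input beams, which is also why running several gates in parallel in a single device (see below) is so attractive.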

And that’s not all: the team also showed that a single device could contain multiple chirality logic gates operating at the same time in parallel. This is radically different to conventional optical and electrical logic devices that typically perform one logic operation per device, says Zhang. Such simultaneous parallel logical gates could be used to construct complex, multifunctional logic circuits and networks.

The chirality logic gates can also be controlled and configured electronically in an electro-optical interface. “Traditionally, the connection between electronic and optical/photonic computing has mainly been realized through slow and inefficient optical-to-electrical and electrical-to-optical conversion,” Zhang tells Physics World. “We demonstrate electrical control of the chirality logical gates, opening up an exciting prospect for the first and direct interconnection between electrical and optical computing.”

“Based on this, we hope that all-optical computing modalities can be realized in the future,” says Zhang.

The researchers, who report their work in Science Advances, now hope to improve the efficiency of their chirality logic gates and reduce their power consumption.

Wind energy could power human habitations on Mars

Wind energy could help power human missions on Mars, according to a study that used the NASA Ames Mars Global Climate Model to calculate the short-term and seasonal variability of wind power that would be generated by wind turbines on the Red Planet. Led by NASA’s Victoria Hartwick, the research team suggests that the wind could supply sufficient energy on its own or be used in conjunction with solar or nuclear power.

The success of a crewed mission to Mars would rely on many factors, including site selection. Previous studies of site viability have focused on access to physical resources, such as the availability of water or shelter, and have not necessarily accounted for the energy-generation capabilities of potential locations. While there has been much research on solar and nuclear energy as Martian power sources, nuclear power carries potential risks for humans, and current solar-power systems lack the energy storage needed to compensate for day/night (diurnal) and seasonal variations in generation. It is therefore prudent to consider an alternative source, such as wind, for stable energy production.

Less forceful, but still useful

Wind power is most efficient when atmospheres are thick, but Mars’ low atmospheric density means that wind on the planet produces significantly less force than wind on Earth. For this reason, the Martian wind had not been regarded as a viable energy resource. Hartwick and colleagues have challenged this assumption and shown that diurnal and seasonal fluctuations in solar energy could be compensated for by wind energy. Hartwick says that they “were surprised to find that, despite Mars’ thin atmosphere, winds are still strong enough to produce power across large portions of the Martian surface”.

The study suggests that wind could work in combination with other energy resources such as solar to boost power generation. This could be especially helpful during local and global dust storms, when solar power decreases and available wind power increases. Wind would also be a useful resource at night and around the winter solstice.

Combined system

The team looked at a hypothetical generation system comprising solar panels and an Enercon E33 wind turbine. The latter is a medium-sized, commercially available turbine with a rotor diameter of 33 m and a rated power output of 330 kW on Earth. Hartwick and colleagues calculate that the turbine could operate at an average power output of about 10 kW on Mars.
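
The scale of that reduction follows from the standard turbine power formula P = ½ρAv³Cp, which is linear in air density ρ but cubic in wind speed v. The Python below is a back-of-the-envelope check rather than the study’s model: the Martian surface density, the wind speeds and the power coefficient are assumed round numbers.

```python
import math

def wind_power(rho, v, rotor_diameter, cp=0.4):
    """Turbine power P = 0.5 * rho * A * v**3 * Cp (watts)."""
    area = math.pi * (rotor_diameter / 2) ** 2  # swept rotor area (m^2)
    return 0.5 * rho * area * v**3 * cp

D = 33.0           # Enercon E33 rotor diameter (m)
rho_earth = 1.225  # sea-level air density on Earth (kg/m^3)
rho_mars = 0.020   # typical Martian surface air density (kg/m^3), ~60x thinner

# Assumed illustrative wind speeds; a real turbine also caps its output
# at the rated power (330 kW for the E33 on Earth).
print(f"Earth, 12 m/s: {wind_power(rho_earth, 12, D)/1e3:.0f} kW")
print(f"Mars,  12 m/s: {wind_power(rho_mars, 12, D)/1e3:.1f} kW")
print(f"Mars,  14 m/s: {wind_power(rho_mars, 14, D)/1e3:.1f} kW")
```

Because output climbs with the cube of wind speed, the strong winds seen in the climate simulations can claw back part of the 60-fold density penalty – consistent in order of magnitude with the roughly 10 kW average quoted above.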

The team’s calculations show that the turbine would increase the percentage of time that the power from the combined system exceeds 24 kW from 40% (solar arrays alone) to 60–90% (solar plus wind). The value 24 kW is significant because it is considered the minimum power requirement to support a mission with a crew of six.

While the study shows that wind generation is possible, it would only be useful if it could be done in locations on Mars that are suitable for human habitation. Previous work considered geology, resource potential and engineering limitations to evaluate landing sites, and using these criteria the NASA Human Landing Site Study identified 50 potential regions of interest. That study, however, did not consider regional energy availability beyond simple latitude and shading considerations for solar power. Hartwick therefore believes that wind power could allow more regions to be considered for exploration and settlement.

More opportunities

“By utilizing wind in combination with other energy resources,” says Hartwick, “it may be possible to access some regions of the planet that were previously dismissed, for example, the Mars midlatitudes and polar regions which are scientifically interesting and are closer to important subsurface water ice reservoirs.” These sites would not be viable with solar power being the predominant energy resource.

Hartwick suggests that stability is the most important consideration for powering future crewed missions to Mars – a lot of uninterrupted power must be produced. Using a combination of wind turbines and solar arrays could allow missions to be located across a large portion of the planet.

Wind power could also revolutionize how humans obtain energy elsewhere in the solar system. Hartwick says she is “particularly interested to see the power potential on a moon like Titan, which has a very thick atmosphere but is cold”. Nonetheless, there is still interdisciplinary work to be done – especially from an aerospace and engineering standpoint – to determine operational efficiency and technical viability.

Different turbines

While the main part of the research focused on the Enercon E33, the team also looked at turbines of different sizes, ranging from microturbines suited to small single-family power needs up to industry-standard machines rated at 5 MW on Earth. The uses of such systems could vary from providing energy for surface habitats and life-support systems to maintaining scientific equipment. Another factor that must be considered is transporting wind turbines and associated materials to Mars – a process that would have to minimize the mass sent through interplanetary space. While this transport would have to include excavation equipment, there is some suggestion that Martian soil could be used as a replacement for the concrete that anchors turbines on Earth.

As more potential Martian landing sites are identified, future studies could involve high-resolution simulations with the aim of better understanding how specific topography and surface conditions affect the wind. This could change the capabilities of future space operations. Hartwick says that this “is really the gold standard when we consider the energy requirements for a potential human mission to Mars.”

The research is described in Nature Astronomy.

Instead of celebrating the lone genius in physics, we should focus on collective efforts

Why is physics different from other sciences? While our perceptions of who does science are broadening – at least in terms of gender – our default image of a physicist remains firmly stuck in the past. Recent studies have shown that we still picture physicists as people who are more innately brilliant, more socially awkward, and less collaborative than those in other scientific disciplines.

These stereotypes are not only wrong – they can be damaging and can even put prospective physics students off entering the subject. For those already in the field, such stereotypes can make them feel more uncertain about their place in physics altogether. But where do these stereotypes come from? While the media are often blamed for caricatured depictions of physicists, and sometimes rightly so, the persistence of the brilliance stereotype may be down to us.

In one recent study, undergraduate physics students in the UK – alongside their non-physics peers – described physics as requiring more intelligence and being harder than any other science subject. Perhaps we shouldn’t be surprised given that you need such good grades to study physics at university, which embeds the notion of physics as an elite subject and an option only for the very best.

It is not all of physics, however, that is seen as elite. A study carried out in 2020 found that some physics disciplines, such as theoretical physics, were seen as more difficult – and therefore as requiring more intelligence – than other areas such as experimental physics. In the study, Master’s students linked intelligence with credibility, so physicists were not celebrated unless they were studying theoretical physics.

This hierarchy of intelligence makes it almost impossible to gain the prestigious “genius status”. Although often described as a “bad habit” of physicists, such comparisons can affect students, who are forced to defend themselves against the perception of not being “smart enough” for topics such as theoretical physics.

But although we think physicists are highly intelligent, few would describe themselves in this way. Studies have found imposter syndrome is especially prevalent among women and ethnic-minority students in physics who do not fit the typical physics genius stereotype. In trying to attain such status, it has been suggested that “passion” is the key ingredient for being recognized as a “proper physicist”.

However, the ability to devote oneself to physics depends on long working hours, which not only lead to a poor work–life balance but are simply impossible for some people with a disability or caring responsibilities. So while high intelligence, or genius status, is almost synonymous with physics, it is inaccessible to most.

Praising collaboration

But what does “being smart” even mean in physics? Physicists, and in particular theoretical physicists, are seen as geniuses because much of the cognitive work they do is hidden from the view of both the public and students. To dispel the myth of the lone genius, a team of US researchers led by astronomer Mike Verostek from the University of Rochester sought to reveal these hidden cognitive processes – all of which get missed when we praise intelligence.

The team showed that theorists use a myriad of skills and processes, such as drawing analogies and making assumptions when carrying out a task. Intelligence also means problem-solving, picking up a new skill quickly, having the curiosity to try new ideas and the resourcefulness to reassess old problems. By focusing on skills and processes, we encourage the notion that physics is something you can get better at, rather than something determined by a fixed level of intelligence set at birth.

In another study, the same team found that collaboration is essential for generating ideas in theoretical physics. When we talk about a physics genius, we often separate the individual from the collective efforts that may have helped in their achievements. As physicists we know that the advancement of physics can only be achieved by communicating with others, having cross-cultural awareness and being able to work in a team.

Praising individual intelligence and celebrating individuals hides this collaboration from the public and recreates the view that physics is a lone endeavour. We must therefore celebrate the interpersonal skills that are required within physics.

So, next time you discuss the Nobel prize, the newest scientific discovery, or the award for best physics undergraduate – think about what you really want to praise and what message you are sending. Are you praising a skill or an effort, a group or an individual, and is what you’re celebrating achievable for anyone with the right support?

Expansion microscopy enables nanoimaging with a conventional microscope

Expansion microscopy is a biological imaging technique that enables nanoscale imaging using a conventional diffraction-limited fluorescence microscope. It works by embedding samples in a water-swellable hydrogel and then expanding the gel. This physically expands the biomolecules away from each other, enabling their interrogation at a resolution previously only achievable using expensive high-resolution imaging techniques.

Current expansion microscopy protocols, however, are not optimized for widespread adoption. Samples must be treated with custom anchoring agents to link specific biomolecules and labels to the hydrogel. In addition, most approaches have only achieved roughly four-fold tissue expansion, restricting the effective resolution to around 70 nm on a conventional optical microscope with a 280 nm diffraction-limited objective lens.

To overcome these shortcomings, a team headed up at Carnegie Mellon University has developed a novel expansion microscopy strategy called Magnify. The protocol, described in Nature Biotechnology, uses a new mechanically sturdy hydrogel that retains a spectrum of biomolecules without requiring a separate anchoring step.

Magnify can expand specimens by up to 11 times, enabling imaging of cells and tissues with an effective resolution of around 25 nm using a conventional microscope. When combined with super-resolution optical fluctuation imaging (SOFI, a computational post-processing method), it achieved an effective resolution of around 15 nm.
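
These figures follow from a simple relation: the effective resolution is the microscope’s diffraction limit divided by the linear expansion factor. A quick sketch using the limits quoted in this article:

```python
def effective_resolution(diffraction_limit_nm, expansion_factor):
    """Effective resolution = optical diffraction limit / linear expansion factor."""
    return diffraction_limit_nm / expansion_factor

print(effective_resolution(280, 4))   # 70.0  -> ~70 nm for typical 4x protocols
print(effective_resolution(280, 11))  # ~25.5 -> ~25 nm for Magnify at 11x
print(effective_resolution(200, 11))  # ~18.2 -> ~18 nm with a x60 objective (see below)
```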

Previous expansion microscopy protocols also required the elimination of many biomolecules that hold tissues together. “To make cells really expandable, you need to use enzymes to digest proteins, so in the end, you had an empty gel with labels that indicate the location of the protein of interest,” explains senior author Yongxin Zhao in a press statement.

“One of the main selling points for Magnify is the universal strategy to keep the tissue’s biomolecules, including proteins, nucleic acids and carbohydrates, within the expanded sample. The molecules are kept intact, and multiple types of biomolecules can be labelled in a single sample,” Zhao adds.

Broad applications

Zhao and colleagues applied Magnify to a wide range of tissue types. Imaging an 11-fold expanded mouse brain section stained for total protein content, for example, enabled visualization of the nanoscopic architecture of individual synapses in the brain. Magnify demonstrated an effective resolving power of around 18 nm using a ×60 objective lens (around 200 nm diffraction limit).

The researchers confirmed the low distortion obtained by the Magnify protocol on several tissue types, using SOFI pre-expansion and confocal microscopy post-expansion. They found no substantial morphological changes between pre- and post-expansion images of cell nuclei and protein markers at either macroscopic or sub-diffraction levels.

Expansion microscopy of multiple tissue types

The team also tested Magnify on a range of formalin-fixed paraffin-embedded specimens – which are among the most important biopsy preparations, but are challenging to expand with current protocols. This included tissue sections from kidney, breast, brain and colon, and corresponding tumours. Magnify could expand the samples by factors of around 8.00–10.77 in water, depending on tissue type.

One key goal was to make Magnify suitable for a broad range of tissue specimens, easing its uptake by researchers looking to adopt the new protocol. “It works with different tissue types, fixation methods and even tissue that has been preserved and stored,” says co-first author Brendan Gallagher. “It is very flexible, in that you don’t necessarily need to redesign experiments with Magnify in mind completely; it will work with what you have already.”

Ramping the resolution

To demonstrate the further increase in effective resolution made possible by pairing Magnify with SOFI, the researchers used the combination to image human lung organoids – in particular, the cilia that clear mucus from the airway. At around 200 nm in diameter and just a few micrometres long, these structures are usually too small to see without techniques such as electron microscopy (EM).

Magnify–SOFI could fully resolve the hollow structure of cilia and basal bodies, including the outer ring previously shown by EM to comprise nine bundles of microtubules. The researchers estimated the effective resolution as around 14–17 nm (using a 280 nm diffraction-limited objective lens). They were also able to visualize defects in cilia in lung cells with genetic mutations.

“With the latest Magnify techniques, we can expand those lung tissues and start to see some ultrastructure of the motile cilia even with a regular microscope, and this will expedite both basic and clinical investigations,” comments co-author Xi Ren.

Building upon the successful development of Magnify, the team is now using it to study even more complex tissue samples. “This includes exploring infected tissues as well as larger specimens such as entire organs,” Zhao tells Physics World. “Moreover, we are working towards optimizing Magnify for investigating pathological human samples and studying nanoscale changes in the brain during learning processes and diseases. With these breakthroughs, further discoveries can be expected from this highly promising field of study.”

Laser sculpts a waveguide in campus corridor, the physics of how jazz gets its swing

An optical fibre is ideal for transmitting information over long distances because its optical properties ensure that light pulses remain inside the fibre, even if the fibre curves around a corner. However, it would sometimes be convenient to do optical communications over long distances without the hassle of using a fibre. Military communications and weapons guidance systems, for example, could benefit from sending data-encoded optical pulses through the air. The problem is that the pulses spread out laterally as they travel and may not have high enough intensities to be detected by the recipient.

Now, Howard Milchberg and colleagues at the University of Maryland have found a possible solution to this optical spread by firing a powerful laser 45 m along the corridor of a campus building. Their scheme involves firing a repeating cylindrical pattern of intense pulses along the corridor. The pulses heat the air that they travel through, dispersing the air and creating a region of lower density. The overall effect is to create a pipe of low density air that surrounds a core of unperturbed air at higher density.

This creates an optical waveguide that acts much like an optical fibre. To test its efficacy at transmitting information, the team fired much weaker light pulses through the core of the waveguide. They found that about 20% of the light that would otherwise be lost was transmitted over 45 m.
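
The guiding mechanism can be estimated with textbook optics: the refractivity of air, n − 1, is proportional to its density, so the heated, less dense “pipe” has a slightly lower refractive index than the unperturbed core, forming a step-index guide. In the sketch below, the fractional density drop in the cladding is an assumed illustrative value, not a figure from the experiment.

```python
import math

n_air = 1.000293     # refractive index of air at standard density (visible light)
refractivity = n_air - 1

density_drop = 0.20  # assumed fractional density reduction in the heated cladding
n_core = n_air
n_clad = 1 + refractivity * (1 - density_drop)

# Numerical aperture of the resulting step-index air waveguide
na = math.sqrt(n_core**2 - n_clad**2)
print(f"index contrast:     {n_core - n_clad:.1e}")  # ~5.9e-05
print(f"numerical aperture: {na:.1e}")               # ~1.1e-02: enough to trap a paraxial beam
```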

Blazing a kilometre-long path

Milchberg says that the experiment “blazes the path for even longer waveguides and many applications”. He adds, “Based on new lasers we are soon to get, we have the recipe to extend our guides to one kilometre and beyond”.

The research is described in a paper that has been accepted for publication in Physical Review X.

If there is one type of music that should defy description by physicists, jazz would be my candidate. The genre thrives on the improvisation and spontaneity of musicians, something that I assumed would be very difficult to describe using equations.

But the German physicist Theo Geisel and his colleagues have found otherwise in a study of how the members of jazz ensembles use tiny deviations in the relative timing of the notes they play. They found that these variations on the downbeat are responsible for “swing”, that essential yet intangible quality that the jazz bassist Christian McBride describes as a “feel”.

You can read more about the physics of jazz – and listen to McBride demonstrate swing – in this article on the NPR website, “What makes that song swing? At last, physicists unravel a jazz mystery”.

Celebrating the complexity Nobel prize with perspectives on the future of the field

The 2021 Nobel Prize for Physics was shared by Syukuro Manabe, Klaus Hasselmann and Giorgio Parisi “for groundbreaking contributions to our understanding of complex physical systems”. Now the Journal of Physics: Complexity has put together a special collection of open-access papers to celebrate this milestone in the history of this fascinating field.

The collection includes perspectives from 18 leaders in the field of complexity – the scientists who make up the editorial board of the journal. The contributions examine several key issues for those studying complexity, including the definition of complex systems, the big challenges for the next two decades, and the advantages and challenges of interdisciplinary research. The authors also consider the implications of the 2021 Nobel prize for the future of the field. The paper is called “Complex systems in the spotlight: next steps after the 2021 Nobel Prize in Physics”.

Advice for early-career researchers

Elsewhere in the collection is an exclusive interview with Parisi, conducted by JPhys Complexity editor-in-chief Ginestra Bianconi of Queen Mary University of London. In the interview, Parisi talks about his work on complex systems, offers a perspective on the wider field and gives advice for early-career researchers who are joining the community. You can read the full interview in “Thoughts on complex systems: an interview with Giorgio Parisi”.

The collection also includes an in-depth perspective on the work of Hasselmann, written by Carlo Jaeger of the Global Climate Forum and Potsdam University in Germany. The article looks at Hasselmann’s efforts to foster creative cooperation between climate scientists and researchers from other fields, especially economics. It is called “Klaus Hasselmann and economics”.

Chemistries, materials, and processes for electrochemically mediated carbon capture

Carbon capture is considered a critical means for climate-change mitigation. Unfortunately, conventional thermochemical methods suffer from high energy consumption, motivating the search for more efficient carbon dioxide separation strategies driven by non-thermal stimuli.

In this webinar, Yayuan Liu shares her research efforts on developing materials and processes for electrochemically mediated carbon capture. First, she presents a library of electrochemically tunable Lewis bases with redox-active nitrogen centres that can reversibly capture and release carbon dioxide through a reduction–oxidation cycle, with the mechanisms of the capture process elucidated via a combined experimental and computational approach. She then shows that the properties of these Lewis-base sorbents can be fine-tuned via rational molecular design and electrolyte engineering, before discussing challenges and opportunities in sorbent and electrochemical-reactor design for practical carbon-capture processes driven by electrochemical stimuli.
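
Schematically, such a sorbent runs a cycle of the following form, written here for a generic redox-active Lewis base B. This is the standard template for electrochemically mediated capture, not the specific chemistry to be presented in the webinar.

```latex
\begin{align*}
\text{activation:} \quad & \mathrm{B} + e^- \longrightarrow \mathrm{B}^{\bullet-}
  && \text{(reduction creates a strong CO$_2$ binder)} \\
\text{capture:}    \quad & \mathrm{B}^{\bullet-} + \mathrm{CO_2} \longrightarrow \mathrm{B{-}CO_2^{-}}
  && \text{(adduct forms at the nucleophilic centre)} \\
\text{release:}    \quad & \mathrm{B{-}CO_2^{-}} \longrightarrow \mathrm{B} + \mathrm{CO_2} + e^-
  && \text{(oxidation regenerates the sorbent)}
\end{align*}
```

Cycling between the reduced and oxidized states in an electrochemical cell thus pumps CO2 from a dilute feed into a concentrated product stream using electrical rather than thermal energy.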

An interactive Q&A session follows the presentation.

Yayuan Liu

Yayuan Liu joined Johns Hopkins University as an assistant professor in January 2022. Her research group works at the interface of chemical engineering, materials science, and electrochemistry to accelerate the realization of energy and environmental sustainability. She earned her BS in materials science and engineering in 2014 from Nanyang Technological University (Singapore) and her PhD in materials science and engineering in 2019 from Stanford University under the guidance of Prof. Yi Cui. She completed her postdoctoral training at the Massachusetts Institute of Technology, working with Prof. T Alan Hatton in the Department of Chemical Engineering. She has received multiple awards for her research, including the ECS Toyota Young Investigator Fellowship, American Chemical Society Division of Inorganic Chemistry Young Investigator Award, and Materials Research Society Graduate Student Gold Award. Prof. Liu was a Forbes 30 under 30 honoree in science.



National Ignition Facility’s ignition milestone sparks fresh push for laser fusion

For well over a decade, physicists at the Lawrence Livermore National Laboratory in California have been attempting to do something in the lab that had only ever previously occurred inside the warheads of hydrogen bombs. Their aim has been to use intense pulses of light from the world’s biggest laser – the $3.5bn National Ignition Facility (NIF) – to crush tiny capsules of hydrogen fuel such that the exceptional temperatures and pressures created therein yield energy-producing fusion reactions. Until the end of last year, a series of technical setbacks had prevented them from reaching their goal, known as ignition. But just after 1 a.m. on 5 December a larger-than-usual burst of neutrons in the detectors surrounding the laser’s focus signalled success – the reactions in this case having produced more than 1.5 times the energy they consumed.

The feat created headlines around the world and stimulated the imagination of the public, politicians and fusion experts alike. US energy secretary Jennifer Granholm hailed the “landmark achievement”, while Michael Campbell of the University of Rochester in the US described the result as a “Wright Brothers moment” for fusion research. For Steven Rose of Imperial College London, the announcement removes any lingering doubt that such high fusion energies are attainable. “If you don’t get an energy gain greater than one, people might claim you can never achieve it,” he says.

The result renewed optimism that fusion might finally enable a new source of clean, safe, secure and sustainable energy. Now, governments and especially private companies are looking to exploit the huge potential of fusion energy – with some firms even promising that they will deliver electricity to the grid from pilot power plants by early in the next decade.

Some scientists, however, reckon that such timescales are unrealistic, given the huge technical hurdles that remain on the road to fusion energy. Others contend that a 10–15-year time horizon is feasible, so long as researchers and their funders adopt the right mindset. For Troy Carter at the University of California, Los Angeles, this means ending reliance on large, expensive, centralized facilities such as the football-stadium-sized NIF and turning instead to smaller, cheaper projects led by the more risk-tolerant private sector. “We have to change the way we do business,” he says.

Finally on target

Harnessing the energy given off when light nuclei fuse requires the nuclear fuel to be held in the form of a plasma at temperatures of around 100 million kelvin. One way of doing this is to confine the plasma in a magnetic field for fairly long periods of time while heating it with radio waves or particle beams. So far, such “magnetic confinement” has been physicists’ preferred route to fusion energy, and it is the approach used by the world’s priciest public and private machines: the $20+bn ITER facility under construction in the south of France and a reactor being built by the company Commonwealth Fusion Systems outside Boston, US, which has so far raised at least $2bn in funding.

Rather than attempting to obtain a steady state, “inertial confinement” reactors operate somewhat like an internal combustion engine – generating energy through a repetitive cycle of explosions that fleetingly create enormous temperatures and pressures. NIF does this by amplifying and focusing 192 laser beams onto a tiny hollow metal cylinder at the centre of which is a peppercorn-sized capsule containing the hydrogen isotopes deuterium and tritium. X-rays generated from the walls of the cylinder blast off the outer surface of the capsule, forcing the rest of it inwards thanks to momentum conservation and causing the deuterium and tritium nuclei within it to fuse – in the process releasing alpha particles (helium nuclei), neutrons and lots of energy.

This process is extremely demanding, requiring exceptionally precise beam focusing and ultra-smooth capsules to ensure the near perfectly symmetrical implosions needed for fusion. Indeed, instabilities in the plasma created by the implosions and defects in the capsules, among other things, meant that the Livermore researchers fell well short of their initial target of ignition (or “breakeven”) by 2012. But through a series of painstaking measurements on successive laser shots they were able to gradually refine their experimental set-up and ultimately fire the historic shot – yielding 3.15 million joules (MJ) of fusion energy after delivering 2.05 MJ of laser energy to the target.

Omar Hurricane, chief scientist of Livermore’s inertial-confinement fusion programme, says that they now plan to “reprioritize” their work to push for higher, reproducible gains by boosting NIF’s laser energy in steps of about 0.2 MJ. They also intend to study the effect of varying the thickness of the nuclear fuel inside the capsules and of reducing the size of the cylinder’s laser entrance holes. However, he points out that NIF was never designed to demonstrate practical fusion energy, given that the facility’s main purpose is providing experimental data to support the US’s (no longer tested) stockpile of nuclear weapons. As such, NIF is extremely inefficient – its 2 MJ flash-lamp-pumped laser requires around 400 MJ of electrical energy, which equates to a “wall-plug” efficiency of just 0.5%.

Riccardo Betti of the University of Rochester says that modern lasers pumped by diodes could reach efficiencies as high as 20%, but points out that the margins required for power plants (including energy lost when converting heat to electricity) mean that even these devices will need target gains of “at least 50–100” (compared with NIF’s 1.5). They will also have to “fire” several times a second, whereas NIF only generates a shot about once a day. This high repetition rate would require mass-produced targets costing at most a few tens of cents, compared with the hundreds of thousands of dollars needed for those at NIF (which are made from gold and synthetic diamond).
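
The arithmetic behind those margins is simple bookkeeping: for net electricity, the product of the laser’s wall-plug efficiency, the target gain and the heat-to-electricity conversion efficiency must exceed one, with extra headroom for recirculating power. A rough sketch in which the 40% thermal efficiency and the margin factor are illustrative assumptions:

```python
def engineering_gain(wall_plug_eff, target_gain, thermal_eff):
    """Electricity out per unit of electricity fed to the laser."""
    return wall_plug_eff * target_gain * thermal_eff

# NIF's historic shot: 0.5% wall-plug laser, target gain of 3.15/2.05 ~ 1.5
print(engineering_gain(0.005, 3.15 / 2.05, 0.4))  # ~0.003: far below break-even

# A 20%-efficient diode-pumped laser still needs a hefty target gain:
thermal_eff = 0.4  # assumed heat-to-electricity conversion efficiency
margin = 4         # assumed headroom for recirculating power and plant loads
print(margin / (0.20 * thermal_eff))  # 50.0: the low end of Betti's 50-100 range
```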

Entering the market

One company that believes it can commercialize fusion energy despite all the hurdles is California-based firm Longview Fusion Energy Systems. Set up in 2021 by several former Livermore scientists, including ex-NIF director Edward Moses, Longview aims to combine NIF’s target design with diode-pumped solid-state lasers. The company announced its existence on the same day that Livermore reported NIF’s record-breaking shot, saying that it planned to start building a pilot power plant within the next five years.

Longview says that it intends to provide 50 MW of electricity to the grid by 2035 at the latest. The company acknowledges that this will not be easy, envisaging a laser efficiency and repetition rate of 18% and 10–20 Hz respectively. In particular, it says that while the necessary diodes already exist, they have “not yet been packaged into an integrated beamline for a fusion-scale laser”. But it remains confident that it can meet its deadline, noting that the laser is within a factor of two of the optics damage threshold needed for the pilot plant.

Not everyone is convinced. Stephen Bodner, previously head of the laser-fusion programme at the US Naval Research Laboratory in Washington DC, maintains that NIF’s “indirect-drive” technology wastes too much energy in generating X-rays (rather than illuminating fuel capsules directly). He is also sceptical of Longview’s claim that it can reduce the target cost to below $0.30 by spreading the considerable engineering and capital expenses over the 500 million targets it says it will need for its pilot plant. “There is no possible way for a fusion target like that used on NIF to ever be improved enough for commercial fusion energy,” he says.

Yet Longview is far from alone in believing that it has the technology at hand to bring fusion energy to the world. A report compiled last year by the Fusion Industry Association trade body lists 33 companies in the US and elsewhere as working on fusion technology – many of which also have aggressive timescales for developing power plants. One such company is First Light, based near Oxford, UK. Rather than using laser pulses to compress fuel capsules, First Light instead launches material projectiles – postage-stamp shaped pieces of metal – at extremely high speeds using the electromagnetic force provided by a huge bank of capacitors all discharging nearly instantaneously. The projectiles strike specially made targets, each of which directs and enhances the impact pressure on a fuel capsule embedded inside.

The company has so far raised some £80m in funding and demonstrated fusion using the largest pulsed-power facility in Europe. The next steps, according to co-founder and chief executive Nicholas Hawker, will be demonstrating ignition with a much bigger machine in around five years’ time and then a pilot plant in “the early- to mid-2030s”. Hawker admits that numerous challenges lie ahead – such as being able to load projectiles one after another and developing suitably robust high-voltage switches – but he is confident that the scheme’s physics is solid. “The fuel capsule is exactly the same as NIF’s so the recent result massively de-risks our system as well.” 

Cash needed

When it comes to physics, Betti reckons that inertial confinement fusion is better placed than magnetic confinement. While NIF has now demonstrated that the former can generate self-sustaining reactions, he argues that instabilities generated close to the ignition threshold mean there are still big uncertainties about whether tokamaks can follow suit. Nevertheless, he says that both forms of fusion must overcome formidable hurdles if they are to yield economically competitive energy – including, for laser fusion, the demonstration of high gains from mass-produced targets. “I find it hard to believe that an energy system can be ready in 10 years,” he says.

Carter is more optimistic. He maintains that pilot plants could be realized in about a decade’s time, so long as private companies lead the charge in their construction while governments support more basic underpinning research such as that on radiation-resistant materials. But he cautions that the necessary funding will be considerable – about $500m extra per year in the case of the US government. If the money is forthcoming, he adds, full-scale commercial plants might then turn on “sooner than 2050”.

As to which technology will end up inside the plants, Bodner insists it will not be based on indirect drive. Most likely, he maintains, it will be inertial confinement based on a different kind of laser system such as argon-fluoride gas lasers. But he acknowledges that scaling up any system brings uncertainties. And he praises NIF scientists for getting fusion research to this point. “They did a superb job over the past decade solving some very difficult physics problems,” he says. “They should be recognized for their great work.”

Ask me anything: Jessica James – ‘I am never happier than with my head down doing maths and coding’

Jessica James

What skills do you use every day in your job?

I am glad I learned not to be afraid of coding when I was doing my PhD – many periods of my life have seen me buried in code and spreadsheets for hours on end. I think that my ability to be completely obsessive is extremely handy here and in other areas of work. My PhD also started my writing career. I have published several books and many articles on physics and finance, and the ability to write for a variety of reader types, from technical to layperson, is very much part of my life. Public speaking is another part of my world; I’ve given talks at many different levels and I love audience feedback. I mourn the growth of remote presenting when all you have is a screen to talk to, though its undoubted convenience means that I can reach a global audience without leaving my home. Leadership and teamwork are likewise so important; I’ve always been able to switch between “team mode” and “manager mode” pretty easily, and many of these skills began to be acquired during my PhD. Finally, science has a strong code of ethics and integrity that has transferred to my finance career.

What do you like best and least about your job?

What is best is the excitement of discovering new things and I have found areas of finance and financial mathematics that other people did not even know existed. I am never happier than with my head down doing some maths or coding, and then finding that the clock has moved on many hours while I was working. I love being in a team and engaging with clever, interesting people. Also having writing and presenting as part of my job is a real pleasure. My least favourite thing is the time pressure – long hours, stress, demands on my time for crazy things. And I resent having to do my travel expenses when there’s a juicy problem to work on.

What do you know today that you wish you knew when you were starting out in your career?

A view on the future of the markets would have been useful. But I think it’s so important to value your own time and set limits and boundaries. We all need a balance in our lives and it’s so easy to get subsumed in work. Apart from that, there are two things I always say to people starting off in my industry: never be embarrassed about saying “I don’t understand” and never be afraid of saying “I made a mistake”. If folk could say these things more easily then a lot of the problems in the finance industry might never have happened. Personally, these days I love to say “I don’t know” – because then there is an opportunity to learn.

Patient positioning chair paves the way for upright radiotherapy

Upright radiotherapy

Cancer patients typically lie in a supine position (on their back) during radiotherapy. But for some malignancies, including thoracic, pelvic and head-and-neck tumours, upright body positioning may improve treatment delivery and possibly patient outcome. Upright treatments could increase tumour exposure, reduce radiation dose to adjacent healthy tissue and make breath holds easier for some patients.

To perform upright radiotherapy safely, however, patient immobilization is critical. With this in mind, researchers at the Centre Léon Bérard in France evaluated a patient positioning system currently in commercial development by Leo Cancer Care. The team assessed the immobilization accuracy, set-up time and comfort of the system for 16 patients undergoing radiotherapy for pelvic cancer (prostate, bladder, rectal, endometrial and cervix/uterine tumours).

Findings of the pilot study, reported in Technical Innovations & Patient Support in Radiation Oncology, are encouraging. Initial patient set-up took 4 to 6 min when performed by two radiation therapy technologists working together, and subsequent positionings took between 2 and 5 min. Inter-fraction repositioning was achieved with below 1 mm accuracy on average, and intra-fraction motion over 20 min was within 3 mm for more than 90% of patients. Most patients reported that upright positioning was as good as, and in some cases better than, the supine position that they had to maintain during their standard radiation treatment.

The positioning system (known as “the chair”) is designed to place the patient in appropriate postures according to the cancer type being treated. For prostate and pelvic treatments, patients are perched on the chair, supported by the back of the thigh and a knee rest. Patients are seated vertically for head-and-neck treatments, seated leaning slightly backward for lung and liver radiotherapy, and slightly forward for breast radiotherapy.

The chair itself comprises a seat, a backrest with arm support, a shin rest and a heel stop, all of which adjust to different positions and angles. The chair can rotate at a speed of one rotation per minute, and can simultaneously move in the cranio-caudal direction (vertically in this set-up) by up to 70 cm, allowing the generation of helical movement.
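
The helix itself is fixed by two numbers: the rotation rate and the vertical speed, whose ratio sets the pitch. A small sketch in which the vertical speed is an assumed illustrative value – the study quotes only the 1 rpm rotation and the 70 cm travel range.

```python
rotation_rate = 1.0   # chair rotation rate (revolutions per minute)
vertical_speed = 3.5  # assumed cranio-caudal speed (cm per minute)
travel = 70.0         # maximum vertical travel of the chair (cm)

pitch = vertical_speed / rotation_rate  # vertical rise per revolution (cm)
n_turns = travel / pitch                # revolutions to cover the full travel
duration = n_turns / rotation_rate      # time for one full helical sweep (min)

print(f"pitch: {pitch:.1f} cm/rev, turns: {n_turns:.0f}, sweep: {duration:.0f} min")
# With these assumed values one full 70 cm sweep takes 20 turns over 20 min;
# a faster vertical speed would fit several shorter helical passes into a session.
```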

The system incorporates an optical guidance and tracking system, comprising up to five high-resolution cameras. In this study, each patient had their own custom-moulded vacuum cushion and a belt was positioned on the upper part of their abdomen.

For the study, participants undergoing conventional radiotherapy had three additional appointments during their scheduled treatment course to test the upright positioning device. Patients were repositioned at the second and third appointments and the researchers verified the repositioning accuracy using the optical image reference system. They note that image registration was performed using the skin surface, with no skin tattoos or landmarks needed. After being accurately positioned, patients underwent a simulated treatment session with several helical movements lasting 20 min.

Principal investigator Vincent Grégoire and his colleague Sophie Boisbouvier calculated the inter-fraction position shifts after manual registration between reference images and images taken during the repositioning. They report that the chair provided accurate repositioning, with average inter-fraction shifts of –0.5, –0.4 and –0.9 mm in the x-, y- and z-directions, respectively.

The researchers also monitored intra-fraction motion during the chair movements, performing positioning checks every 4 min. After 20 min, the mean intra-fraction shifts were 0.0, 0.2 and 0.0 mm in the x-, y- and z-directions, respectively. Only 10% of patients had inter-fraction shifts exceeding 3 mm and intra-fraction motion exceeding 2 mm. The majority of patients reported that they were more comfortable in the upright position than in the supine position, and all said that they could breathe comfortably when upright.

The study revealed some modifications required for the chair, including a redesign of the belt to improve patient comfort and improvements to head positioning. The researchers intend to investigate a new backrest that is optimally designed to position the head and neck, and they plan to conduct similar assessments of patient immobilization for head-and-neck, lung, breast and upper-abdomen tumours.

Grégoire says that the team plans to compare upright and supine positioning in terms of internal tumour position and motion for the tumour types investigated. The researchers will also perform in silico comparisons of dose distributions for patients in supine and upright positions, for both photons and protons, and hope to estimate potential gains in terms of normal tissue complication probability (NTCP) and tumour control probability (TCP).

The Centre Léon Bérard is a research partner of Leo Cancer Care, which is developing a range of upright radiotherapy products. In addition to the positioning system, these include a vertical diagnostic CT scanner and a 6 MV horizontal-beam linear accelerator to deliver rotational image-guided intensity-modulated radiotherapy. After French government regulatory authorities authorize the importing of the CT scanner, which does not yet have a CE Mark, the team plans to include vertical CT imaging in future research.
