
Academic paediatric facilities deliver lower dose during CT exams

Monitoring the X-ray dose delivered to patients during diagnostic radiology procedures is an important first step towards improving patient care. Keith Strauss and colleagues from Cincinnati Children’s Hospital Medical Center compared the dose delivered to children in CT examinations and found statistically significant variation in the mean dose delivered across different types of institutions.

Strauss and colleagues pooled data from 239,622 paediatric CT examinations in the Dose Index Registry from 519 medical imaging centres. They statistically tested the hypothesis that academic paediatric CT providers use diagnostic imaging protocols specifically designed for children that deliver lower radiation doses. The researchers concluded that academic paediatric centres delivered lower radiation dose across all brain examinations, and the majority of chest and pelvis examinations, compared with alternative medical imaging centres (Radiology 10.1148/radiol.2019181753).

The researchers compared patient size-adjusted dose indices across four categories of diagnostic CT providers: academic paediatric, academic adult, non-academic paediatric and non-academic adult. Imaging centres with a medical school affiliation were designated as academic; those listed by the Children’s Hospital Association as paediatric were classified as paediatric facilities.

The team chose to compare three recognised dose indices: the volume CT dose index, the size-specific dose estimate and the dose–length product. To adjust for patient size, the researchers used the patient effective diameter derived from localizer scans, which are used to check patient positioning prior to CT examination. They distributed the data into six patient size categories for chest and pelvic examinations, and five size groups for brain examinations.

Strauss and colleagues used the mean dose index from academic paediatric facilities as a benchmark and compared this value with the mean dose indices from the three remaining categories, using the unequal-variance two-sample t-test (Welch’s t-test). Since multiple comparisons were made, the researchers also adjusted the confidence level using the Bonferroni correction.
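The statistical recipe is straightforward to sketch in code. The following Python snippet is purely illustrative: the dose values, group sizes and number of comparisons are invented stand-ins, not data from the Dose Index Registry.

```python
# Illustrative sketch of a Welch (unequal-variance) t-test with a
# Bonferroni-adjusted significance level; all numbers are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical size-specific dose estimates (mGy) for one patient-size group
academic_paediatric = rng.normal(loc=3.0, scale=0.8, size=200)
other_facility = rng.normal(loc=3.6, scale=1.1, size=200)

# Welch's t-test does not assume equal variances in the two groups
t_stat, p_value = stats.ttest_ind(academic_paediatric, other_facility, equal_var=False)

# Bonferroni correction: divide the nominal alpha by the number of comparisons
n_comparisons = 3 * 3 * 6   # e.g. 3 facility types x 3 dose indices x 6 size groups
alpha_adjusted = 0.05 / n_comparisons
print(f"t = {t_stat:.2f}, p = {p_value:.3g}, significant: {p_value < alpha_adjusted}")
```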

Considering all six patient size groups and all three dose metrics, academic paediatric facilities delivered a lower mean dose than adult academic, adult non-academic and paediatric non-academic facilities in 78% of paediatric chest exam comparisons and 89% of pelvic exam comparisons. For paediatric brain imaging studies, the academic paediatric facilities delivered a lower mean dose in every comparison.

The researchers also compared the variance in the three radiation dose metrics across the different providers. Again, they found that the academic paediatric facilities delivered a less variable dose in the majority of comparisons.

This study highlights the role that effective dose monitoring may play in patient health and outcomes in diagnostic radiology. “Dose management is important for all patients, both paediatric and adult patients. This begins with patient dose monitoring. However, the monitoring step by itself does nothing to improve patient care. Additional steps must be taken to carefully manage the CT radiation of all patients to have a positive impact on patient care,” stresses Strauss.

Twistronics lights up with moiré exciton experiments

When the twist-tunable electronic properties of “magic-angle” bilayer graphene were first announced last year, the American Physical Society (APS) convened a special session in the atrium of the conference venue to accommodate the throngs of attendees eager to hear the details. Editors at Nature may have also considered convening a special section of their journal this week to accommodate the flash flood of announcements reporting evidence of a new kind of quantum optical behaviour in similar twist-tunable stacks of other types of 2D materials.

“We were actually surprised that we didn’t see the effects sooner,” says Xiaodong Xu, principal investigator at the Nanoscale Optoelectronics Laboratory at the University of Washington in the US, and a corresponding author on one of this week’s papers. No fewer than three papers reported experimental results indicating the presence of interlayer excitons trapped in the periodically dappled moiré potential field that results when two atomically thin layers of transition metal dichalcogenides (TMDs) are misaligned. Yet while Xu suggests the discoveries were in some ways due, he highlights how specific the conditions for observing moiré-trapped interlayer excitons are, which may explain why they have only just been observed.

“It turns out the moiré effects can be obscured somewhat by an imperfect moiré pattern, which will lead to light emission similar to defects, and excess laser excitation power, which will provide a broad background,” Xu tells Physics World. “In fact, we have seen effects of moiré excitons several years back, but we just did not know what we were looking at and what the evidence for moiré excitons should be.  Once we started to excite the system with lower laser power and perform magneto-optical spectroscopy of samples with different twist angles, we began to realize that moiré-trapped excitons have been there all the time!”

The results highlight the “feasibility of engineering artificial excitonic crystals using van der Waals heterostructures for nanophotonics and quantum information applications”, as Fengcheng Wu at Argonne National Laboratory, Xiaoqin Li at the University of Texas at Austin and their colleagues propose in their report. In addition, the work has fundamental significance, providing “an attractive platform from which to explore and control excited states of matter, such as topological excitons and a correlated exciton Hubbard model, in transition metal dichalcogenides,” according to the report by Feng Wang at the University of California, Berkeley, and co-authors.

Excitons meet 2D materials

An exciton is a quasiparticle that forms when an electron excited out of its band binds to the quantum “hole” it leaves behind. Excitons have been studied in many systems, including 2D materials and TMDs. In fact, the past decade has seen a general surge in studies of TMDs – compounds with the formula MX2, where M is a transition metal such as molybdenum or tungsten and X is a chalcogen such as sulphur, selenium or tellurium – particularly in the form of atomically thin 2D materials in which a single layer of metal atoms is sandwiched between two layers of the chalcogen.

Xu’s group is one of several that have studied the optical response of monolayer TMDs for many years. He describes how a monolayer TMD behaves as a quantum well. However, in addition to spin, charge carriers in TMDs have an extra “valley” degree of freedom, leading to quantum wells with coupled spin–valley physics. As a result, as far back as 2013, Xu’s group started pushing the idea of stacking two monolayers on top of each other to mimic the double quantum-well structures found in other materials, such as III–V semiconductors like GaAs/AlGaAs. They also expected that the atomically thin nature of these structures would make them highly tunable.

“In our previous work, we realized that the system is a very exciting platform for manipulating excitons with long lifetimes and spin-valley degrees of freedom,” says Xu. “In developing the understanding of our experimental results, our theory collaborator Wang Yao and his postdoc Hongyi Yu at the University of Hong Kong realized that there exists very interesting moiré exciton physics in the twisted heterobilayer system. Since then, we have been working on the experimental realization of these moiré effects.”


Different ways to peel an orange

Although all three papers report moiré excitons in 2D TMDs, there are some important distinctions in what each one demonstrates. Wu, Li and colleagues studied the response of MoSe2/WSe2 heterobilayers twisted at an angle of 1° and encapsulated in hexagonal boron nitride. Their photoluminescence studies with circularly polarized light revealed peaks in the spectra indicative of excitons at four distinct energies. Since the exciton “Bohr radius” – an analogue of an atomic radius – is much smaller than the period of the moiré pattern, they attribute the four quantized energy levels to lateral confinement imposed by the moiré potential. As further support for this model, when the twist angle is increased to 2° and the moiré pattern changes, the spacing of the exciton peaks also increases, although at significantly larger angles the peaks disappear altogether. Their model also suggests an explanation for the co- and cross-circular emission from their structures.
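To get a feel for the length scales involved, the moiré period can be estimated from the twist angle and the lattice mismatch using the standard small-angle relation λ ≈ a/√(δ² + θ²). The sketch below uses a nominal TMD lattice constant and an assumed mismatch of roughly 0.1%, not values quoted in the papers.

```python
# Rough estimate of the moiré superlattice period versus twist angle,
# using the small-angle approximation lambda ≈ a / sqrt(delta^2 + theta^2).
# Lattice constant and mismatch are nominal, for illustration only.
import numpy as np

a = 0.33e-9      # approximate TMD lattice constant (m), assumed
delta = 0.001    # assumed ~0.1% lattice mismatch between the two layers

for angle_deg in (1.0, 2.0, 5.0):
    theta = np.radians(angle_deg)
    period = a / np.sqrt(delta**2 + theta**2)
    print(f"twist {angle_deg:.0f} deg: moire period ≈ {period * 1e9:.1f} nm")
```

At 1° this gives a period of roughly 19 nm, far larger than a TMD exciton Bohr radius of order a nanometre, which is why the moiré potential can laterally confine the excitons; by 5° the period has shrunk to a few nanometres and that confinement picture breaks down.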

The heterostructures in the work reported by Wang and colleagues – some of whom are also collaborators with Wu, Li and colleagues – are WSe2/WS2 encapsulated in hexagonal boron nitride. These experiments include few-layer graphene contacts to observe the effects of tuning the carrier doping with an applied electric field. They observe emission peaks at three slightly higher energies that weaken and eventually disappear as the angle between the layers is increased beyond 3°. They note that exciting the sample at any of these three emission peak energies leads to strong enhancement of the interlayer exciton emission at 1.409 eV, which they suggest indicates that the three peaks arise from the strongly coupled WSe2/WS2 heterostructure rather than from several separated domains. In addition, they observe strong blue shifts in the exciton energies as doping is increased, an effect that influences all the peaks similarly. This behaviour is not observed in monolayers of the materials and cannot be explained by established electron–exciton interactions in monolayers.

Applying a magnetic field reveals yet more nuances in the behaviour of the moiré excitons, as Xu and colleagues report in their study of MoSe2/WSe2 superlattices. They find that the emission energies of co- and cross-circularly polarized light shift by equal and opposite amounts as the magnetic field is varied. They demonstrate that the slopes of these shifts, which give the g-factors (a dimensionless magnetic moment), are determined by how the valley indices are paired. They also show that the twist in the lattice can compensate for momentum mismatches in the excitons so that they recombine radiatively (known as Umklapp recombination). “We show that we can use g-factors as a fingerprint to identify moiré excitons,” Xu adds. “This approach should be applicable to understand other optical responses.”
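In practice, a g-factor of this kind is extracted from the linear slope of the Zeeman splitting between the two circular polarizations as a function of magnetic field. The sketch below shows such a fit with invented data; the field values, splittings and sign convention are illustrative and are not taken from the paper.

```python
# Illustrative g-factor extraction from a Zeeman splitting assumed to vary
# linearly with magnetic field, delta_E = g * mu_B * B. All data are made up.
import numpy as np

mu_B = 5.788e-5   # Bohr magneton in eV/T

B = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])                         # field (T)
delta_E = np.array([0.0, -0.92, -1.81, -2.75, -3.66, -4.60]) * 1e-3  # splitting (eV)

slope, _ = np.polyfit(B, delta_E, 1)   # slope in eV/T
g_factor = slope / mu_B
print(f"g-factor ≈ {g_factor:.1f}")
```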

Drawing on diverse expertise

So what brought all three reports at once? Xu suggests that advances in theory, greater availability of high-quality heterobilayer samples, and the mounting interest in moiré effects have all played a role. He highlights some of the factors that have made a difference for the Xu Lab in particular, such as access to clean 3D bulk crystals provided by their long-term collaborators Jiaqiang Yan and David Mandrus at Oak Ridge National Laboratory, expertise acquired in fabricating structures with fine twist control and in measuring light-emission polarization and g-factors, and crucially the theoretical support from Yao to understand the measurements.

“We believe that the basic results are quite experimentally achievable in many other groups; it’s just a matter of making clean-enough samples with different twist angles and using the right experimental conditions (low excitation power, low temperature, magnetic field control, etc),” says Xu. He adds that the diverse expertise within this sector of the research community, as highlighted by the differences between this week’s reports, can help towards a deeper understanding of moiré excitons for both fundamental physics and potential applications. “It will be important for future progress in this new field to have productive cross-talk or collaboration between our groups and the community as a whole.”

Full details are reported in Nature at DOI: 10.1038/s41586-019-0975-z; DOI: 10.1038/s41586-019-0976-y; and DOI: 10.1038/s41586-019-0957-1

Physics World 30th anniversary podcast series – high-temperature superconductivity

Physics World has recently turned 30 and we are celebrating with a five-part podcast series exploring key areas of physics. This fourth episode in the series explores how high-temperature superconductivity research has evolved over the past three decades since the phenomenon was first observed.

In the late 1980s there was a lot of hype surrounding these materials because of the many exciting applications that would follow. Among the promised spin-offs were lossless transmission lines, powerful superconducting magnets and levitating trains. All of these applications have been demonstrated to some extent but it is also fair to say that high-temperature superconductors are not as ubiquitous as some had hoped.

In this podcast, Andrew Glester picks up the story to find out more about the history of high-temperature superconductivity and its prospects for the future. He catches up with the physicists Elizabeth Blackburn from Lund University in Sweden and Stephen Hayden from the University of Bristol, UK.

If you enjoy the podcast, then take a listen to the first three podcasts in the 30th anniversary series. Glester began in October by looking at the past and future of particle physics before tackling gravitational waves in November and then nuclear fusion in January. Don’t forget you can also subscribe to Physics World Stories via Apple podcasts or your chosen podcast host.

Energy saving – and use – without end

Amory Lovins from the Rocky Mountain Institute, US, says the cost of energy efficiency will fall, not rise, with wider use if we adopt integrated system design. That’s a divergence from the traditional view that, once we’ve exploited the easy “low-hanging fruit” energy-saving options, it will get harder and more expensive to make further savings.

It may be true that as energy-saving measures are rolled out widely, the technology will get cheaper thanks to economies of production volume and learning-curve improvements. But in his recent paper, Lovins goes further: integrated design offers large additional cost savings. That’s because we are moving from energy-efficiency upgrades — add-ons to basically unchanged systems — to complete new system designs.

Economic theory cannot reveal whether efficiency’s ‘low-hanging fruit’ — a misnomer for eye-level fruit — will dwindle or grow back faster than it is harvested, but experience so far strongly suggests the latter

Amory Lovins

Lovins offers examples from the building sector, with big savings possible via designs incorporating full insulation that eliminate the need for heating systems, as at the Rocky Mountain Institute’s HQ. Eliminating or downsizing heating and cooling needs is obviously good news, assuming it does not cost too much, which is what Lovins claims. He offers similar gains from system design approaches in industry; energy use can be reduced by clever design and new tech. This seems fair enough — you can squeeze out energy use and cut costs. But can that process be continued repeatedly? Aren’t there final limits? Lovins seems to think not. At least, not until we get to zero energy use, or low residual energy use based on renewables.

Lovins cites the US energy-intensity data, noting the continually falling level of energy/GDP. “US primary energy intensity has more than halved as controversially foreseen in 1976, but another threefold drop is now in view and keeps getting bigger and cheaper,” he says. Lovins believes similar trends are possible in China and Europe.

Post-scarcity economics

This may all seem very optimistic but Lovins is upbeat, even utopian. “Today’s efficiency-and-renewables revolution is not only a convergence of technology plus design plus information technology,” he says. “It reflects no less than the emergence of a new economic model. Today’s energy transition exhibits not the Ricardian economics of scarcity, like diminishing returns to farmland and minerals, but the complementary modern economics of abundance, with expanding returns. These flow from mass manufacturing of fast granular technologies with rapid learning, network effects, and mutually reinforcing innovations. With those new driving forces, today’s emergent paradigm for profitable climate stabilization envisions an energy-and-land-use transformation not slowed by incumbents’ inertias but sped by insurgents’ ambitions.”

I’m reminded of Murray Bookchin’s classic anarchist work in the early 1970s on post-scarcity economics. But also, less palatably, of Simon and Kahn’s hyper-optimistic 1980s hi-tech and innovation-led Cornucopia. There again, let’s not knock optimism too much. Launching the International Renewable Energy Agency (IRENA) publication A New World: the Geopolitics of the Energy Transformation, IRENA director-general Adnan Amin said that the transition to renewables and away from fossil fuels is “a move away from the politics of scarcity and conflict to abundance and peace with new opportunities for many countries”. In broad terms, that’s hard to gainsay.

On the issue of efficiency, Lovins writes in Physics World sister publication Environmental Research Letters (ERL) that “Economic theory cannot reveal whether efficiency’s ‘low-hanging fruit’ — a misnomer for eye-level fruit — will dwindle or grow back faster than it is harvested, but experience so far strongly suggests the latter. For example, after decades’ effort, the real costs of Pacific Northwest electric savings have nearly halved while their quantity tripled since the 1990s.”

Looking more broadly, Lovins cites the global low energy demand scenario produced by Grübler et al. Compared with more conventional scenarios, Lovins says, this “enables 80%-renewable 2050 supply and more-granular, faster-deployable scale, needs several-fold lower supply-side investment and far less policy dependence, leaves an ample 50% ‘safety margin’ in demand, yields major positive externalities, and needs no negative emissions technologies”.

Cutting energy demand

The study by Grübler et al. is certainly interesting. As I noted in an earlier post, it claims that it is possible to reduce global energy demand to 245 EJ by 2050, around 40% lower than today, despite rises in population, income and activity. It looks to an energy services approach and to the widespread use of digital systems to improve efficiency, enable integration and meet end-use energy requirements interactively.

That’s also a feature of A Distributed Energy Future for the UK, a series of essays on shifting to decentralized power published by the Institute for Public Policy Research, which I mentioned in my last post. The essay series focused on “prosumer” initiatives, aided by digital system integration. Looking more broadly, Blueprint for a Post-Carbon Society, a study by Imperial College London and Bristol-based energy supply company OVO Energy, has calculated that in a high UK renewables scenario the use of residential flexible technologies such as smart electric vehicle (EV) charging, smart electric heating and in-home battery storage could save the UK energy system £6.9bn.

The savings were calculated as £1.1bn from smart EV charging, £3.5bn from vehicle-to-grid (V2G) EV charging, £3.9bn from smart heating systems and £2.9bn from in-home batteries. That would be equivalent to a £256 saving on the average household energy bill each year. The scenario relies on the uptake of 25 million EVs and 21 million electric heating units by 2040, which the report says is “ambitious but achievable”.
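As a rough cross-check of the per-household figure (my arithmetic, not the report’s): spreading the £6.9bn annual system saving across roughly 27 million UK households — an assumed number — does indeed give about £256 per household per year.

```python
# Back-of-envelope check; the household count is an assumption, not from the report.
system_saving = 6.9e9    # £ per year, quoted system-wide saving
households = 27e6        # approximate number of UK households (assumed)
print(f"≈ £{system_saving / households:.0f} per household per year")
```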

“Flexible storage, located near consumption and found in EVs, smart electric heating and home energy storage devices offers a perfect solution to ease grid capacity issues and will limit the need for expensive grid upgrades and reinforcements,” the report adds. “The energy storage found in these behind-the-meter (BTM) devices can act like an energy reservoir, soaking up cheaper renewable power that can then be used when required or released back into the grid at times of peak demand.” The report suggests that smart electric heat can provide enough flexibility to enable green generation from wind and solar alone, displacing the need for nuclear and carbon capture and storage.

It’s all visionary stuff. Though given the problems that have faced the UK smart meter programme, some of these visions may be a tad optimistic about how easy it will be to optimize energy use, and energy savings, across millions of homes and other locations, with EVs added to the mix. But we certainly should try. And energy efficiency, aided by smart system management, is clearly a key part, making it easier for renewables, large and small, to supply the reduced demand. This all seems to be about electricity, though — as was the case in the DNV-GL study I looked at in my previous post. Electricity is certainly getting pushed hard as the best decarbonization route in every sector. But in my next post I ask whether the future must be all-electric. There are other options for energy supply and use, and for storage and system balancing.

Focused ultrasound releases cancer drugs on target


The ability of focused ultrasound to heat tissue enables a range of non-invasive therapies, such as tumour ablation or relief of essential tremor. Another application under investigation is the use of ultrasound to induce mild hyperthermia (heating by no more than 6°C) and trigger drug release from thermosensitive liposomal carriers, enabling targeted drug delivery to tumours.

There are, however, challenges in translating ultrasound-induced drug release from small-animal studies to clinical use, including identification of a suitable technique for monitoring the heating process. MRI thermometry can measure temperature in real time, but its high costs limit widespread application. Implanted temperature sensors, meanwhile, are necessarily invasive and may increase risk to the patient, as well as limiting which patients can be monitored.

Now, a team from the University of Oxford has investigated the use of computational planning models to determine the focused ultrasound parameters required to release drugs from liposomes, without the need for real-time thermometry. Such an approach could make ultrasound-mediated targeted drug delivery more widely accessible and simpler to administer than techniques that employ MRI guidance (Radiology 10.1148/radiol.2018181445).


“A key objective in developing the planning model was to not only validate the safety and feasibility of non-invasively triggered drug delivery in oncology, but to do so in a way that would enable large-scale adoption and deployment of the technique if successful,” explains first author Michael Gray. “We hypothesized that neither expensive nor invasive thermometry would be necessary for mild hyperthermia-based treatments employing non-invasive ultrasound for targeted heating.”

Patient-specific models

The study, part of the TARDOX trial, included 10 participants with liver tumours who were treated using focused ultrasound to release the cancer drug doxorubicin from liposomes. For the first six patients (group 1), the researchers used an implanted sensor to monitor temperature during ultrasonic heating of target tumours. They treated the other four patients without real-time thermometry, in a step towards fully non-invasive therapy.


Gray and colleagues developed a model to create focused ultrasound treatment plans, using a combination of participant data and finite element calculations. For each patient, the model recommends treatment parameters and predicts the resulting temperature fields.

“Patient images were used to create a combined acoustic/thermal anatomical model of the patient and target tumour,” explains Gray. “Based on segmentation of CT and MRI data, each tissue type and tissue layer was assigned specific acoustic and thermal properties to enable predictive modelling of acoustic propagation and ultrasound-mediated hyperthermia.”
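To give a flavour of the thermal side of such a planning calculation, here is a minimal sketch that solves a one-dimensional Pennes bioheat equation with a Gaussian heat source standing in for the ultrasound focus. The tissue properties, perfusion value, source strength and geometry are generic assumptions for illustration; they are not parameters of the TARDOX planning model, which couples full acoustic and thermal simulations to patient anatomy.

```python
# Minimal 1D Pennes bioheat sketch (explicit finite differences); all values are
# illustrative assumptions, not those of the TARDOX planning model.
import numpy as np

rho, c, k = 1050.0, 3600.0, 0.52   # tissue density (kg/m^3), heat capacity (J/kg/K), conductivity (W/m/K)
rho_b_c_b = 3.8e6                  # blood volumetric heat capacity (J/m^3/K)
w_perf = 0.01                      # blood perfusion rate (1/s), assumed
T_body = 37.0                      # baseline temperature (deg C)

L, nx = 0.08, 161                  # 8 cm slab through the target
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
alpha = k / (rho * c)
dt = 0.2 * dx**2 / alpha           # comfortably below the explicit stability limit

# Gaussian heat deposition centred on the focus (W/m^3), amplitude assumed
Q = 2.0e5 * np.exp(-((x - L / 2) / 0.003) ** 2)

T = np.full(nx, T_body)
for _ in range(int(600.0 / dt)):   # simulate 10 minutes of sonication
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T += dt * (alpha * lap + (rho_b_c_b * w_perf * (T_body - T) + Q) / (rho * c))
    T[0] = T[-1] = T_body          # far-field boundaries held at body temperature

print(f"Predicted peak rise after 10 min: {T.max() - T_body:.1f} °C")
```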

The calculated treatment parameters showed that the prescribed power scaled approximately with target depth. For seven patients in whom model predictions were available at the time of treatment, the differences between predicted and implemented powers (mean of 3.5 W) were not clinically significant relative to the power used (mean of 64 W). These results indicate that the model consistently provides settings that are safe and agree with those chosen in the presence of thermometry.

The team also retrospectively created models for the first three participants, who were treated before model availability. The largest prediction discrepancy was seen for a patient with a target nearly 5 cm deeper than any others. On the basis of thermometry, this treatment was substantially underpowered. The researchers note that using model-predicted settings (had they been available) should have improved target heating.

For the three group 1 participants with treatment volumes of 52 cm3 or less (about the size of a golf ball), measured treatment-averaged temperatures were within 0.1–0.3°C of the model predictions. For participants with larger tumours, the prediction was 1.4–1.7°C below the measured value, suggesting that the small sensor used was not a reliable indicator of the median temperature of larger targets.

Treatment response

To assess therapeutic response, the team imaged all participants before and after treatment using contrast-enhanced MRI and CT, and FDG-PET/CT. In the first patient, PET revealed a 36.4% reduction in total lesion glycolysis of the target tumour, whereas there was no substantial response in a similarly sized tumour that received drug but no focused ultrasound. In six of the 10 participants, partial responses were seen after just one treatment cycle.


Histologic and MRI data showed no evidence of thermal tissue ablation, confirming the safety of this procedure. There were no skin burns, off-target tissue damage, or other clinically significant adverse effects related to focused ultrasound. The researchers also found that the model-prescribed treatments resulted in similar levels of enhanced drug delivery with or without real-time thermometry.

The team concluded that the study supports the feasibility and safety of using planning models to define treatment parameters for targeted hyperthermic drug delivery to liver tumours without real-time thermometry.

“We are currently working on several additional clinical applications for the techniques developed in TARDOX, and hope to begin another Phase 1 trial in another indication in the next 12 months,” Gray tells Physics World. “We are also exploring ultrasound-enhanced drug delivery using cavitational rather than thermal ultrasound mechanisms and will be starting a first-in-man study of this approach over the course of 2020.”

Shedding light on XHV-capable materials

Less, it seems, is always more in the rarefied world of extreme-high-vacuum (XHV) systems. Operating at pressures of 10⁻¹⁰ Pa and lower, XHV is a core enabling technology of many big-science programmes – think the Large Hadron Collider at CERN or the LIGO gravitational-wave observatory. At the other end of the scale, XHV underpins all manner of small-science endeavours – from R&D on quantum computing to the fabrication of next-generation semiconductor chips. Despite such versatility, significant gaps remain when it comes to understanding, comparing and benchmarking the technical specifications and performance of XHV chambers from different manufacturers.

Fundamental to the successful operation of any XHV system is a chamber with ultralow outgassing rates, such that gases (typically hydrogen) dissolved in the bulk of the chamber material are removed or prevented from leaving the material surface. Trouble is, commercial vendors rarely report or specify outgassing rates for their XHV chambers. When they do, it is often tricky to compare the experimental data because different studies use different chamber geometries, environments, and sometimes poorly defined or poorly implemented measurement techniques.

In short, there is no industry consensus on the optimum manufacturing route – in terms of material composition, chamber geometry, and heat and surface treatment – to deliver XHV systems with ultralow outgassing rates.

Cooperate to accumulate

That could be about to change, however, thanks to a collaboration between Anderson Dahlen – Applied Vacuum Division, a specialist US supplier of ultrahigh-vacuum (UHV) and XHV systems to research and industry, and scientists at the National Institute of Standards and Technology (NIST), the US national measurement laboratory. Their work-in-progress study, formalized under a US government Cooperative Research and Development Agreement (CRADA), aims to evaluate the effectiveness of a range of materials and processing options in achieving ultralow outgassing rates in XHV chambers.

“Obtaining really low vacuum is a fight between pumping, or our ability to remove gas from a vacuum system, versus outgassing from the materials in the vacuum chamber or from the vacuum chamber itself,” explains Jim Fedchak, who heads up the outgassing studies within NIST’s Physical Measurement Laboratory. “Getting chambers made from ultralow outgassing materials is critical for scientists and industry engineers who depend on UHV or XHV environments.”

What’s more, the benefits are not just restricted to technical performance. For large vacuum systems, low outgassing rates mean that fewer pumps are required, with the potential to yield big savings on upfront capital outlay and ongoing operational expenditure.

All of which equates to significant commercial differentiation if you happen to be a supplier of XHV technology, claims Ben Bowers, regional sales manager for Anderson Dahlen – Applied Vacuum Division. “Despite the fact that we have many very happy repeat customers for our XHV products, they don’t publish data about the outgassing performance of their systems. We’re looking for independent validation of our XHV credentials, so who better to work with on that than NIST.” 

Standardize and compare

As the industry partner in the CRADA, Anderson Dahlen supplied NIST with seven UHV/XHV-specified test chambers, all with the same size and geometry. Using chambers of the same geometry enables a better comparison of outgassing rates, which are measured using spinning rotor gauges in a custom, computer-controlled manifold developed by NIST to enable temperature-dependent studies of outgassing.
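As a simple illustration of what an outgassing measurement involves, the sketch below estimates a specific outgassing rate from a rate-of-rise measurement: isolate the chamber from its pump and fit the slope of the pressure recorded by the gauge. The chamber volume, surface area and pressure readings are invented, and NIST’s temperature-controlled manifold procedure is considerably more involved than this.

```python
# Generic rate-of-rise estimate of a specific outgassing rate; all numbers invented.
import numpy as np

V = 0.012   # chamber volume (m^3), assumed
A = 0.35    # internal surface area (m^2), assumed

# time (s) and pressure (Pa) after valving off the pump
t = np.array([0, 600, 1200, 1800, 2400, 3600])
p = np.array([2.1e-8, 3.0e-8, 3.9e-8, 4.8e-8, 5.7e-8, 7.5e-8])

dpdt, _ = np.polyfit(t, p, 1)   # pressure rise rate (Pa/s)
q = V * dpdt / A                # specific outgassing rate (Pa·m^3·s^-1 per m^2)
print(f"Specific outgassing rate ≈ {q:.1e} Pa·m³·s⁻¹·m⁻²")
```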

Although their geometry is uniform, the test chambers are constructed from five different metals: titanium, aluminium, 304L stainless steel, 316L stainless steel and 316LN electroslag remelt (ESR) stainless steel (a high-specification steel that’s refined to remove impurities). “The most commonly used material for vacuum chambers is 304L stainless steel, which is also one of the most commonly used stainless steels,” explains Fedchak. “For XHV, we either require special treatments of the stainless steel or a different material altogether.”


With this in mind, Anderson Dahlen handed over five of the test chambers with no heat/surface treatment prior to outgassing evaluation at NIST, while the two additional chambers – one made from 316L stainless steel, the other from 316LN-ESR – were vacuum-fired at temperatures above 950 °C.

“We heat the chambers north of 950 °C in a vacuum for an extended timeframe – basically driving the hydrogen out via heat followed by a controlled cool-down,” says Bowers.

While the outgassing studies at NIST are still ongoing, several trends are emerging. Early results indicate that aluminium, titanium and vacuum-fired 316L stainless steel all offer XHV levels of outgassing, reducing outgassing by potentially significant amounts compared to standard 304L stainless steel. Perhaps more surprisingly, the vacuum-fired ESR steel — currently the XHV material of choice for several big-science projects — appears to offer no outgassing advantage over 316L stainless steel.

Once confirmed, these findings will be detailed in upcoming journal publications. As a next step, the team plans to electropolish and air-bake the two XHV-processed chambers to see how these additional treatments affect outgassing behavior.

“This research breaks new ground,” claims Bowers. “No one has ever done this kind of comparative outgassing study on all of these materials before under standardized conditions.”

Bowers adds: “We were willing to expose ourselves here – and specifically our belief that stainless steel is just as good as titanium and aluminium for XHV applications. Commercially, we’re looking forward to getting the proof out there when these results are published formally in a scientific journal.”

Meanwhile, Fedchak points out that the outgassing studies are yielding pay-offs for NIST’s wider standards effort in the XHV regime. “NIST is interested in creating vacuum pressure standards that operate in the UHV and XHV,” he explains. “We are currently creating the cold-atom vacuum standard (CAVS), which will be both a primary standard and a sensor operating in the UHV and XHV. Materials with the best outgassing rates are excellent candidates to be used for the CAVS.”

He concludes: “The portable version of CAVS will provide a ‘drop-in’ substitute for existing vacuum gauges, allowing accurate measurement of vacuum even at the lowest levels—levels which are becoming more and more important in areas such as quantum information science.”

 

Decisions, decisions: which XHV material?

Ben Bowers, regional sales manager for Anderson Dahlen – Applied Vacuum Division, talked to Physics World about the commercial pros, cons and trade-offs associated with the various material options for UHV and XHV applications. Here is his summary take:

  • 316L stainless steel is easily acquired in all sizes of sheet metal, plate and bar. Most manufacturing companies are tooled to machine 316L, which is also a very “weldable” material for UHV/XHV applications. Price-wise, 316L is on a par with aluminium and less expensive versus titanium.
  • Aluminium is difficult to weld for UHV/XHV applications owing to the large heat zone during welding. Aluminium also requires the use of bimetal flanges (i.e. explosion-bonded aluminium to stainless-steel or titanium), mainly because aluminium knife-edges are soft and will deform or fail to seal properly as users open and close the vacuum chamber. However, aluminium is a magnetically inert material – a critical feature for some big-science experiments.
  • Titanium is a lot more expensive when compared with 316L and aluminium. It’s also harder to machine and not as easy to acquire in the same assortment of material sizes. Furthermore, welding titanium for UHV/XHV applications requires a completely oxygen-free environment – which means that the manufacturer either needs a glovebox or heavy inert-gas purge of the material while welding. An added complication is titanium’s coefficient of thermal expansion, which is nearly half that of stainless steel. This means there are potential sealing issues when instruments are attached to the vacuum chamber with stainless-steel flanges – such as when the chamber is baked during operation. As per aluminium, titanium is better than stainless steel if users need a magnetically inert material.

“Ultimately,” Bowers concludes, “it’s all about customer choice. Tell us which route you’d like to go and we’ll build it.”

X-rays suggest lower-mantle magma could be stabilized by heavy elements

A new method for studying high-pressure samples has been developed by researchers in Germany. Their approach, which involves X-ray emission spectroscopy at a synchrotron facility, supports the idea that magma in the Earth’s lower mantle is stabilized by the accumulation of heavy elements.

Magma rises in Earth’s crust because it is less dense than surrounding material of the same composition. Magma in the Earth’s lower mantle, however, appears more stable, suggesting that there it has a density similar to its surroundings. It has been proposed, therefore, that either magma in the mantle is enriched in heavy elements such as iron, or that at extreme pressures a special compaction mechanism increases magma density.

To investigate how materials behave at mantle depths, researchers create extreme pressures by compressing samples in a diamond anvil. X-rays – energetic enough to pass through the sample and short enough in wavelength to resolve atomic-scale details – are then used to determine the sample’s structure. Two such methods are traditionally used in high-pressure research, with one based on the absorption of X-rays and the other on their diffraction as they pass through the sample.

Energy and intensity

Now, Georg Spiekermann of the University of Potsdam and colleagues have developed a third X-ray method that can determine both the atomic bond lengths in disordered matter and the number of direct neighbours an atom has – the so-called “coordination number”. An increase in coordination number under high pressure would be one sign of a heightened compaction mechanism. The new approach works by exciting a sample with X-rays and then analysing the radiation emitted. The energy and intensity of a particular emission line – dubbed Kβ” – can be used to determine the coordination number and bonding distance, respectively.

Using the PETRA III X-ray source at DESY in Germany, the researchers applied the technique to compressed amorphous germanium dioxide – whose structure is analogous to that of the main constituent of magma, silicon dioxide. They found that even at pressures of 100 GPa (found in the mantle at a depth of 2200 km), the germanium atoms never have more than six neighbours. This coordination is similar to that measured at 15 GPa, suggesting that no special compaction mechanism is at work.

Far-reaching consequences

“Transferring this to silicate magmas in Earth’s lower mantle, this means that magmas with a density equal to or higher than that of the surrounding crystals can only be reached by enrichment of heavy elements like iron,” Spiekermann explains. “The composition and structure of the lower mantle has far-reaching consequences for the global transport of heat and for Earth’s magnetic field.”

James Drewitt, a geophysicist from the University of Bristol, comments, “This is an interesting result because it reduces the likelihood of a density cross-over between oxide magmas and the surrounding solid deep lower mantle”.

Drewitt, who was not involved in the study, also points out that “this result is in direct conflict with recent synchrotron X-ray diffraction measurements of both germanium dioxide and silicon dioxide glass at high pressure”. These studies indicate that local structural units with more than six oxygen atoms occur at ultra-high pressures. While more investigations are needed to resolve this controversy, he concludes, the new approach represents an important tool to study deep planetary interiors.

With their initial study complete, Spiekermann and colleagues are considering more complex materials, such as those that – like natural silicate melts – contain modifying oxide compounds. Studying these requires consideration not only of an atom’s nearest neighbours, but also atoms a bit further away in the “second coordination shell”.

“For example, the degree of polymerization of a network is a second coordination shell effect,” Spiekermann notes, adding: “We will show in the future that Kβ” is sensitive to the degree of polymerization of a glass, which is beyond the capabilities of other X-ray techniques.”

The research is described in the journal Physical Review X.

Printed scaffolds promote precision spinal cord repair

Researchers from California have used a rapid fabrication technique to produce tissue scaffolds that mimic the 3D architecture and mechanical properties of spinal cord tissue. These scaffolds can be used to enable nerve regeneration after acute spinal cord injury (Nature Medicine 10.1038/s41591-018-0296-z).

Spinal cord injury affects hundreds of thousands of people worldwide, with no treatment currently available. Healing is hindered by the lack of nerve regeneration in the injured spinal cord due to factors such as inflammation and glial scarring. Fabrication techniques such as 3D printing provide a means to generate scaffolds that can support and guide nerve regeneration, with the aim of regaining motor function. These scaffolds can be designed and produced to match the size and shape of the injury site.


For the first time, researchers from the UC San Diego School of Medicine and its Institute of Engineering in Medicine have produced a biomimetic spinal cord scaffold utilizing microscale continuous projection printing. This technique allowed precise production of the injury scaffold, in as little as 1.6 s.

Key to the scaffold production was biomimicry — mimicking the natural structure and mechanical characteristics of spinal cord. The researchers produced a scaffold with microscale channels that facilitate axonal regeneration and guide the axons to stay in the same functional tracts as they bridge the injury site. The biocompatible scaffold itself was produced from a sturdy but cell-friendly hydrogel composed of polyethylene glycol–gelatin methacrylate (PEG-GelMA). This gel material supports cell viability and growth, while closely mimicking the mechanical properties of spinal cord tissue.

The researchers used rats with spinal cord injury to test their 3D printed scaffolds. In a biocompatibility experiment, they implanted rats with either the novel scaffold, a simple agarose scaffold or no scaffold at all. Compared with the control groups, the PEG-GelMA scaffolds elicited a reduced immune response from the host, as well as promoting neural regeneration and guiding axons into the scaffold.

The team then loaded neural progenitor cells (NPCs) into the scaffold, facilitating the formation of neuronal relays between the host axonal tracts (above the injury) and NPCs inside the scaffold. In turn, the NPCs send axons outside the scaffold to connect with the intact spinal cord tissue below the injury.

NPCs have been considered for spinal cord repair previously, but implantation within the required time frame is difficult due to the hostile environment of a spinal cord injury. Importantly, the new scaffold not only guided axon regeneration but also protected the implanted NPCs from the inflammatory environment of the injury. Animals implanted with NPC-loaded scaffolds showed significant nerve regeneration and regain of motor function.

The researchers examined the animals six months after transplantation and saw significant physical improvement in the group implanted with scaffold and cells compared with the controls. The authors note that the organization of host regenerating axons and NPC-derived axons through the channels of the scaffold was linear and tightly bundled. Electrophysiological analysis also revealed improved connectivity in rats implanted with NPC-loaded scaffolds, which was lost upon re-transection above the implant site.

Spinal cord injuries often carry high morbidity and poor prognosis, owing to insufficient regeneration of nerves following injury. This new technology may provide a significant step towards improved treatment by creating an environment that can be tailored to specific injuries to foster natural nerve repair. The biomimetic structure, with implanted, supportive neural progenitor cells, attenuates inflammation and promotes nerve guidance and repair.

The research team is currently looking to conduct trials on larger animal models, and aims to take the technology into human trials soon.

Why do urbanites travel so far?

Residents of the largest cities tend to travel further afield for leisure, even though they are environmentally conscious in other ways. Michał Czepkiewicz and Jukka Heinonen investigated why in a systematic review published in Environmental Research Letters (ERL).

Why did you examine why city-dwellers travel more?

There were several reasons. One is that air travel and tourism have high environmental consequences but this aspect is rarely studied; the research usually focuses on promoting tourism or looking at its impacts on destination countries. Only recently have we realized how high the impact on the climate is. According to a recent study, global tourism is responsible for about 8% of the global carbon footprint and it’s predicted to continue growing in the future.

There is now a lot of knowledge on daily travel in cities and the factors that affect it, such as public transportation, urban density, walkability or attitudes. At the same time, long-distance travel has largely been excluded from the equation. Looking for connections between urban form and air travel might seem far-fetched, but the studies we reviewed and our own research in Helsinki show that there’s a significant correlation. It’s an intriguing new topic that is under-studied and potentially relates to important policies.

The discrepancy between pro-environmental attitudes and amount of travel is another interesting topic that has not been much studied. People who are concerned about the environment tend to travel a relatively large amount, with a high carbon footprint. Many people limit their carbon consumption and use alternatives such as walking and cycling, eating vegan, recycling or avoiding generating waste but on holiday they take a break from being eco-friendly.

Going abroad two, three or four times a year, something that used to be very rare and only reserved for cosmopolitan wealthy elites has now become a norm, a basic need or even a social right

Studying the motivations behind travel for those who otherwise engage in low-carbon lifestyles connects to many interesting issues. For instance, how important is travel to human happiness, and is being able to travel required to live a good life? Going abroad two, three or four times a year, something that used to be very rare and only reserved for cosmopolitan wealthy elites has now become a norm, a basic need or even a social right. These are all intriguing aspects.

What’s significant about your results?

Our results highlight air travel and tourism as important parts of the carbon footprint of individuals and households. They also help to identify the social groups that contribute most to the carbon footprint of long-distance travel: the highly educated, high-income urban-dwellers, often young and without children. These groups have a high proportion of people who see themselves as environmentally-friendly and who have a relatively low carbon footprint on an everyday basis. Finally, the results highlight the correlation between urban density and air travel: the more centrally located and dense the urban environment, the more its residents travel by plane, on average.

What action is likely to result from your findings?

Our findings might help to spread awareness of the high carbon footprint of air travel and tourism. Such awareness might be a good first step for changing behaviour among those who are concerned about the environment. Many behaviours, such as driving, eating meat, not segregating waste etc. are now perceived as “dirty” from an environmentalist perspective. Flying is most often not perceived as such.

Our results may help to target awareness-raising campaigns to certain groups of people, namely young, educated and relatively wealthy urban-dwellers from developed countries. This group is responsible for the largest share of emissions but also many of them are concerned about the environment. As such, they have the potential to change their behaviour: choose a train instead of a plane, travel less frequently, or choose destinations nearby. Such a change – to “consume” only as much travel as is necessary or sufficient – could be part of a broader trend towards a degrowth (or post-growth) economy.

A common suggestion for policymakers is to increase taxation of aviation, not only through a carbon tax but also a value-added tax on kerosene or plane tickets. Private air travel is highly income-elastic: spending increases as incomes rise. So higher ticket prices could limit travel somewhat. However, the effect could be limited to the less wealthy, and as such may not be equitable. Consequently there should be action towards limiting consumption among the wealthier part of society, besides taxation and raising prices.

Flying is also a substitute for private driving, and it has a high emissions intensity, meaning that if reduced driving comes in parallel with increased flying, overall emissions might well increase rather than decrease. This should be kept in mind when designing greenhouse gas mitigation policies.

There are no strong implications for urban planning but some urban conditions – lack of green space, high noise levels or population density – might provoke people to escape the city and take frequent breaks from urban life. We found some indication of such an “escape effect” in the interviews, so in future we may formulate some more refined suggestions for planners. On the other hand, the link between an urban environment and air travel may also be related to dispersed social networks of urban residents, globalization of their lifestyles, and a tendency to seek diversity and novelty on vacation as well as in their everyday lives.

How will you take your research forward?

We are currently conducting interviews with residents of Reykjavik Capital Region. Some questions are impossible to answer with cross-sectional surveys; we need to talk to people to understand their personal motivations and identify structures that influence their behaviour. The qualitative part will help us to advance the theory, answer questions we are already posing, and ask better questions in the future.

We would also like to replicate our study in more cities. First, we would like to target other Nordic capitals – Stockholm, Oslo, Copenhagen – and then other regions, including smaller cities without a major airport nearby. Currently, we are mostly interested in wealthy societies because of their high impact, but the growth of outbound tourism and air travel in regions such as Southeast Asia, Eastern Europe and Latin America is also increasingly relevant.

Speaking personally, I, Michał Czepkiewicz, would like to study this topic in Poland and other Eastern European countries. For a long time, there has been a mentality of catching up with the West in terms of wages, infrastructure, and consumption. The reality is that we have already reached a level of consumption very similar to those in the West, at least compared to the rest of the world, and surpassed sustainable levels of consumption. It would be good if people in Poland realized that we consume too much, not too little – just like the rest of the developed world.

Once a physicist: Will Foxall

What sparked your initial interest in physics?

When I was about seven, I went on a tour of Jodrell Bank Observatory with my primary school headteacher and her kids. I remember loving every bit of it, wanting to know how everything worked and then coming home with a pack of glow-in-the-dark stars with which I covered my bedroom ceiling. I even copied some of the constellations on the packet, so that my room had its own Plough and Cassiopeia.

Over the next few years as my dad’s photography business became increasingly digital, I cobbled together and upgraded my own computer from the outdated parts. It frequently broke and I developed a knack for problem-solving to get it up and running again.

As a sixth-form student I was fortunate enough to have really inspiring maths and physics teachers, Donald Steward and Lisa Greatorex, who made these subjects not only interesting, but fun. At the same time, Brian Cox started making appearances on BBC’s Horizon and, while I wouldn’t attribute too much of my decision-making process to a TV presenter, I guess you could class me as one of the early physics students in the “Brian Cox Effect”.

What did your physics degree focus on? Did you ever consider a permanent academic career?

While I discovered a fascination for particle physics and quantum mechanics in particular, I never lost that childhood wonder about space. For my final-year project, I found myself peering into the sky through the University of Bristol’s optical telescope on the roof of the physics department. We were asked to calibrate the sensor and then test it with some observations, which granted us special access to the roof at night. I remember getting particularly twitchy during consistently cloudy nights in the month before our project was due, which nearly jeopardized our final mark. But we got a window of clear nights at the last minute and managed to secure a first for the project.

At the end of my BSc I found myself keen to apply some of my knowledge in some different fields. My best marks were in the practical elements of my degree such as my final-year experiments and so further research was not for me. Retrospectively, perhaps the most useful bits of my degree were the programming and Physics World science-communication modules that the university was running.

How did your interest in the arts, especially television and film technologies, emerge?

I come from a very creative family. My parents are both art teachers turned photographer and graphic designer, and my sister has worked with a host of performing-arts organizations. Some of that creativity must have rubbed off on me along the way as I spent my teenage years playing music and creating short films with my friends.

After graduating from university, I was looking for opportunities that could use the analytical approach gained from my physics degree, while reconnecting with the arts that I enjoyed as a teenager. As a result, I joined Bristol’s television industry as a runner and worked my way up through a number of technical roles, looking after some exciting natural history shows for the BBC and multi-screen cinemas in Japan.

When 360 video and VR began to boom, I started app development which introduced me to some of the innovative creative technology work that happens in Bristol.

What does your current role as “creative technologist” entail? What projects are you working on at the moment?

The South West Creative Technology Network is a partnership between four universities (UWE, Bath Spa, Falmouth and Plymouth) as well as the Watershed media centre in Bristol and the Kaleider production studio in Exeter. It’s a knowledge-exchange programme that creates connections across academia and industry in the South West to foster innovation in three areas of interest: immersion, automation and data. As a creative technologist, I get involved in all sorts of fascinating conversations with research fellows and prototypers working on these themes. I try to identify the technical hurdles they may encounter and then help work out the best route to tackle them as they arise.

The projects we’re working on include the use of motion-capture data to improve mobility in the elderly, the creation of new musical instruments in virtual reality and extending the story of a theatrical performance beyond the confines of the stage.

How has your physics background been helpful in your work, if at all?

I’d say that, in particular, I improved two skills through studying physics, and they have been invaluable in the path I have chosen since my degree. First, a solid understanding of some of the core concepts that many specific areas of physics build on, whether that’s mathematical methods or how to derive equations. Second, and the most transferable skill, is the ability to break a problem down into a variety of approaches and then systematically solve it.

Any advice for today’s students?

If you have an idea of where you want your interests to take you, then stick to that goal and go for it. That’s what got me to the university I wanted to go to, studying the degree I picked. However, if you don’t, that’s where it gets really exciting; most of my decisions since graduating have been what I consider the “best choice available to me at the time”, which has led me to where I am now. And I’m very happy with that!
