
Modelling insights help vacuum end-users save money and fast-track innovation

Developing and implementing a new vacuum system from scratch – even optimizing an existing vacuum installation – is rarely a straightforward exercise. Systems-level planning is mandatory, with due consideration assigned to the vacuum chamber design, pumping set-up, pressure gauges, materials inventory, leak detection and all manner of ancillary equipment. Notwithstanding those fundamental technology decisions, a host of other parameters must also come into play, including capital/operational costs, energy consumption, size and footprint, maintenance intervals and acceptable levels of noise and vibration. More often than not, it seems, vacuum systems are also complex systems, with each of the aforementioned variables “weighted” differently depending upon an individual project’s objectives and performance specifications.

For the vacuum specialist Edwards, which serves a diverse base of scientific, instrumentation OEM and industrial end-users, the key to success lies in helping customers to figure out the right product and the right functionality for the job in question – and as quickly and cost-effectively as possible. Underpinning that commitment is a suite of in-house software tools for vacuum system design and a “modelling-as-a-service” capability that lets customers access the broad domain knowledge and technical know-how of Edwards applications, engineering and science specialists.

“Our modelling service enables customers to understand what’s possible – and, just as importantly, what’s not – versus the provisional technical specifications of their vacuum system design,” explains Russell Coleman, scientific market sector manager at the Edwards Global Technology Centre in Burgess Hill, UK. While managing expectations is always helpful, the evidence-based insights from the Edwards modelling team also mean that customers – especially those who are not necessarily vacuum experts – avoid over-engineering their vacuum requirements. That can mean significant savings on upfront capital spend as well as reduced running and maintenance costs over time.

“We can get to the right answers faster by working in partnership with the end-user from the early stages of their vacuum project,” adds Coleman. “That translates into a scalable competitive differentiator – one that puts us in an advantageous position to win business at the end of the design process.”

Software at your service

There are three main building blocks to Edwards’ in-house modelling software: PumpCalc provides an entry-level toolkit for modelling simple vacuum systems, with some Edwards field-sales engineers trained in its use to support routine customer requests; TransCalc is a more advanced, network-based software module tailored for the design of complex vacuum systems – everything from an ultrahigh-vacuum accelerator beamline to a steel degassing facility; while HSM Toolkit is a more focused software tool to support mechanism modelling within advanced split-flow turbopumps.

HSM software toolkit

“We are able to model all the core physics of the pump-down in a complex vacuum system,” notes Coleman. “That includes significant progress on dynamic modelling, tracking time-dependent properties like chamber outgassing, reductions in gas temperature inside the chamber, as well as changes in the rotational speed of the high vacuum and backing pumps.”

With pump-down time a rate-determining step for the cycle-time of many vacuum applications, that modelling capability is being put to good use by a sizeable cross-section of Edwards’ research and industry customers. “Our in-house software means that instrumentation OEMs, for example, can try out new ideas to sanity-check their prototype vacuum system designs,” notes Coleman. “Ultimately, the use of modelling means less trial-and-error during preproduction, effectively streamlining the product development cycle so the customer can home in on workable and manufacturable solutions sooner.”
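The core pump-down physics that Coleman describes can be sketched, at its very simplest, with the textbook constant-speed approximation p(t) = p₀ exp(−St/V), which ignores outgassing, conductance losses and pump-speed variation (precisely the dynamic effects the Edwards tools do model). All numbers below are hypothetical:

```python
import math

def pump_down_time(volume_l, speed_l_per_s, p_start_mbar, p_target_mbar):
    """Time (s) to pump a chamber from p_start to p_target, assuming a
    constant effective pumping speed S and neglecting outgassing:
    t = (V / S) * ln(p_start / p_target)."""
    return (volume_l / speed_l_per_s) * math.log(p_start_mbar / p_target_mbar)

# Hypothetical example: a 50-litre chamber on a 10 l/s pump,
# roughed from atmosphere (1013 mbar) down to 1e-2 mbar
t = pump_down_time(50, 10, 1013, 1e-2)
print(f"{t:.0f} s")  # → 58 s
```

In practice the effective speed falls as the pump approaches its ultimate pressure and outgassing takes over, which is why the dynamic, time-dependent modelling described above is needed for realistic cycle-time estimates.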

It cuts both ways

A case study in this regard is Pyramid Engineering Services, a UK designer and integrator of high-precision welding systems for the hermetic sealing of metal-can semiconductor and electronic packages. Those production systems – which span laser, seam, projection, spot and cold welding techniques – can be supplied as stand-alone workstations or custom-designed to integrate with other turnkey systems like gloveboxes, airlocks or vacuum gas-bake ovens in a customer’s existing manufacturing facility.

“We’ve been a customer of Edwards for many years, so we know their vacuum products and their capabilities really well,” explains David Watkins, Pyramid’s operations manager. In fact, it’s very much a two-way relationship, as Pyramid is also a supplier to Edwards (its projection welding systems being used in the manufacture of Edwards’ latest line of vacuum gauges). For Watkins the customer, though, the Edwards modelling-as-a-service capability has proved to be invaluable for the redesign of the vacuum systems on Pyramid’s industrial welding machines – either as a natural progression on the company’s own technology roadmap or in response to bespoke one-off requests from Pyramid’s end-users (for example, a non-standard geometry for a vacuum bake-oven or an alternative surface treatment of the chamber walls).

Pyramid’s industrial laser-welding machines

“In terms of our inputs,” says Watkins, “the Edwards modelling service works like a ‘black-box’ solution. All we need to do is send across the engineering drawings of the new-look vacuum chamber, outlining the vacuum level we want to achieve and on what timescale.” The Edwards applications team will then do the rest, using their in-house software to generate an optimum backing/turbo pump combination plus cost profile. For Watkins, it’s all about due diligence. “Those quantitative, physics-based insights provide reassurance to our customers,” he adds, “confirming that we can deliver a welding system that meets their required vacuum performance at the right price point.”

Blue-sky research

Meanwhile, in an altogether more rarefied context, Edwards’ modelling capabilities are helping physicists at the University of Oxford, UK, address some complex vacuum challenges of their own. The physicists in question are working on the Square Kilometre Array (SKA), an international research effort to build the world’s largest radio telescope which, when it comes online later this decade, will enable astronomical surveys of the sky with unprecedented detail and at unmatched speeds.

The vacuum requirement centres around an SKA work package that Oxford is managing for the design of a multifrequency receiver module – known as a single-pixel feed (SPF) – that will eventually be deployed at scale across the SKA’s network of 133 radio antennas in South Africa’s Karoo Desert (an array that will extend over a 150 km diameter with a total collecting area of 32,600 m²).

SPF receiver module

The SPF module itself is designed to house up to five detector feed horns (covering different GHz frequency bands) in a single, modular cryostat operating under vacuum. Herein lies the problem. “Essentially we realized, given the initial constraints on our cold-head’s cooling capacity, that achieving a low enough pressure during cool-down using a scroll pump alone would be difficult,” explains Jamie Leech, Oxford’s SKA SPF Band 345 technical lead. “We have to maintain a low enough cryostat pressure during the cool-down or else residual gas conduction will dominate and prevent us from achieving working temperatures in the receiver module [around 10 K].”

The workaround, says Leech, was initially elaborated by the Oxford SKA team, then subsequently modelled by Edwards via a series of pump-down calculations to confirm the suitability of a turbopump (bolted to the side of the central vacuum hub) to get the pressure down below 1×10⁻⁴ mbar before and during cool-down. “The scroll pump would yield a starting pressure in the region of 3×10⁻² mbar,” notes Leech. “That isn’t low enough to start cool-down without the risk of excess heat load or substantial cryo-condensate formation, but it is low enough to bring on a small high-vacuum turbopump, with the function of the scroll pump switching from roughing to backing.”
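The two-stage handover Leech describes can be illustrated with the simple constant-speed pump-down law (the article does not give the cryostat volume or pump speeds, so the figures below are purely hypothetical, and the approximation deliberately ignores outgassing and speed roll-off near base pressure):

```python
import math

def time_to_pressure(volume_l, speed_l_s, p0_mbar, p1_mbar):
    # Constant-speed, outgassing-free approximation: t = (V/S) ln(p0/p1)
    return (volume_l / speed_l_s) * math.log(p0_mbar / p1_mbar)

V = 100.0  # hypothetical cryostat volume, litres

# Phase 1: scroll pump roughs from atmosphere to 3e-2 mbar (assumed 5 l/s)
t_rough = time_to_pressure(V, 5.0, 1013.0, 3e-2)

# Phase 2: turbopump takes over, 3e-2 mbar down to 1e-4 mbar (assumed 60 l/s),
# with the scroll pump now backing the turbo rather than roughing the chamber
t_turbo = time_to_pressure(V, 60.0, 3e-2, 1e-4)

print(f"roughing {t_rough/60:.1f} min, turbo phase {t_turbo:.0f} s")
```

The point of the sketch is the crossover logic rather than the numbers: the scroll pump does the high-gas-load work down to its ~10⁻² mbar regime, after which the turbopump takes the chamber into the high-vacuum range needed before cool-down.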

He concludes: “The SKA project needed accurate quantitative modelling, so the pump-down calculations were vital in making the case for the cryostat’s revised pumping set-up.”

  • Edwards will be exhibiting at Lab Innovations in Birmingham, UK, from 3-4 November and Precisiebeurs in ‘s-Hertogenbosch, Netherlands, from 10-11 November.

‘Mellow’ supermassive black holes could be creating mysterious cosmic particles

“Mellow” supermassive black holes (SMBHs) at the centres of some galaxies could be the source of mysterious low-energy gamma rays and high-energy neutrinos that have been seen by some observatories, according to physicists in Japan and the US. Shigeo Kimura at Tohoku University and colleagues came to this conclusion by developing models of processes that occur when matter falls into SMBHs. Their results provide guidance to future experiments that will search for the sources of cosmic gamma rays and neutrinos.

The universe is filled with energetic cosmic particles including photons, neutrinos and protons. These are believed to be produced by violent astrophysical processes such as those occurring in exploding stars (supernovae) or in active galactic nuclei (AGN). The latter are regions found at the centres of some galaxies, where material accretes onto an SMBH forming a hot, extremely bright plasma.

Today, astrophysicists do not understand the origins of all the particles that have been seen by detectors such as the IceCube Neutrino Observatory and the Neil Gehrels Swift Observatory, which detects gamma rays. For example, the origin of soft gamma rays in the megaelectronvolt energy range is a mystery, as are the origins of high-energy neutrinos in the petaelectronvolt range.

Quiet objects

Kimura’s team focused on SMBHs that are much quieter than a typical AGN – these are mellow objects that accrete lower quantities of material. Since the plasma that forms around these bodies is less dense, it is far less efficient at radiating heat, and can reach temperatures of tens of billions of degrees.

Under these conditions, photons are emitted by fast-moving electrons as they change direction. These photons are then scattered by other fast-moving electrons in the plasma, which can boost photon energies into the megaelectronvolt range. In addition, protons within the plasma can be accelerated to extremely high energies, via processes including turbulence and magnetic field reconnection. As the protons collide with other baryonic particles, they can create neutrinos in the petaelectronvolt range.

Although mellow SMBHs are far dimmer than AGN – which produce higher-energy gamma rays – they are believed to be far more numerous throughout the universe. As a result, these quiet black holes could account for the observed low-energy cosmic gamma rays and high-energy neutrinos.

The team’s predictions cannot be confirmed by observations today – this will have to wait for the next generation of gamma-ray and neutrino observatories to come online.

The research is described in Nature Communications.

Physicists get under the skin of apple growth

Researchers in the US have used the physics of singularities to study the recess, or cusp, that forms around the stalk of an apple. Based on field and laboratory experiments as well as simulations, they determined that the cusp is self-similar, meaning that it looks the same at different stages of the apple’s growth. They also investigated the emergence of multiple cusps, as are sometimes seen in real fruit.

Singularities are points at which a certain quantity becomes infinite or ill-defined. The infinite space-time curvature thought to exist at the centre of black holes is one well-known example, but singularities also crop up in other areas of physics. In biology, meanwhile, examples include the sharp folds on the surface of the brain and the way bacteria clump together in the presence of certain chemicals.

Move over, Newton

The latest research sees Lakshminarayanan Mahadevan and colleagues at Harvard University explore the singularity created by the abrupt change in the orientation of the apple’s surface at the base of its stalk. In a paper published in Nature Physics, they describe how this singularity develops as the apple grows from a slight bulge in the stem of a blossom into a fully-formed fruit with a seed-containing core, a fleshy cortex surrounding it and a tough outer skin.

To make their observations, Mahadevan and colleagues began by studying the shapes of 100 apples picked at different stages of their growth from the orchard of Peterhouse, a college at the University of Cambridge, UK. By slicing each apple in half, they created a series of cross-sections, then arranged them in order as if they were stills from a film depicting the changing shape of a single apple.

The team found that apples measuring less than about 1.5 cm across displayed no discernible cusp, while those with a diameter of more than 3 cm had a distinctive dip at the base of the stalk. This is because in the early stages of the apple’s growth, the contour of the peel varies smoothly. As the cortex starts to expand more quickly than the core, however, a bulge forms away from the core and a discontinuity appears in the apple’s perimeter.

Harvesting data

Next, the researchers analysed the apple’s shape by defining its cross-sectional profile as a one-dimensional curve with a height that depends on both the distance from the stalk and the size of the apple. After generating Taylor expansions of the height and distance variables in terms of the size, they succeeded in expressing the apple’s profile in a self-similar way.

To establish whether real apples also display this self-similarity while approaching a cusp-like singularity, Mahadevan and co-workers rescaled the height and stalk-distance axes using appropriate coefficients and then plotted each apple’s profile. They found, as expected, that the measured profiles all overlapped with one another near the cusp – tracing out what they describe as a “universal curve”.
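The rescaling procedure can be sketched as a toy calculation. Near a self-similar singularity the profile is assumed to obey h(x) = R·H(x/R) for a universal function H, so dividing both axes by the apple size R should collapse every profile onto one curve; the exponent 2/3 below is purely illustrative, not the paper’s measured value:

```python
# Toy illustration of self-similar collapse onto a "universal curve".
# Assumed form: h(x) = R * H(x / R), with H(u) = u**(2/3) for illustration.
def profile(x, R):
    return R * (x / R) ** (2 / 3)

xs = [i / 100 for i in range(1, 101)]  # rescaled distances from the stalk
sizes = (0.5, 1.0, 2.0)                # three hypothetical "apple" sizes R

# Rescale both axes by R: (x, h) -> (x / R, h / R)
rescaled = [[profile(x * R, R) / R for x in xs] for R in sizes]

# Every rescaled profile lands on the same curve, independent of R
for curve in rescaled[1:]:
    assert all(abs(a - b) < 1e-9 for a, b in zip(rescaled[0], curve))
print("profiles collapse onto a universal curve")
```

The same collapse test applied to the measured apple profiles is what produces the overlap near the cusp that the team reports.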

The researchers went on to confirm this self-similar scaling in three ways. First, they carried out a dynamical analysis on an expanding sphere with its growth restricted at the centre but constant further away. Next, they created a mechanical simulation that treats apples as neo-Hookean materials, meaning their stress-strain curves plateau as they grow. Lastly, they performed experiments using artificial apples made from polymer spheres that swelled when immersed in hexane. By using a second, un-swellable polymer to represent the stalk, they found that a cusp formed within an hour of immersion in the solvent.

On the cusp of greatness

As a final step, the researchers investigated apples with multiple cusps, each of which creates a separate groove on the fruit’s upper surface. Using simulations, they showed that the quantity of cusps depends both on the number of carpels – that is, the apple blossom’s seed-bearing structures – and the ratio of the apple’s diameter to the diameter of its stalk. They confirmed this diameter-ratio dependence in further experiments with the polymer spheres, and they claim that it is also present in their data from real apples.

Mahadevan says that the research was prompted by simple curiosity, rather than any practical end. But he argues that by quantifying apple growth, he and his colleagues have sharpened some outstanding questions – including why the region near the stalk grows more slowly and what biochemical processes are involved. “This will hopefully give us a still deeper view of how nature works,” he says.

Jens Eggers of Bristol University in the UK is enthusiastic about the research but questions whether the Harvard models fully agree with the field data. In particular, he says, it is not completely clear whether the results from real apples show a correlation between cusp number and diameter ratio.

But, he adds, extracting quantitative, testable results from biological data is not easy. “By this measure the paper is doing quite well,” he says.

Frames of reference in science and culture, and how they influence progress

As a Christian, I often consider my faith to be my frame of reference. I’m a Yoruba from Ilesha in Osun State, Nigeria, and was born and raised in the city of Lagos, but I moved around a fair bit during my early years. I attended two British international schools in Nigeria before going to the UK, where I completed my secondary education at Sevenoaks School in Kent. My first degree was in physics at Imperial College London.

I mention all this because I think it strongly informs my engagement with a fundamental concept that the theoretical cosmologist Chanda Prescod-Weinstein explores brilliantly in her new book The Disordered Cosmos: a Journey into Dark Matter, Spacetime, and Dreams Deferred. The concept is that the development of knowledge, be it scientific or any other kind, cannot be completely separated from its social, historical or political context. While this idea is already backed by much historical and scientific evidence, The Disordered Cosmos portrays it from many more viewpoints than I have ever considered.

After beginning the book with a section on cosmology and particle physics, Prescod-Weinstein delves into the culture of the mainstream scientific community, and how it has influenced the progress of science. To illustrate this, she draws deeply on her personal experience as an academic physicist who is a Black feminist, raised by generations of powerful women, and a descendant of Indigenous Africans. Prescod-Weinstein rigorously and extensively makes the case for an urgent paradigm shift in the way we engage with science, knowledge and technology, and how we define what is now popularly known as Afrofuturism – the exploration of the interplay between African culture and technology.

Frames of reference, as Prescod-Weinstein lays out, are present in many wide-ranging fields of study. Perhaps most obviously to physicists, they are a core concept in special and general relativity, but they recur elsewhere: in the highly abstract group theory they are called representations; in the Bible they are referred to as prophecies and visions; in modern English they are often called perspectives; and in feminist theory they are called standpoints.

Although these different examples cannot be mapped perfectly one-to-one onto each other – due to their varying contexts, which inform their axioms and resulting inferences – they share a fundamental similarity: they primarily mean to look at something that is either concrete or abstract from just one among many possible viewpoints. Prescod-Weinstein draws several parallels between the examples, and although I do not agree entirely with all the analogies and equivalences made, I do believe there is a lot of truth in her analysis, which is deserving of further philosophical and scientific study.

Among the most important questions that Prescod-Weinstein discusses is how we can achieve a world where people can hold opposing views without erasing each other’s identities, or forcefully imposing one belief system over another. The Disordered Cosmos suggests one salient solution: sacrifice.

From the perspective of emotional and intellectual resilience, Prescod-Weinstein describes “scientific…and emotional housework”, detailing various personal experiences from her career. She explains how they showed her not only that science is a collective effort that includes non-scientists, but also that pushing for better engagement with science for historically marginalized people means sacrifices must be made. The additional burdens experienced by researchers from minoritized groups – giving up research time to serve on “diversity, inclusion and equity” committees, acting as a mentor to researchers from minority groups – are rarely acknowledged by hiring committees or in performance metrics.

These anecdotes reminded me of when I had my first major paradigm shift, during my time at Sevenoaks School, prompted by my realization that we live in a world of finite resources driven by different interests competing to control them. The International Baccalaureate (IB) programme I studied there focuses on stimulating the mind not just towards learning, but also probing and questioning the process of learning itself. Interestingly, Prescod-Weinstein invokes the concept of “ways of knowing”, which I first came across in an IB module on the theory of knowledge. The Disordered Cosmos is only the second place I have ever seen the phrase used in that way.

Elsewhere in the book, Prescod-Weinstein describes some of the struggles she faced while studying physics and astronomy at Harvard University. Concerned about the treatment of Black employees, she fought tirelessly for the Harvard Living Wage Campaign, eventually winning higher wages for janitors, but these extra efforts impacted her performance academically. She still completed her course, however, and the year she graduated with a degree in astrophysics, she was the only Black American in the US to do so.

As an indigenous Nigerian, I experienced a huge culture shock when I moved to the UK and started school at Sevenoaks, so I feel keenly attuned to these struggles. I often feel like nothing could ever have prepared me for Sevenoaks. Being an indigenous Nigerian means that, unlike Prescod-Weinstein, my predecessors were never barbarically shipped from the coasts of West Africa for slave labour. Nevertheless, I have also suffered the detrimental effects of capitalism.

The sections of The Disordered Cosmos that recount the careless mining of uranium on Indigenous reservations and the fallout left for Pacific Islanders after nuclear weapons tests reminded me of how an Italian company in 1988 dumped hazardous waste in the land of Koko within Warri, where my mother is originally from. This poisoned the rivers and surrounding lands and caused international uproar.

Another probable result of urbanization and the associated pollution that I personally had to live with is that my younger brother and I had asthma as babies until our preteen years. We had inherited it from our father, whose asthma was so severe that he had to carry an inhaler. My mum prayed earnestly that we would all be healed, and this is exactly what happened. This is one of the many miracles we have experienced, which is why I believe in Jesus Christ. This is one area where my beliefs diverge from Prescod-Weinstein, who is a humanist.

Nonetheless, I have immense respect for her, and with this book she achieves an astonishing blend of scientific depth and an intricate understanding of the interplay between science and society. I would like to live in a world where scientists – and people in general – can agree to disagree cordially and respectfully. Like Prescod-Weinstein, I want to believe that science and technology can help to bring this about. It is something I continue to work on, and I do not think I or any Africans need permission from others to define who we are, or how we make our future engagements with science.

  • 2021 Little, Brown £20hb 336pp

Solar farms keep the neighbourhood cool, inspecting solar panels in broad daylight

The Sun may be fading fast here in the northern hemisphere, but the number of solar panels installed in the UK and elsewhere continues to grow by leaps and bounds. As the area of land covered by solar panels increases, have you ever wondered if converting all that solar energy into electricity is affecting the local environment?

The answer is yes, at least in arid ecosystems, according to a team of researchers at the UK’s Lancaster University, Ludong University in China, and the University of California Davis in the US. They studied two large solar farms – one in California and the other in China’s Qinghai province – and found that the facilities were cooler than their surroundings. This was based on satellite temperature observations taken before and after the installation of solar panels and, in the case of the California facility, measurements taken on the ground.

Both solar farms created “cool islands” that extended about 700 m from the perimeters of the facilities. The cooling effect was significant, with land surface temperatures about 2 °C cooler 100 m from the edges of the solar farms.

While lower temperatures could be a good thing, the researchers caution that they could have negative effects on some flora, fauna and agriculture – which should be accounted for when sites for solar farms are chosen. You can read more about the study here.

Blinded by the light

Spring cleaning is a ritual in many cultures and at higher latitudes I think its origins might be related to the return of bright sunshine in the spring, which reveals just how dirty a home has become over the winter. You might think that bright sunshine is ideal for looking for defects in installed solar panels – but it turns out that broad daylight is the worst time to do an inspection. This is because the current technique identifies defects using electroluminescence, which involves detecting tiny amounts of light under darkroom conditions.

Now, scientists at the Nanjing University of Science and Technology in China have developed new hardware and software that can detect and analyse defects in solar panels in bright light. The technique involves stimulating electroluminescence using a modulated electrical current. This causes the modulated emission of light from the panel, which is detected using a high-speed camera. By focussing on the changes in light intensity from the panel – and filtering out some sunlight – the team was able to spot defects. You can read more about the imaging system here.


Electrons flow like a fluid in a metal superconductor

A team of researchers in the US has discovered that electrons in a transition metal superconductor called ditetrelide flow like a fluid rather than behaving as individual particles. The finding, which is connected to the physics of electron–phonon liquids, could shed fresh light on the fundamental properties of these technologically important materials and their potential applications.

Electrons usually travel through metals via diffusion, getting scattered by phonons (quasiparticles that arise from vibrations of the crystal lattice) along the way. A recently developed theory, however, suggests that under certain conditions, a coupled electron–phonon liquid can form in which the electrons transition from a diffusive (particle-like) flow to a hydrodynamic (fluid-like) one. In this case, the electrons would flow inside the metal like water flows through a pipe.

The theory also predicts that these electron–phonon liquids should form when certain other interactions (including Umklapp electron–electron scattering) are suppressed, allowing electrons to transfer momentum to the material’s crystal lattice. The electrical and thermal conductivities of such liquids should be higher than those of conventional (Fermi) liquids, in which electrons propagate through metals with weak electron–electron correlations. Until the current study, however, such liquids had not been seen in the laboratory for lack of suitable materials to experiment on.

Three different experimental techniques

Researchers led by Fazel Tafti of Boston College have now found evidence for an electron–phonon liquid in niobium germanide (NbGe2), a superconducting metal also known as ditetrelide. The team studied the behaviour of electrons in this material using three different techniques. The first, quantum oscillations, revealed that the effective mass of electrons was three times higher than the expected value, implying that electron–phonon interactions are present. Second, electrical resistivity measurements revealed a discrepancy between the experimental data and the values expected for standard Fermi liquids. Finally, Raman scattering showed a change in the vibration of the NbGe2 crystal thanks to the fluid-like flow of electrons. In addition, the team found that the Raman spectra of the phonons at different temperatures fit best in a model that takes into account phonon–electron coupling.

The researchers, who report their work in Nature Communications, say that it would now be interesting to conduct more direct experiments on NbGe2 to verify the hydrodynamic behaviour of its electrons. “Our work implies that the heavier-than-expected effective electron mass comes from strong electron–phonon interactions and we have demonstrated this for the first time in a metal superconductor,” says team member Hung-Yu Yang. To back up this finding, Yang suggests that one possible test would be to shrink the sample size to the nanoscale and see if it behaves differently, just as it becomes more difficult for water to flow through a pipe as the pipe gets narrower. “Another direction would be to find more electron–phonon liquid candidates using the design principle we have proposed,” he tells Physics World.

Why the UK should build its own X-ray free electron laser

What is an X-ray free electron laser (XFEL) and what kind of science can be done there?

Adam Kirrander: An XFEL is a linear electron accelerator that generates light via a process called self-amplified spontaneous emission or SASE. As a technology it falls somewhere between lasers and synchrotrons and produces light at energies from hundreds of electron volts up to many thousands of electron volts. The light is coherent and emerges in extremely short (sub-femtosecond) pulses. XFELs are also very bright, being many, many orders of magnitude brighter than a synchrotron. 

These unique properties open the door for new research across a broad range of physics, chemistry and biology. We can study chemical processes such as catalysis, study quantum materials and determine the structures of biological molecules. Moreover, XFELs can be used to study matter under extreme conditions that normally only exist in the interiors of stars or giant planets – and they can also be used to probe beyond the Standard Model of particle physics.

The UK is a member of the European XFEL in Germany and British scientists can have access to other XFELs worldwide. Why does the UK need its own facility?

Jon Marangos: Existing and future XFELs all have different characteristics, they are not all the same. I use the Linac Coherent Light Source XFEL in California because I need its sub-femtosecond pulses to study really fast electronic dynamics. So today I can’t really use the European XFEL, but that will change in the future.

The number of UK scientists using XFELs has grown rapidly in the past decade from zero to about 500. While many of these scientists use the European XFEL, they move from one international facility to another to get the best possible experimental conditions for their research. If we were to build a next-generation XFEL in the UK, we would expect it to be a fully international facility. 

What would the UK-XFEL look like?

AK: The facility’s electron accelerator would be about 1 km long and would probably be built in a cut-and-cover tunnel. We want to build a superconducting accelerator with a very high pulse rate. Today’s XFELs run at about 100 Hz but we want to reach the 100 kHz level. The accelerator will drive multiple undulators so we can simultaneously deliver X-rays at different energies to many different experiments in a process called multiplexing.

Our vision is a next-generation facility that does some things that are not currently possible at existing XFELs. One thing we are looking at is the possibility of bringing two X-ray pulses of completely different photon energies together in one experiment or combining electron beams with X-ray photons.


What have you and your colleagues done so far to make the case for UK-XFEL? 

AK: This is an ambitious undertaking, and it must be done properly. Over the past 18 months, a large team of scientists has made a very broad assessment of how an XFEL could benefit science and technology in the UK and beyond. This science case has been carefully evaluated by an international panel, which has found it convincing. Now, we need to start thinking about the technical design and eventually there will be a discussion about a location but it is premature to do that now.

JM: At this point in our campaign, we are asking for money to do the conceptual design. We will use this money for technical design work, options analysis and to build our business case carefully. Our proposal would then be reviewed by the UK government in about two or three years to decide whether the country will invest in UK-XFEL. The review is conditional on an agreement to fund the next stage of development.

What is next for your campaign?

JM: The next phase will last between two and three years, and it will require some pretty intensive work by some of the UK’s top accelerator scientists and people involved in lasers and photon systems. After that, and if we get the green light, we will go into what’s called a technical design phase where the full details of the facility will be worked out. That will typically take another two years and result in an even heftier volume of material, which would essentially become the blueprint for building the facility. A site can be selected while the technical design is being prepared, and construction can then start. If everything goes smoothly, we could have first light at UK-XFEL in 10 years, which would be consistent with the timescales of other XFELs.

You mentioned choosing a location. Do you see UK-XFEL located at a national lab like Daresbury or Rutherford Appleton, or could the facility be built at a university?

JM: Because of the size of the machine, it’s probably not going to be easy to locate it near the average university. But in principle, yes, it could be built in many locations in the country. It could also be built at one of the existing national research facilities – but it doesn’t have to be. Given the UK government’s current agenda to “level up” deprived parts of the country, UK-XFEL could even become part of a new regional science facility – which I think could be a very good thing.

Past, present and the future of gas aggregation sources for nanoparticle synthesis

Want to learn more on this subject?

In this seminar, Yves Huttel will present the gas phase synthesis of nanoparticles from a general point of view, including historical aspects, main characteristics and examples of nanoparticles generated with this technique [1].

After this introduction, Yves will present some recent studies performed using gas aggregation sources (GAS). He will focus on the proposed applications and the challenges that the technique faces in materials science and nanotechnology. In particular, he will discuss the possible reasons why the GAS has not yet penetrated the industrial sector, although its use for several applications has been proposed.

We will explore some of the issues that may explain why the GAS is not implemented in industrial processes, such as the limited nanoparticle yield and the short-term stability. Possible solutions to overcome these limitations will be presented, such as the use of a full-face erosion magnetron and the injection of controlled doses of gas impurities [2]. You will see that stable and high fluxes of nanoparticles open the route to real applications.

[1] Gas Phase Synthesis of Nanoparticles, Wiley-VCH Verlag GmbH, 2017.
[2] Y Huttel et al., MRS Communications 8, 947 (2018).


Yves Huttel received his PhD from the University of Paris-Sud, Orsay, France. After his degree he worked at the Synchrotron LURE, France, at the University of Paris-Sud, France, and at the ICMM-CSIC, Spain. He was also a postdoctoral researcher at the Synchrotron of Daresbury Laboratory, UK, before returning to the CSIC at the IMM. He joined the Surfaces, Coatings and Molecular Astrophysics Department at the ICMM, which belongs to the Consejo Superior de Investigaciones Científicas (CSIC), Spain, with a Ramón y Cajal Fellowship. Since 2007, he has been working at the ICMM as a permanent scientist and leads the Low-Dimensional Advanced Materials Group. His research focuses on low-dimensional systems including surfaces, interfaces and nanoparticles, as well as XMCD, XPS and nanomagnetism.



Novel decoder helps people with paralysis click-and-drag a computer cursor using just their thoughts

Disability can affect anyone, either directly, over the natural course of our lives, or indirectly, by knowing or caring for a person with a disability. It’s difficult to fathom how profoundly different (and challenging) daily life is after stroke or spinal cord injury, or for those with cerebral palsy or amyotrophic lateral sclerosis (ALS).

Computers can significantly improve the quality of life for people living with motor impairment, by allowing web access to social media, games and messaging, for example. But after a catastrophic injury or disease, actions that we typically take for granted, like clicking or scrolling, become ferociously tricky for those with impaired hand function.

A brain–computer interface (BCI) is a promising and increasingly popular avenue for assisting and improving control following motor paralysis. People with paraplegia or quadriplegia, for example, have used BCIs to move computer cursors with their thoughts for decades; yet they have been unable to click-and-drag.

In a paper published in the Journal of Neural Engineering, a team of neural engineers at the University of Pittsburgh describe a new algorithm for deciphering brain signals in BCIs. By applying machine learning techniques to data recorded from an implanted BCI, the researchers improved cursor control and computer accessibility for people who are unable to move a mouse physically.

Decoding brain signals

Our thoughts and behaviours arise from trains of action potentials flashing across vast, interconnected networks of neurons. A BCI measures those brain signals, analyses them for certain features, and then translates the extracted features into commands to execute the desired action. Thus, BCIs circumvent the normal output pathways of muscles, which illness or infirmity may compromise.
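The measure-extract-translate loop described above can be illustrated with a deliberately minimal sketch. Everything here is hypothetical (the function names, the spike counts and the threshold are illustrative, not taken from the study), but it shows how a single extracted feature – a mean firing rate – might be mapped onto a discrete command:

```python
# Hypothetical sketch of the BCI loop: measure -> extract features -> command.
# All names, values and thresholds are illustrative only.

def extract_features(samples):
    """Mean firing rate over a window of spike counts (one toy feature)."""
    return sum(samples) / len(samples)

def translate(rate, threshold=10.0):
    """Map the extracted feature onto a simple cursor command."""
    return "click" if rate > threshold else "idle"

window = [12, 9, 14, 11]                      # toy spike counts from one channel
print(translate(extract_features(window)))    # high mean rate -> "click"
```

A real decoder works with hundreds of channels and continuous outputs, but the structure – a feature-extraction stage feeding a translation stage – is the same.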

BCIs are not mind-reading devices; they do not extract information from unsuspecting or unwilling users. Instead, the user “works with” the BCI via their brain signals, so that they can actively participate in the world without using their muscles. Making sense of the massive datasets generated by these brain signals requires training periods, which also help us understand some of the mysterious operations of the brain.

First author Brian Dekleva, from the university’s Rehab Neural Engineering Labs (RNEL), used surgically implanted BCIs to decode movement intent in two people with quadriplegia, with the aim of improving the functionality of BCIs for computer access.

Dekleva and colleagues began their investigation by deconstructing a hand grasp. They employed a well-established machine learning technique, Hidden Markov Models, to rigorously characterize the three sub-routines that make up a grasp action: deciding to grasp (onset), holding a grasp (sustained) and deciding to release (offset).
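To make the three-state idea concrete, here is a minimal sketch of hidden Markov model decoding with a Viterbi pass, in pure NumPy. The transition matrix, emission means and observation sequence are invented for illustration and are not from the paper; the point is only to show how per-frame observations are assigned to onset, sustained and offset states:

```python
import numpy as np

# Hypothetical 3-state grasp model: 0 = onset, 1 = sustained, 2 = offset.
# Transitions favour staying in a state, with forward progression only.
A = np.array([[0.8, 0.2, 0.0],
              [0.0, 0.9, 0.1],
              [0.1, 0.0, 0.9]])
pi = np.array([0.9, 0.05, 0.05])    # trials usually begin at onset

def viterbi(log_emit, log_A, log_pi):
    """Most likely state sequence given per-frame emission log-likelihoods."""
    T, N = log_emit.shape
    delta = np.zeros((T, N))            # best path score ending in each state
    psi = np.zeros((T, N), dtype=int)   # back-pointers
    delta[0] = log_pi + log_emit[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A   # N x N: from-state x to-state
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_emit[t]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path

# Toy "neural" observations: rates peak at transitions (onset/offset) and
# settle to a lower level during the sustained hold. Gaussian emissions.
means = np.array([5.0, 2.0, 4.0])
obs = np.array([5.1, 4.8, 2.1, 1.9, 2.2, 2.0, 4.1, 3.9])
log_emit = -0.5 * (obs[:, None] - means[None, :]) ** 2   # unnormalised Gaussian
states = viterbi(log_emit, np.log(A + 1e-12), np.log(pi))
print(states)   # -> [0 0 1 1 1 1 2 2]
```

The decoded path segments the trial into the three sub-routines the researchers characterize, which is what makes transition events (onset and offset) available as reliable click and release signals.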

What sets the researchers’ BCI apart is that it doesn’t just examine the persistent neural signals generated when we want to move or click a cursor. Instead, their decoder looks at transitions between states, which are detected more reliably than a sustained response. The team’s technique is also “generalizable”, because it is suitable for a variety of computer applications that require point-and-click or click-and-drag functionality.

Using the BCI with the new decoding algorithm, the study participants could smoothly sweep their cursors across a monitor, be it for a creative outlet (like painting a digital work of art) or something more routine (like simply dragging a file to the trash).

A promising track record

Most of the research in neural engineering pursues clinically useful systems for people with significant impairments that lead to disability. In May of this year, the RNEL team published a proof-of-principle for a bidirectional BCI – a type of BCI that not only reads data from the brain but also writes data back to it. In other words, a bidirectional BCI enables patients with paralysis to control a robotic arm with their thoughts and also feel how hard that robotic arm is clutching an object.

Controlling a robotic arm with the mind

BCI technology promises to enhance the quality of life for people with paralysis by improving their autonomy and mobility. Jennifer Collinger, the senior author on this latest study and one of the lead architects of the bidirectional BCI, hopes that these results can inform the development of clinical BCI technology – an area experiencing rapid growth within the biotech industry.

The team’s latest experiment was also proof-of-concept for remote clinical trial participation with BCI tech in the home, with one of the participants performing most of the study’s training sessions at home without assistance from the researchers. This is a critical step towards clinical translation.

The study’s success in enabling a natural and generalizable control scheme for computer access provides increasing evidence that BCI studies no longer need to be restricted to an on-site lab. “The pandemic accelerated our plans for in-home testing, but this has been a goal for a long time,” explains Collinger. “We need to get the technology into real-world environments… We just want study participants to be able to do the things they want to do with a BCI.”

Nanocolumnar films: applications in medicine, energy and aerospace

Want to learn more on this subject?

This webinar will begin by showing that nanocolumnar films can be manufactured by magnetron sputtering using the glancing angle deposition configuration. Then, various applications of these nanocolumnar films in several fields will be presented, namely in medicine (antibacterial coatings, bioelectrodes for electric stimulation, SERS substrates), energy (black metal coatings, nanostructured layers for perovskite solar cells, photo-induced self-cleaning surfaces) and the aerospace industry (anti-multipactor coatings).


Dr J M García-Martín is a research scientist at the Institute of Micro and Nanotechnology, CSIC. After obtaining his PhD in physics at UCM (Spain) in 1999, he was a Marie Curie postdoc at Paris-Sud University (France) in 2000–2002. He joined CSIC in 2003 and secured a permanent position in 2006. In 2014, he led the project “Nanoimplant”, which won the IDEA²Madrid Award (a partnership between the Comunidad de Madrid in Spain and MIT in the USA). In 2017, he was a Fulbright Visiting Scholar at Northeastern University (USA), and in 2020 he co-founded the spin-off Nanostine. He works on metallic nanostructures with applications in magnetism, plasmonics and biomedicine. He has co-authored 99 articles, six book chapters and three patents, and his H-index is 33 (WoS).


Copyright © 2026 by IOP Publishing Ltd and individual contributors