
Table-top laser delivers intense extreme-ultraviolet light

The smallest source capable of delivering high-intensity pulses of extreme-ultraviolet (XUV) light has been claimed by researchers at ELI-ALPS in Hungary, INCDTIM in Romania, and the Max Born Institute in Germany. The new optical technique could make high-intensity XUV pulses far more accessible to labs worldwide, opening up new possibilities for high-speed nanoscale imaging.

Although intense pulses of lower-frequency UV light are routinely generated in many research facilities, comparable intensities at higher XUV frequencies have proven far more difficult to produce. Currently, the most compact sources use a technique called high harmonic generation (HHG), whereby a target material is placed at the focus of intense near-infrared (NIR) femtosecond laser pulses. This causes the target to emit light strongly at higher harmonics of the NIR frequency, which conveniently lie in the XUV range.
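
The article does not give the drive wavelength, but HHG sources are typically driven by 800 nm Ti:sapphire lasers, and gas-phase HHG produces only odd harmonics. Under those assumptions, a quick sketch shows which harmonic orders land in the XUV band:

```python
# Back-of-the-envelope check (assumed 800 nm driver, not from the paper):
# which odd harmonics of the NIR drive fall in the XUV band (~10-124 eV)?
H = 4.135667696e-15  # Planck constant, eV*s
C = 2.99792458e8     # speed of light, m/s

drive_wavelength_m = 800e-9                    # assumed NIR drive wavelength
drive_energy_ev = H * C / drive_wavelength_m   # ~1.55 eV per NIR photon

for n in range(1, 61, 2):                      # odd harmonic orders only
    e = n * drive_energy_ev
    if 10 <= e <= 124:                         # conventional XUV energy range
        print(f"harmonic {n:2d}: {e:5.1f} eV ({1239.84 / e:5.1f} nm)")
```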

To generate suitably intense XUV pulses, the NIR pulses must be focussed onto large target areas using large spherical mirrors, requiring an apparatus that exceeds 10 m in size. As a result, these bulky and expensive setups have so far been implemented at only a limited number of research facilities.

Highly pressurized gas jet

Now, Balázs Major and colleagues have developed a more advanced HHG technique. A highly pressurized gas jet acts as the HHG target, but crucially the target is positioned some distance away from the focus of the NIR light. Because the NIR beam has spread out by the time it encounters the gas, it interacts with a larger volume and generates more XUV photons. As an added bonus, the resultant XUV pulses have a large divergence, which means that they can be focussed to a very small spot for nanoscale imaging.

The NIR light is removed from the beam using an aluminium filter and the XUV pulses are focussed onto a spot just 600 nm across using a small spherical mirror. The entire setup fits on a 2 m tabletop and the team was able to create XUV pulses with intensities as high as 2×10¹⁴ W/cm². This already exceeds the performance of existing, far bulkier XUV sources. Simulations done by the team suggest that further improvements could boost this intensity by a factor of 1000.
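
To get a feel for what 2×10¹⁴ W/cm² at a 600 nm focus implies, here is a rough estimate; the pulse energy and duration are illustrative assumptions, not numbers from the paper:

```python
import math

# Peak intensity ~ pulse energy / (pulse duration * focal-spot area).
# Only the 600 nm spot size is from the article; the rest are assumptions.
pulse_energy_j = 0.5e-9      # assumed XUV pulse energy, 0.5 nJ
duration_s = 1e-15           # assumed pulse duration, 1 fs
spot_diameter_m = 600e-9     # focal spot quoted in the article

area_cm2 = math.pi * (spot_diameter_m / 2) ** 2 * 1e4  # m^2 -> cm^2
intensity = pulse_energy_j / (duration_s * area_cm2)
print(f"peak intensity ~ {intensity:.1e} W/cm^2")      # ~1.8e14 W/cm^2
```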

To demonstrate their laser’s capabilities, the researchers used it to induce both two- and four-photon absorption in argon atoms. These nonlinear processes produce doubly and triply ionized states of argon respectively, with likelihoods that scale with the second and fourth powers of the XUV intensity – making the team’s setup the most compact apparatus to date that can trigger these processes reliably.
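
That steep intensity scaling is what makes multiphoton ionization such a stringent test of a source. It follows from lowest-order perturbation theory (a standard result, not specific to this paper):

```latex
% Rate of an n-photon absorption process in lowest-order perturbation
% theory, where sigma_n is the generalized n-photon cross-section:
\[
W^{(n)} = \sigma_{n} I^{\,n}
\qquad\Rightarrow\qquad
W^{(2)} \propto I^{2}, \quad W^{(4)} \propto I^{4}.
\]
```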

The researchers hope that the compactness and affordability of the laser could increase access to intense XUV pulses for universities, research facilities and industries worldwide. In particular, it could allow researchers to easily carry out femtosecond, or even attosecond imaging of systems ranging from electron dynamics to biomolecular reactions.

The research is described in Optica.

Classical approach extends the range of noisy quantum computers

Quantum computers can now simulate much larger quantum systems than was previously thought possible thanks to algorithms developed by researchers in the UK and Germany. The new algorithms divide up quantum computational resources according to which parts of the simulation require them most, making it possible to extract information about a large quantum system from many smaller, more manageable calculations – in effect, running the simulation in parallel. The result should boost the capabilities of the current generation of so-called noisy intermediate-scale quantum (NISQ) computers, which lack the computational resources required to perform useful algorithms in materials science or drug discovery, both of which depend heavily on a deep understanding of quantum effects.

Quantum computers promise to perform complex calculations that today’s classical supercomputers cannot. A bottleneck for achieving such a “quantum advantage” is that it is very difficult to engineer a large, error-free quantum computer. While upgrading the quantum technologies themselves might seem like the obvious solution, it is also possible to improve the algorithms that run on such computers – for example, by changing the way they process information.

Representing quantum states efficiently

In the latest work, published in npj Quantum Information, researchers at University College London (UCL), the Technical University of Munich, King’s College London and the biotech company Kuano (formerly a quantum start-up called GTN) took inspiration from a set of mathematical tools known as tensor networks. Tensor network methods were originally developed to simulate quantum systems classically, and their chief selling point is that they are very efficient at storing the information needed to describe certain classes of large quantum states. They achieve this efficiency by focusing on the quantum effects that are most important (like entanglement), purposefully allocating fewer computational resources to those that are not.
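
As a concrete, if highly simplified, illustration of that efficiency, here is a minimal matrix product state – one common type of tensor network – written in plain NumPy. This sketches the data structure only; it is not the team’s algorithm:

```python
import numpy as np

# Minimal matrix-product-state (MPS) sketch: an n-qubit state stored as a
# chain of small tensors of shape (D, 2, D), where D is the bond dimension.
# Storage is O(n * D^2) numbers instead of the 2^n amplitudes of the full state.
n, D = 20, 4
rng = np.random.default_rng(0)

# Open boundary conditions: bond dimension 1 at the two ends of the chain.
dims = [1] + [D] * (n - 1) + [1]
mps = [rng.normal(size=(dims[i], 2, dims[i + 1])) for i in range(n)]

def amplitude(mps, bits):
    """Contract the chain left to right to get one basis-state amplitude."""
    m = mps[0][:, bits[0], :]
    for tensor, b in zip(mps[1:], bits[1:]):
        m = m @ tensor[:, b, :]
    return m[0, 0]

# One of the 2^20 amplitudes, recovered from a few kilobytes of storage.
print(amplitude(mps, [0] * n))
```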

By translating these methods to quantum computing algorithms, the team showed that similar advantages apply even when the processors are quantum. The approach allows computational resources to be divided, so that the separate components of the algorithms can be run in parallel on different quantum processors and their results combined at the end. This may be useful in simulating large molecules that have complicated quantum interactions only in certain regions. These regions could each be assigned a processor, with the simpler interactions between them relegated to a separate computation of their own.

Extra advantages

While the team’s approach emulates what tensor networks already do on classical computers, running such computations on quantum computers has potential advantages that could never be attained classically. This is largely because of something called the bond dimension, a parameter that captures how much entanglement the network can represent. For highly entangled quantum systems, this bond dimension becomes large and makes the computation inefficient. On a quantum computer, however, the resources required may be greatly reduced. “Tensor networks are the very best way to simulate many quantum systems on classical computers,” says Andrew Green, a condensed-matter physicist at UCL and a co-author of the study. “We expect that translating them to quantum computers will unlock a quantum advantage.”
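
A standard back-of-the-envelope comparison (not taken from the paper) shows why the bond dimension is the bottleneck:

```latex
% Storage cost of an n-qubit state: full state vector versus a matrix
% product state (MPS) with bond dimension D. The entanglement entropy
% across any cut of the chain is bounded by log2(D), so highly entangled
% states force D - and the classical cost - to grow.
\[
\underbrace{2^{\,n}}_{\text{full state vector}}
\quad\text{versus}\quad
\underbrace{2\,n\,D^{2}}_{\text{MPS parameters}},
\qquad
S_{\text{cut}} \le \log_{2} D .
\]
```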

To test their approach, Green and colleagues simulated an Ising spin chain, which is a large one-dimensional quantum system that is often used to model interesting optimization problems. The scale of this system is far beyond what has previously been achieved on a quantum computer, indicating that it is possible to extract useful information about large quantum systems from many small-scale calculations on current quantum computing hardware.
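
The article does not specify which variant of the model was simulated; the standard quantum version is the transverse-field Ising chain:

```latex
% Transverse-field Ising chain: J sets the strength of the spin-spin
% coupling and h the transverse magnetic field.
\[
H = -J \sum_{i=1}^{N-1} \sigma^{z}_{i} \sigma^{z}_{i+1}
    - h \sum_{i=1}^{N} \sigma^{x}_{i} .
\]
```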

According to the researchers, a possible follow-up would be to develop algorithms to simulate two- or three-dimensional quantum systems, as well as systems that may be far less ordered than those that scientists have investigated so far. They hope that while such systems – like those with complicated forms of entanglement – are proving challenging for classical tensor network approaches, translating them to quantum computers may finally lead to a breakthrough.

Electron capture supernova spotted four decades after first predicted

Astronomers have witnessed the first example of a third type of supernova, predicted theoretically over 40 years ago. The international team, led by Daichi Hiramatsu at Los Cumbres Observatory, California, confirmed their observation of an electron-capture supernova by identifying six key features of the explosion and its progenitor star, all first predicted by the Japanese astronomer Ken’ichi Nomoto.

So far, all supernovae identified by astronomers have fallen into two categories. For progenitor stars below roughly 8 solar masses, thermonuclear explosions occur as cores become hot enough to fuse helium into carbon and oxygen; such stars dramatically expel most of their outer material, leaving behind a white dwarf. For stars heavier than around 10 solar masses, the core collapses instead: atomic electrons can no longer withstand the immense gravitational pressure, causing matter in the core to rapidly compress into a neutron star or black hole – releasing vast amounts of energy in the process.

In 1980, Nomoto, at the University of Tokyo, first predicted that a third type of supernova should also exist, for progenitor stars falling between 8 and 10 solar masses. In his theory, core collapse is initiated as electrons are captured by neon and magnesium nuclei – but this time, the collapse is counterbalanced by the resulting thermonuclear explosion, leaving behind a neutron star in equilibrium, neither expanding nor contracting.
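
For reference, the capture reactions in question – standard in electron-capture supernova theory, though not written out in the article – remove the electrons whose degeneracy pressure supports the core:

```latex
% Electron captures on magnesium-24 and neon-20 in an O-Ne-Mg core:
\[
{}^{24}\mathrm{Mg} + e^{-} \rightarrow {}^{24}\mathrm{Na} + \nu_{e},
\qquad
{}^{20}\mathrm{Ne} + e^{-} \rightarrow {}^{20}\mathrm{F} + \nu_{e}.
\]
```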

Promising candidate

For 40 years after Nomoto’s initial proposal, uncertainties in the theoretical predictions of the signatures of such supernovae meant that no clear evidence for electron-capture supernovae was ever confirmed. Then, in March 2018, a promising candidate emerged in the galaxy NGC 2146, roughly 31 million light-years away. Over the next two years, the rise and fall in brightness of the explosion was observed by a global network of telescopes. From archival images taken by both the Hubble and Spitzer space telescopes, Hiramatsu’s team also identified the most likely progenitor for the supernova.

Where previous observations had displayed only a few of the indicators for electron-capture supernovae predicted by Nomoto, Hiramatsu and colleagues identified all six of them for the first time. These were that the supernova’s progenitor was a star on the super-asymptotic giant branch; that it underwent strong mass loss prior to going supernova; that it displayed spectral features seen in neither thermonuclear nor core-collapse supernovae; that it was a weak explosion, consistent with a remnant in equilibrium; that it emitted little radioactivity; and that it left behind a neutron-rich core.

By gathering clear evidence for all six factors, the team could finally confirm Nomoto’s predictions after more than four decades. The identification provides new insights into stellar evolution, supernova physics and the synthesis of heavy elements. It could also shed new light on the origin of the Crab Nebula, whose progenitor supernova first appeared in the sky in 1054. Similarities between the team’s results and the meticulous observations recorded by Chinese astronomers at the time raise confidence that the famous nebula could itself have been created in an electron-capture supernova.

The observations are described in Nature Astronomy.

How Stephen Hawking became the world’s most famous physicist

With a title like Hawking Hawking: the Selling of a Scientific Celebrity, science writer Charles Seife must have known his new book was going to cause a stir. Indeed, for reasons unknown to Seife himself, his biography of the late cosmologist Stephen Hawking has so far not been able to find a UK publisher. I don’t know why either, but it wouldn’t surprise me if none of them wanted to be part of an attack on a beloved national icon. And the book does claim to be an attack – with publicity material and a video trailer asking whether Hawking’s true talent was not physics, but self-promotion. So I opened this book with more than a little curiosity about the contrast between who Hawking was and who the public believed him to be.

Hawking Hawking begins with a chapter entitled “Next to Newton”, which describes both the position Hawking occupies in the public psyche and his physical burial place in Westminster Abbey in London, where his ashes were interred following his death in March 2018 at the age of 76. From here, it tells Hawking’s story going backwards in time, in search of the person behind the persona, “as the accumulated layers of celebrity and legend are stripped away”.

It is true that a lot of myths built up about Hawking over the years, many of which he happily indulged. Seife describes how Hawking, unlike most academics, actively courted the media, especially towards the end of his life. Anecdotes I particularly enjoyed include his jokey mathematical derivation of the optimal conditions for the England football team’s success in the 2014 World Cup, which he presented for the gambling chain Paddy Power, and his appearances on popular TV shows such as The Simpsons.

In some of these media stunts, including the latter, Hawking was portrayed as being in the same league as (or even cleverer than) Einstein. But, as Seife notes, most physicists would not rate him quite that highly, his brilliance and contributions notwithstanding. Hawking himself dismissed such comparisons as “media hype”, but, given his enthusiastic participation in stunts that encouraged these associations, I had to wonder how authentic his attempts at modesty really were.

Despite the book’s claim to be an exposé of how Hawking crafted his public image, the accounts of these and other publicity stunts are mostly relegated to the early chapters. The rest of Hawking Hawking recounts stories about his personal and academic life, which was mostly spent at the University of Cambridge. To describe “Hawking the human”, Seife draws on many sources, including the warts-and-all memoirs of his first wife Jane, interviews with people who knew him, and other books written about him such as Hawking Incorporated: Stephen Hawking and the Anthropology of the Knowing Subject by philosopher and anthropologist Hélène Mialet.

These personal stories are interspersed with long scientific passages explaining Hawking’s work on the nature of black holes and his attempts to unify the theories of general relativity and quantum mechanics. Seife also describes much of the related research done by Hawking’s contemporaries. After all, to discuss whether Hawking’s science was as significant as people believe, Seife has to explain that science and put it in context. It proved impossible, however, to disentangle the many aspects of Hawking’s life and personality, with the result that the scientific sections often lead into stories (sometimes less than flattering) about what Hawking might have been like to work with – or against.

The one I found most shocking detailed how, in 1999, Hawking repeatedly tried to have a PhD student called Andrew Farley kicked off the course at Cambridge, apparently because Farley’s research threatened Hawking’s belief that information is destroyed in black holes (a position Hawking later abandoned). The university ignored Hawking, and Farley completed his PhD, but his confidence was badly knocked. Having admired Hawking since childhood, Farley said that the common advice to “never meet your heroes” was particularly apt. The incident also ended Hawking’s long friendship with Farley’s supervisor Peter D’Eath, who had been Hawking’s own graduate student many years earlier; the two never spoke again.

While this story doesn’t demonstrate Hawking “hawking” himself to the media, it does make me wonder about the interplay between his fame and his character. Was it, in other words, because of his celebrity that he felt entitled to act beyond his power in this way? The book offers some speculation on this possibility, noting that it was shortly after Hawking became famous with A Brief History of Time in 1988 that his marriage to Jane started falling apart. But even if Hawking’s fame did feed some of his less-endearing characteristics, it becomes clear as the book goes backwards in time that they were there from the beginning. For example, his public contempt for other subjects such as philosophy and theology was foreshadowed by his disdain and lack of support for Jane Hawking’s bona fide research in the field of medieval studies.

It’s worth underlining, though, that the book also includes stories that paint Hawking in a better light. He often gave large sums of money (including his fee for the Paddy Power stunt) to charity, despite not always having a lot to spare, due to the expense of his round-the-clock medical care. He also asked to have meetings set up with local children with disabilities whenever he was travelling. Contrary to the notion that he cared more about self-promotion than anything else, these meetings were never publicized. Seife addresses Hawking’s own disability openly and considers how it affected all aspects of his life. One heart-rending story quotes Hawking as saying that, when his children were young, he “missed not being able to play with them physically”.

Despite his condition, Hawking never appeared to feel sorry for himself, according to several acquaintances quoted by Seife. However, in an added complexity, his admirable traits and his character flaws may sometimes have been two sides of the same coin. His stoicism was certainly remarkable, but perhaps it went hand in hand with a stubborn refusal to give “an admission of defeat”. This made him delay getting external help, even when, in 1976, his wife Jane and their then nine-year-old son Robert were overburdened with caring for him.

If Hawking himself were to read this book, perhaps he would bristle most at the suggestion that his disability played a part in his becoming a scientist superstar – “the likes of which hadn’t been seen since Einstein”. It’s clear that he wanted to be known for his science – despite his disability, not because of it. But if he wasn’t the world’s greatest physicist (or its best communicator) in reality, then why did he achieve this level of fame?


It appears that Hawking was given to attention-drawing antics even before he was famous. In one dramatic story from his time as a PhD student under the supervision of Dennis Sciama at Cambridge, Hawking attended a lecture given by the eminent astrophysicist Fred Hoyle at the Royal Society. Hoyle had been presenting his new work for the first time, but once the seminar was over, Hawking stood up and pointed out a flaw in Hoyle’s thinking in front of 100 other scientists. It was a memorable performance that gave the impression he had done the calculations in his head on the spot. Unbeknown to the audience, however, Hawking had seen an early draft of the paper, which his officemate had been working on. Interestingly, when Hawking started his PhD, he had actually wanted Hoyle to be his supervisor, but Hoyle had rejected him. So was this stunt driven by some resentment he harboured? Or would he have done it anyway?

Hawking certainly wasn’t the first physicist to employ such tactics. Another celebrity physicist with dramatic tendencies was Richard Feynman, with whom Hawking has some intriguing parallels. One acquaintance cited in the book described Feynman as “always concerned with generating anecdotes about himself”. The two physicists got to know each other when Hawking spent a year-long fellowship in 1974–1975 at Caltech, where Feynman was based. Some of their interactions during that time are detailed in a chapter titled “Black swan”.

Both Hawking and Feynman received attention for their unashamed enjoyment of strip clubs and for their friendships with billionaires (who had often made their money from morally questionable and environmentally damaging activities). Today, these attitudes have aged poorly: the consequences of damaging the environment are becoming painfully clear, and it is now better understood that physics should not be made to seem like a boys’ club where women are unwelcome. In their own time, however, both physicists seemed to enjoy the notoriety these associations brought them.

Perhaps Hawking achieved a greater level of fame because the media expectations of celebrity were changing, and he arrived on the scene at the right time (he was 20 years Feynman’s junior). Or perhaps it was because his publishers used the human story around his disability to market his books. It is likely not possible to give a definitive answer, but Seife’s exploration results in a deeply insightful biography of a flawed human being who lived an extraordinary life.

As for Seife’s decision to structure the book in reverse chronological order, I am not convinced. It’s a clever idea, especially given that Hawking’s research played with reversing the arrow of time, but it doesn’t lend itself well to describing the science. Early chapters describe Hawking’s later-career research, but it can only be understood by giving some background of what came before. You’re often therefore faced with a second explanation in a later chapter, when you reach the point where that research actually happened. So in order to avoid repetition, the first description is often short, but with a comment in brackets promising more in a later chapter. It’s all rather disorientating.

The structure does, however, lend itself to the human story, and I found the final chapters (about Hawking’s early life) markedly poignant, knowing already all that was to follow in his life. I was left with the feeling that Hawking Hawking is not really an exposé; it is less explosive and more sympathetic to its subject than the publisher’s description led me to expect. Is it too cynical of me to wonder if this was a publicity stunt of their own?

  • 2021 Basic Books 400pp $30hb

Synchrotron reveals how a dinosaur breathed, how fireflies coordinate their flashing

If you watch films like Jurassic Park you might think that scientists have a pretty good idea about the anatomy and behaviour of dinosaurs. In reality, crucial details such as how the creatures breathed are still a matter of debate.

Now, an international team of researchers has gained important insights into dinosaur breathing by using light from the European Synchrotron Radiation Facility (ESRF) to scan the entire fossilized body of Heterodontosaurus tucki, a small plant-eating dinosaur.

Discovered in the Eastern Cape region of South Africa, the specimen is one of the most complete dinosaur fossils ever found. Using data from the ESRF, the team was able to work out how the dinosaur breathed. Scientists had assumed that all dinosaurs had lungs that functioned like those of birds. However, this study shows that Heterodontosaurus used a different mechanism. The creature had paddle-shaped ribs and small, toothpick-like bones, and it expanded both its chest and belly in order to breathe.

Breathing advantage

Heterodontosaurus was an early member of an evolutionarily successful group of dinosaurs that included Triceratops, Stegosaurus and the duckbilled dinosaurs. The researchers believe that this new way of breathing may have given them an advantage over other dinosaurs.

The study is described in eLife.

One thing that I do miss from the North American summers of my youth is fireflies. Although I have never witnessed the phenomenon firsthand, large swarms of fireflies sometimes flash on and off in unison – a behaviour that has long puzzled biologists.

Now Raphaël Sarfati and colleagues at the University of Colorado have observed fireflies in Tennessee at the height of the mating season to try to understand why this synchronized flashing occurs.

They collected video using two 360° cameras to observe swarms of fireflies over several nights. They observed flashing at a frequency of about 2 Hz, in bursts lasting 10–20 s. They found that the flashing switches on when the swarm reaches a certain density of insects.

The flashing is done by the males and the team concluded that the fireflies coordinate their flashing with both nearby and distant peers. They also found that the shape and size of flashing swarms are determined in part by local terrain, suggesting that the fireflies must be able to see each other to coordinate their flashing.
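
The paper reports observations rather than a model, but the textbook toy model for this kind of density-dependent synchronization is the Kuramoto model, in which coupled oscillators lock together once their coupling strength – a loose stand-in for swarm density – crosses a threshold. A minimal sketch with illustrative parameters:

```python
import numpy as np

# Kuramoto toy model (not the team's analysis): N oscillators with random
# natural frequencies synchronize once the coupling K exceeds a critical value.
rng = np.random.default_rng(1)
N, dt, steps = 200, 0.01, 5000
omega = rng.normal(2 * np.pi * 2.0, 0.5, N)  # ~2 Hz flashers, as observed

def order_parameter(theta):
    return abs(np.exp(1j * theta).mean())    # 1 = perfect sync, 0 = incoherent

for K in (0.1, 2.0):                         # weak versus strong coupling
    theta = rng.uniform(0, 2 * np.pi, N)
    for _ in range(steps):
        mean_phase = np.angle(np.exp(1j * theta).mean())
        theta += dt * (omega + K * order_parameter(theta) * np.sin(mean_phase - theta))
    print(f"K = {K}: order parameter = {order_parameter(theta):.2f}")
```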

Their study is described in Science Advances.

Deflecting asteroids and exploring a metal world

You could be forgiven for thinking the themes in this month’s episode of Physics World Stories have been stolen from Hollywood. Podcast host Andrew Glester profiles two upcoming NASA missions to asteroids: one that will explore an all-metal world, and another that will deliberately smash into a near-Earth asteroid.

Glester’s first guest is Jim Bell from Arizona State University who is involved in the mission to the asteroid Psyche, which launches in 2022 and arrives in 2026. Located in the asteroid belt between Mars and Jupiter with an average diameter of 226 km, Psyche consists largely of metal. Astronomers speculate that the asteroid is the exposed core of an early planet that lost its rocky outer layers due to a number of violent collisions billions of years ago.

Also joining the podcast is Angela Stickle from the Johns Hopkins University Applied Physics Laboratory. Stickle is a project scientist on the Double Asteroid Redirection Test (DART) mission, scheduled to launch in November aboard a SpaceX Falcon 9 rocket.

Sounding like a remake of Armageddon or Deep Impact, the solar-powered DART craft will hurtle towards the binary near-Earth asteroid Didymos, before crashing into the smaller of the two bodies in late 2022. By observing the changes in the asteroid’s orbit, mission scientists are testing the feasibility of deflecting a large Earth-bound asteroid – should that perilous scenario transpire in the future.
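
The physics being tested boils down to a momentum-transfer measurement. A back-of-the-envelope sketch with illustrative numbers (assumptions, not official mission specifications):

```python
# Kinetic-impactor arithmetic: the velocity change imparted to the target
# is dv = beta * m * v / M, where beta >= 1 accounts for the extra push
# from ejecta thrown off by the impact. All numbers below are assumed.
m = 570.0     # assumed spacecraft mass at impact, kg
v = 6.6e3     # assumed impact speed, m/s
M = 5e9       # assumed mass of the smaller asteroid, kg

for beta in (1.0, 3.0):
    dv = beta * m * v / M
    print(f"beta = {beta}: dv = {dv * 1000:.2f} mm/s")  # tiny, but measurable
```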

Elemental forms of metals discovered in brains of Alzheimer’s patients

Approximately 10 metals occur in the human body naturally as chemical compounds that are stored and used by tissues. Copper and iron oxides, in particular, are required for cellular activities throughout the body.

When the body mishandles or incorrectly processes these copper and iron oxides, however, tissue damage – especially in the brain – can occur. Researchers have suspected that pathological structures in the brain called amyloid plaques, which have been implicated in Alzheimer’s disease, might be where some of this damage happens.

An unexpected discovery detailed in Science Advances may support this damage-centred hypothesis.

Researchers have found elemental copper and iron deposits alongside copper and iron oxides in the brains of two individuals who died with Alzheimer’s disease.

“We were not expecting to see the elemental forms [of copper and iron],” says Neil Telling, a professor of biomedical nanophysics at Keele University and senior author on the study. “It clearly suggests that there’s more to understand about brain neurochemistry than we’d originally imagined.”

Seeing metals at nanoscales

To locate, identify, map and characterize the elemental metal deposits, the researchers obtained high-resolution images and chemically sensitive measurements with a technique called X-ray spectromicroscopy, a non-destructive method that has been used in environmental studies and to analyse synthetic materials at nanometre scales.

In this powerful measurement technique, a circular particle accelerator called a synchrotron generates multi-energy, or polychromatic, X-rays, from which low-energy X-rays are selected and focused into a beam. The focused X-ray beam, only 20 nm in diameter, is then moved across amyloid plaque samples to create a stack of images.

Plaque analysis

Each image in a stack corresponds to a different X-ray energy. By combining these different images, researchers can obtain absorption spectra from different regions of the amyloid plaques and then analyse the spectra to identify metals.

“With this technique, we can vary the energy of the X-rays and look at the absorption of the X-rays to look at characteristic spectral features of these materials, which will tell us which elements we’re looking at and their oxidation state,” explains Telling.
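
Schematically, the analysis works on an image stack with one frame per X-ray energy, so the spectrum of any region is a slice along the energy axis. A minimal NumPy sketch with synthetic data – the real beamline analysis is of course far more involved:

```python
import numpy as np

# Synthetic stand-in for a spectromicroscopy stack: one 64x64 image per
# X-ray energy, here 80 energies around the iron L-edge (~707 eV).
energies = np.linspace(700, 740, 80)                    # eV
stack = np.random.default_rng(2).random((80, 64, 64))   # (energy, y, x)

# Absorption spectrum of a region of interest: average over its pixels
# at each energy, leaving one absorption value per energy point.
roi = (slice(20, 30), slice(40, 50))        # pixels covering one deposit
spectrum = stack[:, roi[0], roi[1]].mean(axis=(1, 2))

# Peak positions and shapes in `spectrum` versus `energies` would then be
# compared with reference spectra to assign the element and oxidation state.
print(energies.shape, spectrum.shape)       # (80,) (80,)
```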

The researchers can also test the magnetic properties of the samples using circularly polarized X-rays.

Through their experiments, which were performed at the UK’s Diamond Light Source and the US Department of Energy’s Advanced Light Source, the researchers observed nanoscale deposits of elemental copper and iron that were around one ten-thousandth the size of a pinhead. This is around half the size of lysosomes, the parts of cells that help break down and remove cellular waste.

What’s next?

The researchers think that the elemental deposits may have been formed during chemical reactions taking place in the amyloid plaques, primarily because the elemental copper and iron appeared next to their oxide forms in the plaques that the researchers tested.

But Telling cautions that there’s much more work to do before they can say anything definitive about the role these metals play in neurodegenerative diseases like Alzheimer’s.

“It will take a number of years before we can say categorically whether we see metals in elemental forms just in Alzheimer’s plaques, for example, or if we see this in other areas of tissue as well,” Telling says. “But there’s certainly a possibility that this could indicate that there are some redox [reduction–oxidation] chemical reactions going on in the brain that are even more aggressive than we originally imagined and could be linked to disease progression.”

Is the quantum Internet finally here?

If you’ve ever attended the premiere of a film or the unveiling of a new car, you’ll know the slight buzz of excitement that comes from not knowing quite what to expect. I’ve been feeling that buzz for the past two weeks, in the run-up to an event that promised a “world premiere live demonstration of the next step in quantum cryptography”. Not that the whole event was shrouded in mystery – a few small teasers were released beforehand. Still, with all the recent developments on the quantum Internet, I was left wondering: what exactly was the “state of the art” they planned to demonstrate?

The event, which took place on Tuesday this week, was organized by a group of institutes and companies involved in quantum cryptography in the Netherlands. Among them was QuTech, a collaboration between the Delft University of Technology and the Netherlands Organisation for Applied Scientific Research (TNO). Another was Quantum Delta NL, a Dutch umbrella organization for all things quantum that recently became the recipient of a €615m government investment to advance quantum technology in the Netherlands. The Dutch telecoms firm KPN and the Internet hardware provider Cisco were involved, too.

So what happened at the event? The demonstration aimed to show how three quantum network points (nodes) can be connected via an optical fibre network, using one node as the central point. Due to the quantum nature of the connection, if anybody tried to spy on the communication, this would immediately be noticed. But while it’s exciting to have a functional quantum network like this, similar things have already been demonstrated in other labs around the world.

What makes the demonstration impressive is that the nodes were stand-alone devices. Two of them were literally put in a van and driven to two neighbouring cities approximately 25 kilometres apart. The three nodes were then connected via the existing optical fibre network. To show that the system worked, the demonstrators generated a quantum key, used it to encrypt a video at one end of the network, sent the video over the network and then decrypted and watched it at the other end.
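
The encryption step itself is classical once a shared key exists; the organizers did not specify the cipher used, but a one-time pad conveys the idea. The quantum link’s only job is to guarantee that nobody else can hold a copy of the key:

```python
import secrets

# One-time-pad sketch: XOR with the key both encrypts and decrypts.
# Here the key is generated locally; in the demo it came from the
# quantum link, which is what guarantees its secrecy.
message = b"video frame bytes..."
key = secrets.token_bytes(len(message))  # stand-in for the quantum key

ciphertext = bytes(m ^ k for m, k in zip(message, key))
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))
assert recovered == message
print(ciphertext.hex())
```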

The network still lacks some crucial building blocks needed to make it scalable, such as quantum repeaters. Even so, we could already see a super-secure Internet in action.

From a science communication point of view, the demonstration was very nice to watch, with a good mix of animations and discussion panels, topped up with the live demonstrations (as you can see for yourself in the recording). This made it understandable and appealing to a varied audience. I also liked the way that academic research and industry really came together. It was different from demonstrations I’ve seen before about scientific advancements that still require a decade before they might become reality. Here, interested users could already come forward to help improve the technology from a user perspective. It shows the push to get this technology out of the lab and into our daily lives.

So, is the quantum future here already? No, not yet. But I did get a glimpse of what it looks like. And as a quantum enthusiast, I have to say I like what I saw.

Metallic foams for face masks, why the UK needs an X-ray free electron laser

Wearing a face mask is part of daily life for many of us, but how much do we know about the physics behind how they work? In this episode of the Physics World Weekly podcast, Kai Liu at Georgetown University explains why a nanoporous metallic foam that he has developed could lead to masks that offer better protection from diseases such as COVID-19.

Big science facilities such as X-ray free electron lasers (X-FELs) play an important role in the development of new materials – indeed they support a broad range of science from physics and chemistry to biology and medicine. Adam Kirrander of the University of Edinburgh and Jon Marangos of Imperial College London join the podcast to argue the case for building an X-FEL in the UK.

Also in this podcast, Physics World editors talk about three of their favourite stories from this week: an ion clock in space; a vanishing pacemaker; and a LEGO microscope.

Optimal size for wind farms is revealed by computational study

Optimizing the placement of turbines within a wind farm can significantly increase energy extraction – but only until the installation reaches a certain size, researchers in the US conclude. This is just one finding of a computational study of wind turbines’ effects on the airflow around them and, consequently, on the ability of nearby turbines – and even nearby wind farms – to extract energy from that airflow.

Wind power could supply more than a third of global energy by 2050, so the researchers hope their analysis will assist in better designs of wind farms.

It is well known that the efficiency of turbines in a wind farm can be significantly lower than that of a single turbine on its own. While small wind farms can achieve a power density of over 10 W/m², this can drop to as little as 1 W/m² in very large installations. The first law of thermodynamics dictates that turbines must reduce the energy of the wind that has passed through them. However, turbines also inject turbulence into the flow, which can make it more difficult for downstream turbines to extract energy.

“People were already aware of these issues,” says Enrico Antonini of the Carnegie Institution for Science in California, “but no one had ever defined what controls these numbers.”

Fluid dynamics

In the new research, Antonini and his colleague Ken Caldeira used fluid-dynamics models to examine the airflow over turbines. They considered multiple idealized simulations of both small and large wind farms.

First, they looked at the individual wakes of each turbine, studying different arrangements. Reducing the turbine density in small wind farms provided a higher output per turbine, they found. Arranging the turbines in rows facing the wind or in a mosaic pattern, meanwhile, provided a 56% greater power output than arranging them in wind-facing columns. “Gains in energy generation can be 10, 20, even 30% if you have an optimal arrangement of turbines with respect to a random arrangement, as you can reduce this wake interaction between the turbines,” says Antonini.
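
For intuition about why spacing and arrangement matter, the textbook Jensen (Park) wake model – a standard engineering formula, not the authors’ simulation code – gives the wind speed a distance x directly downstream of a turbine:

```python
# Jensen (Park) wake model: u(x) = u0 * (1 - (1 - sqrt(1 - Ct)) / (1 + k*x/r0)^2).
# Extracted power scales as u^3, so even modest deficits are costly.
u0 = 10.0   # free-stream wind speed, m/s (illustrative)
ct = 0.8    # thrust coefficient, typical operating value
r0 = 50.0   # rotor radius, m (illustrative)
k = 0.075   # onshore wake-decay constant

for x in (250, 500, 1000, 2000):  # downstream spacing, m
    u = u0 * (1 - (1 - (1 - ct) ** 0.5) / (1 + k * x / r0) ** 2)
    print(f"x = {x:4d} m: u = {u:4.1f} m/s, power ratio = {(u / u0) ** 3:.2f}")
```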

These gains largely vanished, however, as wind farms grew very large. To find out why, the duo used meteorological simulations of wind speed. They found that, whereas in a small wind farm the principal cause of a turbine’s lost output is the wake of its near-neighbours, which can be mitigated by careful design, in a large wind farm the surface wind speed is slowed by the collective drag of the whole installation.

Uniform wake

“For large wind farms, there is a limit on how much energy can be replenished, on the order of 1 W/m²,” says Antonini. “[The turbines] create a kind of uniform wake across the wind farm.” The wakes of these large farms could extend tens of kilometres downstream, potentially affecting other wind farms.

The transition between a wind farm in which turbine arrangement was a significant factor and one in which it was not occurred once a farm reached about 30 km in size. There was no clear dividing line between the two regimes, however, and several factors – such as the wind speed in the upper atmosphere and, curiously, the latitude of the wind farm – affected the calculations. Wind farms closer to the equator were, in general, able to grow larger before their efficiency tailed off to its minimum, because the Coriolis effect caused by the rotation of the Earth relative to the wind farm replenished the energy and momentum of the wind in the wake.

“That was something no one was aware of, to my understanding,” says Antonini. The researchers are developing this concept further to help energy system planners build better wind farms for the future, says Antonini.

Charles Meneveau of Johns Hopkins University in Maryland welcomes the research. “In 2010, we wrote several papers on that infinite wind farm regime, and we did some simulations using different techniques,” he says. “We had an inkling there was going to be an important limit to consider, but at that time that idea was very controversial. This [research] takes that more seriously and puts that in a computational modelling tool that is well established and has been well tested, and really does the work of simulating those length scales.” He concludes that “we’re talking about building such large installations with such a change in our infrastructure that, considering the scale of the stuff, this is really important.”

The research is described in Proceedings of the National Academy of Sciences.
