Self-powered wearable yarns can sense temperature and mechanical strain

A multipurpose material that can sense strain and temperature and harvest energy from temperature gradients has been developed by researchers in the UK and the Netherlands. The new material, which could be used to create smart human–machine interfaces and health monitoring devices, was created by Emiliano Bilotti and collaborators at Queen Mary University of London, Imperial College London, Eindhoven University of Technology and Loughborough University.

Current wearable sensors typically have limited mechanical flexibility or require a stiff battery to work. In this research, Bilotti and colleagues have discovered that commercially available Lycra yarns, a flexible material commonly used in textiles, can be modified to show thermoelectricity and strain sensitivity. This is done by adding the conductive copolymer poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS).

Using the new material, the team developed a device that can operate in three different modes to sense strain, measure temperature differences or harvest energy. The strain sensitivity can be useful for creating gloves that track hand movements and the thermoelectric property of the material could be used to power such a glove by harvesting energy from the difference between body temperature and the surrounding ambient temperature.

Low-cost fabrication

The Lycra yarns are given strain sensitivity and thermoelectric properties by immersing them in a solution containing PEDOT:PSS. Upon evaporation of the solvent, the conductive copolymer attaches to the surface of the Lycra yarns, conferring electrical conductivity on the material.

The researchers noticed that applying a high strain to the coated fibres induces cracks across the surface of the PEDOT:PSS coating. These cracks increase the total surface area of the yarns and are the source of the strain sensitivity, because they break the coating into interconnected patches of the conductive copolymer that separate when a strain is applied.

Increasing the separation between these patches decreases the electrical conductivity of the material, allowing it to be used as a strain sensor. As a result, the material exhibits a large change in resistance at strains as low as 1%. The team also found that the thermoelectric properties of the coated yarns are unaffected by strain, so the cracks do not interfere with the sensor's ability to measure temperature differences.
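
A standard way to quantify this kind of resistive strain response – not a formula quoted in the paper, but the textbook definition – is the gauge factor:

```latex
% Gauge factor of a resistive strain sensor (standard definition, not taken from the paper)
% R_0: unstrained resistance, \Delta R: change in resistance under strain, \varepsilon: applied strain
GF = \frac{\Delta R / R_0}{\varepsilon}
% A large resistance change at \varepsilon = 0.01 (the 1% strain mentioned above) corresponds to a large GF.
```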

Relative temperature

Since the temperature sensitivity of the yarns is based on thermoelectricity, they are not able to sense absolute temperature values. Instead, the material can measure temperature differences as small as 7 °C. This could be used to measure the relative temperature of the human body with respect to the surrounding environment. In addition, the temperature gradients generate a voltage difference due to the thermoelectric effect, which can be used to create electric power.
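
The voltage produced in this mode follows the textbook thermoelectric (Seebeck) relation; the symbols below are generic, and the Seebeck coefficient of the coated yarns is not quoted here:

```latex
% Open-circuit thermoelectric voltage (Seebeck effect, textbook form)
% S: Seebeck coefficient of the coated yarn, \Delta T: temperature difference along it
V = S\,\Delta T
% Wiring N yarns electrically in series scales up the output: V_{total} = N\,S\,\Delta T,
% which is why the glove described below uses many strands.
```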

As a proof of concept, the researchers sewed the yarns onto a glove, and used them to sense the temperature of an object relative to that of the hand. Due to the small temperature differences between a human hand and the environment (around 10 °C), the team calculates that a glove containing 1800 strands of yarn could power the necessary electronics for its practical use. The material could also be used to measure the strain of the glove. This would be useful for creating self-powered wearable devices where, for example, the hand position and its temperature could be measured autonomously.

The findings are reported in Materials Horizons.

Competitive not cut-throat: what baseball’s Ted Williams tells us about physicists’ instincts

Ted Williams used to give away his secrets. The Boston Red Sox baseball player, who was one of the greatest hitters of all time, would call members of opposing teams to give them tips; he even gave advice to opposing pitchers. Williams, who died in 2002 aged 83, eventually published all his advice in a book, The Science of Hitting, a copy of which sits on my shelves.

I was reminded of Williams by a letter in response to my May column about physics teachers and students who participate in BattleBots, a TV show about robots that compete to smash each other up. “Do we have to glorify destruction,” the author wrote, “to get people interested in physics?”

It was a very good question. The answer, I think, has to do with the nature of the competition, and the role of the destruction.

Duelling accelerators

Science can be as competitive as sport. In the 1950s, for example, scientists at the Brookhaven National Laboratory and CERN were each building a new, more powerful kind of accelerator based on the principle of alternating gradient magnetic fields. Each wanted their machine online first, thereby getting a head start in exploring a new energy region, which could mean prizes, prestige and further funding.

But did the Brookhaven and CERN teams each work under a cone of silence? No. Like Williams, they shared all their tricks, including blueprints, plans and strategies. They exchanged personnel, each lab sending a prominent physicist to the other to help, learn and report back. Brookhaven scientists, in fact, had discovered the alternating gradient principle while brainstorming for ideas to help their CERN colleagues, and promptly handed it over.

Outsiders often paint accelerator-building contests as cut-throat races driven by a thirst for Nobel prizes and personal ambition. From the inside, though, I’ve found it different. Sure, prizes are nice, but moving physics forward is the thing. In fact, these two examples from baseball and accelerator construction illustrate the difference between what I call “political” and “performance” competition.

In political competition, the aim is simply to win – an election, say, or a military encounter. The stronger your opponent, the weaker you are. In performance competition – the kind I generally see in sports and physics – the aim is to improve both your performance and that of the entire community. The stronger your opponent, the stronger you can become.

Build, fail, rebuild

Back to BattleBots. If you only watch it on the Discovery channel you can easily come away with the impression that it’s a political competition. “Fight! Fight! Fight!” the crowd roars. “I’m going to pulverize you!” screams Martin Mason, the physics teacher whose students built the Mad Catter robot, at his opponents. It’s true that he shouts and looks mean, but that’s only posturing for the TV audience – for those who pay the bills.

If you go backstage and roam around the BattleBots “pit area”, you find an utterly different atmosphere. Members of one team are making parts for another, fixing each other’s robots, lending instruments and sharing information. BattleBots veterans help newcomers with their drive trains, wheels, weapons and software. When you ask the men and women what brought them there, nobody says “I like to tear things apart!” They’ll tell you they came not to win but to learn.

Build, fail, learn, rebuild – that’s a powerful learning experience for future engineers and experimental physicists. The goal’s a good fight. It’s not exciting if the other robot doesn’t work; the thrill is taking on a robot that’s equal to or better than yours. Your opponent’s robot is less an antagonist than the acid test of your own skills. Defeat means that you can learn more.

Mason teaches in an engineering department at Mt San Antonio Community College, outside Los Angeles, which serves tens of thousands of minority and lower-income students. Most will take only one physics course, and he needs a quick and effective way to engage his students, focus their attention on physics, and teach how to implement its basic principles. Combat robotics, he found, is what he needs.

“Learning how stuff breaks is just as valuable an engineering exercise as learning how stuff works,” Greg Munson, a co-creator of BattleBots, told me. “We see BattleBots as more Apollo 13 than Mars rover. It’s not a task-based engineering exercise. It’s more ‘Our spaceship is on fire and we’re going to die – SURVIVE!’ ”

At the end of each season (this year’s is due to be broadcast on the Discovery channel in late November or early December) the show hands out awards. The prize for “Most destructive robot” is a giant bolt made of aluminium, while for the overall winner it’s a giant nut. Bolt and nut – symbols for structure and stability. No wonder NASA has praised BattleBots for inspiring students to get into STEM education.

The critical point

The word “competition” comes from the Latin com + petere, to “seek together”. Competition is about individuals and groups engaging with each other to achieve things that they could not achieve solo. Not always, of course. The dark side of competition is that it can degenerate into selfishness, me-first and cheating, and it often seems that way to outsiders. But at its best, competition serves self-knowledge – who you are, and what you can do and know.

Vince Lombardi, coach of the Green Bay Packers American football team, is famous for saying “Winning isn’t everything. It’s the only thing.” That may seem like a “macho”, alienated view of competition, but I read Lombardi’s frequently repeated slogan as a commanding call to his players: not to cheat, but to double down on their performance abilities. The simple desire to win does nothing to improve your ability to play.

Ted would have understood.

Astronomers define new class of potentially habitable ocean worlds

Hot, ocean-covered exoplanets with hydrogen-rich atmospheres could harbour life and may be more common than planets that are Earth-like in size, temperature and atmospheric composition. According to astronomers at the University of Cambridge, UK, this newly defined class of exoplanets could boost the search for life elsewhere in the universe by broadening the search criteria and redefining which biosignatures are important.

Astronomers define the habitable or “Goldilocks” zone as the region where an exoplanet is neither too close nor too far from its host star to have liquid water on its surface – water being the perfect solvent for many forms of life. Previous studies of planetary habitability have focused primarily on searching for Earth-like exoplanets and evidence that they could harbour the kind of chemistry found in life on Earth. However, it has so far proven difficult to detect atmospheric signatures from Earth-like planets orbiting Sun-like stars.

Potentially habitable mini-Neptunes

Larger exoplanets are easier to detect than smaller, Earth-sized ones, and exoplanets around 1.6–4 times the size of the Earth, with masses of up to 15 Earth masses and temperatures that in some cases exceed 2000 K, are relatively common. These planets are known as mini-Neptunes because they resemble the ice giant planets in our solar system.

Previous studies suggested that the high pressures and temperatures beneath these planets’ hydrogen-rich atmospheres were incompatible with life. However, based on their analysis of an exoplanet called K2-18b, exoplanet scientist Nikku Madhusudhan and colleagues at Cambridge say that life could, in fact, exist on a subset of mini-Neptunes that meet specific criteria.

This subset, which the researchers dub “Hycean” (hydrogen + ocean) planets, consists of planets with radii up to 2.6 times that of the Earth that are capable of harbouring vast oceans under atmospheres dominated by molecular hydrogen and water vapour. Such oceans could cover the whole planet and reach depths greater than those of the Earth’s oceans, and the researchers say that the conditions within them could be compatible with some forms of Earth-based microbial life. Hycean planets tidally locked with their host star could also exhibit habitable conditions on their permanent night side.

Widening the Goldilocks zone

Crucially, the researchers say that Hycean planets could be habitable even if they receive much less radiation from their host star than the Earth does from the Sun. This makes their Goldilocks zone far wider than it is for Earth-like planets, increasing the probability that life could exist and be detected there.

To determine whether an exoplanet qualifies as Hycean, Madhusudhan explains that the researchers must consider several factors, including mass, temperature and atmospheric properties. Once they determine that a Hycean candidate lies in the habitable zone for its host star, the next step is to search for molecular “signatures” that reveal its atmospheric and internal structures. This information might then be used to infer surface conditions such as whether the planet contains an ocean or signs of life. In the current study, published in The Astrophysical Journal, the researchers suggest that molecules such as methyl chloride, carbonyl sulphide, dimethyl sulphide and others that are produced by metabolic processes of microorganisms on Earth would all be relatively easy to detect on Hycean planets using spectroscopy.

The researchers claim that the results of this study increase the likelihood that such biosignatures could be found within the next two or three years. Several of the Hycean candidates they identified could, they suggest, be further investigated with instruments such as the James Webb Space Telescope (JWST), due to be launched later in 2021, as well as the ground-based Extremely Large Telescope (ELT). “Currently we don’t have any concrete evidence to estimate the prevalence and nature of life in the universe,” Madhusudhan says. “But we do think that there are a large number of potentially habitable exoplanets which can in principle host life.” Future observations with the JWST would, he adds, enable astronomers to characterize these planets’ atmospheres and detect biosignatures “if present”.

Deep learning model automates brain tumour classification

When it comes to diagnosing brain cancer, biopsies are often the first port of call. Surgeons begin by removing a thin layer of tissue from the tumour and examining it under a microscope, looking closely for signs of disease. However, not only are biopsies highly invasive, but the samples obtained only represent a fraction of the overall tumour site. MRI offers a less intrusive approach, but radiologists have to manually delineate the tumour area from the scan before they can classify it, which is time consuming.

Recently, scientists from the US developed a model capable of classifying numerous intracranial tumour types without the need for a scalpel. The model, a convolutional neural network (CNN), uses deep learning – a type of machine-learning algorithm widely used in image recognition software – to recognize these tumours in MR images, based on hierarchical features such as location and morphology. The team’s CNN could accurately classify several brain cancers with no manual interaction.

“This network is the first step toward developing an artificial intelligence-augmented radiology workflow that can support image interpretation by providing quantitative information and statistics,” says first author Satrajit Chakrabarty.

Predicting tumour type

The CNN can detect six common types of intracranial tumour: high- and low-grade gliomas, meningioma, pituitary adenoma, acoustic neuroma and brain metastases. Writing in Radiology: Artificial Intelligence, the team – led by Aristeidis Sotiras and Daniel Marcus at Washington University School of Medicine (WUSM) – says that this neural network is the first to directly determine tumour class, as well as detect the absence of a tumour, from a 3D magnetic resonance volume.

To ascertain the accuracy of their CNN, the researchers created two multi-institutional datasets of pre-operative, post-contrast MRI scans from four publicly available databases, alongside data obtained at WUSM.

The first, the internal dataset, contained 1757 scans across seven imaging classes: the six tumour classes and one healthy class. Of these scans, 1396 were training data, which the team used to teach the CNN how to discriminate between each class. The remaining 361 were subsequently used to test the performance of the model (internal test data).
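
For readers unfamiliar with the approach, the sketch below shows roughly what such a model looks like in code: a small 3D convolutional network mapping an MR volume to one of seven class scores. It is written in PyTorch purely for illustration and is not the architecture used by the WUSM team.

```python
# Illustrative sketch only -- NOT the WUSM team's network. A minimal 3D convolutional
# classifier that takes a single-channel MR volume and outputs scores for seven
# imaging classes (six tumour types plus "healthy").
import torch
import torch.nn as nn

class Tiny3DClassifier(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),              # collapse the remaining spatial dimensions
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):                         # x: (batch, 1, depth, height, width)
        h = self.features(x).flatten(1)
        return self.classifier(h)                 # raw class scores (logits)

# Toy usage: one 64^3 post-contrast volume in, seven class scores out
model = Tiny3DClassifier()
volume = torch.randn(1, 1, 64, 64, 64)
print(model(volume).shape)                        # torch.Size([1, 7])
```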

The CNN correctly identified tumour type with 93.35% accuracy, as confirmed by the radiology reports associated with each scan. What’s more, the probability that a patient actually had the specific cancer that the CNN detected (rather than being healthy or having any other type of tumour) was 85–100%.

Few false negatives were observed across all imaging classes: the probability that patients who tested negative for a given class genuinely did not belong to that class was 98–100%.

Next, the researchers tested their model against a second, external dataset containing only high- and low-grade gliomas. These scans were sourced separately from those in the internal dataset.

“As deep-learning models are very sensitive to data, it has become standard to validate their performance on an independent dataset, obtained from a completely different source, to see how well they generalize [react to unseen data],” explains Chakrabarty.

The CNN demonstrated good generalization capability, scoring an accuracy of 91.95% on the external test data. The results suggest that the model could help clinicians diagnose patients with the six tumour types studied. However, the researchers note several limitations to their model, including misclassification of tumour type and grade due to poor image contrast. These shortfalls may be caused by inconsistencies in the imaging protocols used at the five institutions.

Looking to the future, the team hopes to train the CNN further by incorporating additional tumour types and imaging modalities.

Fast quantum random number generator fits on a fingertip

Smartphones could soon come equipped with a quantum-powered source of random numbers after researchers in China developed a quantum random number generator (QRNG) chip small enough to sit comfortably on a fingertip. What’s more, the new integrated photonic chip generates random numbers at a rate of 18.8 gigabits per second – a record-high rate that should allow the generator to keep pace with the ever-increasing speed of Internet communications.

Random numbers are useful in cryptography and computer simulations, among other applications. For example, cryptography needs a source of true random numbers that a sophisticated adversary or eavesdropper cannot predict or manipulate. Similarly, true randomness ensures that computer simulation techniques, like the ones used for predicting weather or modelling protein molecules, produce accurate results.

For most purposes, wherever a high degree of randomness is not necessary, pseudo-random numbers suffice. These numbers appear random, but they are actually part of a sequence generated by a formula using a so-called seed number. This means that if hackers learn the seed number, they can predict the entire sequence of numbers, thus eliminating any randomness.  A security protocol based on pseudo-random numbers is therefore weak since hackers might be able to guess the keys used for encryption.

On the other hand, true random numbers – ones that cannot be guessed or anticipated – are very hard to generate because they need a truly unpredictable origin. In their quest for true randomness, researchers have even turned to measuring cosmic radiation and observing patterns in lava lamps.

Turning quantum noise into usable randomness

Thankfully, there is also a more accessible source of true randomness: quantum superpositions. In quantum mechanics, a wavefunction can be in a superposition of many states, and performing a measurement randomly collapses the wavefunction into one of those states. An electron, for example, can be in an equal superposition of “spin up” and “spin down” states, and measuring the electron gives one of the two spin states with 50% probability. It is almost like a coin toss, except that a coin toss is not really random, since you could predict whether the coin lands heads or tails if you accurately accounted for all the forces on the coin (the flick of the finger, wind movements, and so on). The measurement outcome of an electron in a superposition, by contrast, is simply unpredictable.

The QRNG developed by researchers at the University of Science and Technology of China in Hefei and Zhejiang University in Hangzhou uses a setup in which a reference laser beam is split into two and the intensity of the outgoing beams is measured using ultra-fast indium-gallium-arsenide photodetectors. The difference in the two intensities is affected by fluctuations in the so-called vacuum state, a quantum state that contains zero photons but still has some residual energy. When you try to measure the properties of this vacuum state, such as the magnitude of its electric field, Heisenberg’s uncertainty principle guarantees that the results will, in theory, be random numbers picked from a normal distribution.

In practice, however, classical noise creeps in, giving the measurements unwanted bias and correlations that could help a hacker guess the generated numbers. To avoid this, the researchers used classical algorithms to remove unwanted correlations during post-processing, leaving only the true random numbers behind. These numbers were then transmitted to a personal computer where they passed a suite of tests showing that they are indeed random.
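
As a rough illustration of that measure-then-post-process pipeline, the toy sketch below (in Python) digitizes simulated Gaussian “vacuum-noise” samples and applies a simple von Neumann debiasing step. The debiasing trick is classic but is used here purely for illustration; it is not the extractor used by the Hefei–Hangzhou team.

```python
# Toy sketch of the idea described above. The Gaussian samples stand in for homodyne
# measurements of the vacuum state; von Neumann debiasing is a simple, classic way to
# remove bias and is NOT the post-processing used in the actual experiment.
import numpy as np

rng = np.random.default_rng()                             # classical stand-in for quantum noise
samples = rng.normal(loc=0.02, scale=1.0, size=100_000)   # small offset mimics classical bias

raw_bits = (samples > 0).astype(np.uint8)                 # crude one-bit digitization

def von_neumann(bits):
    """Keep one output bit per unequal pair: (0, 1) -> 0 and (1, 0) -> 1; discard equal pairs."""
    pairs = bits[: len(bits) // 2 * 2].reshape(-1, 2)
    keep = pairs[:, 0] != pairs[:, 1]
    return pairs[keep, 0]

clean_bits = von_neumann(raw_bits)
print(f"{len(clean_bits)} debiased bits, mean = {clean_bits.mean():.4f}")   # close to 0.5
```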

Random things in small packages

The security of small electronic gadgets has become more critical than ever due to the rise of the “Internet of Things”, in which devices such as home appliances are connected to the Internet. QRNGs can protect such devices by supplementing conventional cryptography with truly random numbers. But to be useful, generators need to be fast enough to keep up with Wi-Fi or broadband Internet communications. For this, a fast source of randomness is not enough on its own: the supporting photonic and electronic components used in post-processing and communication need to be just as quick. In addition, it should be possible to embed the generator in small devices.

According to Jun Zhang, a physicist at Hefei and a co-author of the research published in Applied Physics Letters, the fabrication process developed by the Hefei–Hangzhou team solves this miniaturization challenge. Not only did the researchers achieve a record-high speed with their generator, they also managed to squeeze most of the critical components into an area of 15 mm², which is roughly one tenth the size of a micro-SIM card. While the current prototype chip still needs an external laser source and amplifiers, making the total package significantly larger, Zhang says the team intends to develop a low-cost version, albeit with a slower generation rate, for commercial use. If successful, such a chip would make true random numbers affordable enough for everyday laptops and smartphones.

Proton cooled using an ion cloud and superconducting circuit

The ability to trap charged particles and cool them down to a fraction of a degree above absolute zero is key to many tests of fundamental physics, including probes of any asymmetry between matter and antimatter. To reach lower temperatures more quickly, physicists in Germany and Japan have now shown how to extract heat from a single proton via a superconducting circuit connected to a cloud of laser-cooled ions several centimetres away – a technique, they say, that could easily be applied to antiprotons.

The Standard Model of particle physics tells us that all physical processes must obey CPT symmetry, leaving them unchanged when charge, parity and time are all reversed. Violation of this symmetry would require an overhaul of the Standard Model and might also explain why the universe appears to be made up almost entirely of matter even though equal quantities of matter and antimatter ought to have been created in the Big Bang.

The Baryon Antibaryon Symmetry Experiment (BASE) is one of several experiments at CERN in Switzerland that study antimatter using antiprotons delivered by the lab’s Antiproton Decelerator. Its specific goal is to look for evidence of CPT violation by comparing the magnetic moments of protons and antiprotons, having so far established that the two quantities are equal at the level of 1.5 parts in a billion.

Sympathetic cooling

In the latest work, members of the BASE collaboration have demonstrated a new technique for cooling protons and antiprotons that should allow them to make even more precise comparisons of the particles’ magnetic moments. The technique involves the laser cooling of beryllium ions, a process that relies on light absorption to chill ions down to just a few thousandths of a degree above absolute zero. This cooling cannot be applied to protons and antiprotons directly as these subatomic particles lack electronic structure. Instead, the researchers exploit what is known as sympathetic cooling to chill protons using the cold beryllium ions.

The idea is to cool one charged particle by bringing it into thermal contact with another charged particle at a lower temperature. This can be done by holding a positively charged ion – which is being progressively cooled by lasers – and a proton in the same trap using electric and magnetic fields. The two particles interact through their mutual Coulomb repulsion, and in the process the proton transfers heat to the ion and is thereby cooled.

Since the ions used in laser cooling are usually formed by stripping otherwise neutral atoms of their electrons, and are therefore positively charged, the technique cannot be used to cool negatively charged antiprotons in a single trap. The novelty of the new research is to show how a subatomic particle can be sympathetically cooled even though it is in a different trap to the ions.

Superconducting LC circuit

BASE, led by Stefan Ulmer of RIKEN in Japan, demonstrated its approach in a laboratory at the University of Mainz in Germany – using protons rather than antiprotons for the moment. The experiment consists of two Penning ion traps some 9 cm apart, both of which are connected to a cryogenic inductor–capacitor (LC) circuit. A single proton is stored in one of the traps and a cloud of laser-cooled beryllium ions in the other. The resonant frequency of the superconducting LC circuit is set close to the axial frequencies of the traps, which makes it possible to transfer energy via the circuit from the proton to the ions.
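
The coupling relies on a simple resonance condition: energy can only flow through the circuit when its natural frequency matches the particles’ axial oscillation frequencies. In textbook form (the symbols are generic, not the experiment’s actual values):

```latex
% Resonant frequency of the superconducting inductor-capacitor circuit (textbook relation)
\omega_{LC} = \frac{1}{\sqrt{LC}}
% Heat flows from the proton to the beryllium ions only when
% \omega_{LC} \approx \omega_z^{(p)} \approx \omega_z^{(\mathrm{Be}^+)},
% i.e. when the circuit resonance matches the axial frequencies in both traps.
```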

As they report in a paper in Nature, Ulmer and colleagues used two experimental signals to show that the ions really did cool the proton. For one thing, they were able to work out the temperature of the two particle systems by analysing electrical noise present in the LC circuit. At the same time, they varied the resonance frequency of the ions while keeping that of the proton fixed. As expected, they observed cooling only when the ion frequency matched the frequencies of both the proton and the circuit.

Seconds, not hours

The team found that they could reduce the proton’s temperature by 85%, cooling it resistively to 17 K and then down to about 2.5 K. Ultimately, by improving coupling times and trap geometry in future experiments, they hope to achieve temperatures of a few tens of millikelvin within just a few seconds. That compares favourably with the roughly 10 h it previously took them to cool antiprotons to about 100 mK, which they did by coupling the particles to a cooled superconducting resonator and then painstakingly siphoning off those with the lowest energies.

These lower temperatures, says Ulmer, should allow them to probe matter–antimatter symmetry “at much improved precision”. As to how much more precisely, he would not say. Nor would he be drawn on the precision that might realistically be needed to see potential CPT symmetry violation. That, he says, is a “purely speculative” question.

Writing a “News and views” article in Nature to accompany the research, Manas Mukherjee of the National University of Singapore says that the new technique raises the prospect of being able to cool any charged particle – at arbitrarily large distances – by “wiring it up” to laser-cooled ions. He adds that the scheme might also provide a better way of exchanging single bits of quantum information than using quantum states of emitted light. But he cautions that the rate of energy exchange between particle systems will first need to “greatly improve”.

Researchers and publishers respond to new UK open-access policy

The largest funding body in the UK has announced a new open-access policy that will come into effect on 1 April 2022. UK Research and Innovation (UKRI) – the umbrella group for the UK’s seven research councils – will from that date mandate that all papers describing research funded by UKRI must be free to read immediately upon publication. Yet the announcement has been met with concern by some publishers and researchers.

Open-access (OA) publishing has grown rapidly over the last two decades, especially in the UK, where 90% of articles published by researchers are now expected to be open access by the end of 2021. With an £8bn budget, UKRI is a supporter of the Europe-wide “Plan S” open-access initiative, which was unveiled in 2018 by 11 national research funding organizations, including UKRI. They say that all scientific publications resulting from research funded by them must be published in “compliant” open-access journals or on open-access platforms.

Traditionally, scholarly publishing has been free for authors, with publishers charging libraries journal subscription fees. The new UKRI policy mirrors Plan S’s aims, stating that it will support researchers to publish either in “gold” open-access journals or in “hybrid” journals, in which “transformative agreements” with publishers allow universities to pay lump-sum fees that cover both subscription costs and open-access publishing.

If no such agreement exists, UKRI will not pay the article-processing charge (APC) that is required to make the paper open access. Authors could publish in a traditional, non-open-access subscription journal, but the author must then self-archive the accepted manuscript in an open repository under no embargo. Previous UKRI policy had permitted a six- or 12-month embargo period before the paper could be submitted to such a repository.

IOP Publishing, which publishes Physics World, broadly welcomes the UKRI’s new open-access policy, which it says “aligns with our mission to expand physics globally”. However, it thinks that the requirement for researchers to deposit the final version of a manuscript in a repository under no embargo will be “harmful to the significant OA progress already made”. “This approach cannot form the basis for an economically viable publishing model for physics journals seeking to maintain the highest standards of peer review and publication.”

That view is echoed by the International Association of Scientific, Technical, and Medical Publishers (STM), which says it is “deeply concerned” that the UKRI policy gives equivalent status to the “subscription-tied accepted manuscript and the full OA publication of the version of record”. This, the STM says, could “jeopardize the continued progress of the open-access publishing transition by enabling an entirely unsustainable route”. The STM urges the UKRI board to “carefully consider these issues”.

John Harnad, director of the mathematical physics lab at the Centre de recherches mathématiques in Montreal, says that although the UKRI policy is well-intentioned, it fails to “provide a coherent strategy”, in particular when providing “any incentive or encouragement” for green open access. “UK scientific publishing does not operate in isolation, but as part of an international mix of authors, readers and subscribers,” he says. “Such a strategy seems solely to be a reaction to the continued rises in both subscription charges and article-processing charges. But in fact, it would contribute nothing to counteracting the rising charges. And, given the international mix, it would not have sufficient impact to convince publishers of major journals to abandon their current policies and be ‘reborn’ as 100% OA journals.”

Meanwhile, the American Astronomical Society announced last week that from January 2022 all its journals will switch to being fully open access. Since 2017, the AAS journals, which are published by IOP Publishing, have provided a hybrid open access option, allowing authors the choice to publish their articles traditionally or open access. “We’ve seen that articles published open access in our journals are on average more widely cited than those that are paywalled,” says AAS chief publishing officer Julie Steffen. “The transition of all our journals from hybrid to fully open access in January will provide this same wide audience access to the entire cosmos.”

Q&A with IOP Publishing chief executive Antonia Seymour

How will the no-embargo aspect of UK Research and Innovation’s new policy affect the move to OA?

Publication of articles costs money. Before any articles are even accepted for publication, journal editorial teams engage with their research communities to develop an appropriate scientific scope and direction for a journal, and to maintain an active, engaged and informed editorial board and network of peer reviewers. The editorial teams coordinate efficient and rigorous peer review in accordance with the latest standards for publication ethics and research integrity and use ever-evolving online editorial systems. These activities require trained, professional staff supported by increasingly complex technologies, as well as the necessary management, legal, financial and administrative overheads. It is therefore not economically viable for high-quality physics journals to provide entirely free and unrestricted distribution of accepted manuscripts, without any alternative and sustainable means of funding peer review and publication costs. Given the demonstrable progress towards full open access in the UK, I, along with many others, have questioned the need to introduce this controversial component of the policy, given that it will undermine both subscription and open-access business models. Why pay to subscribe to an article or to make it open access when a freely available substitute already exists?

What impact could the new policy have on researchers themselves – will they, for example, have sufficient funds to pay for it?

There’s a high risk of author confusion in trying to navigate the requirements imposed upon them by their funder and squaring those with what publishers offer. UKRI’s policy excludes the publication of funded research in journals where there is no transitional/transformative arrangement in place and where the publisher cannot support zero embargo publication of accepted manuscripts. That limits where authors who don’t have funds to pay the APC charges themselves can publish. IOP Publishing has a transitional agreement in place with JISC, but not all specialist and learned society publishers do.

What next steps will IOP Publishing be taking to respond to the new policy?

Learned societies, like the Institute of Physics, exist to ensure that physics delivers on its exceptional potential to benefit society. We recognise the important role of universal access to knowledge in achieving this goal and are therefore committed to making open access to physics research a reality. We welcome the increased policy momentum towards open publishing practices and are turning our attention to ensuring an effective implementation of the new UKRI policy – one that maximises funding for full open access publication of the “version of record” (gold), that will preserve the necessary choice physicists have in where to publish, and maintains the rigorous publication standards upon which researchers rely.

Standing on the shoulders of programmers: the power of free and open-source software

Twenty-three thousand. According to computer scientist Katie Bouman, that is how many people were involved in creating the first ever image of a black hole, taken by the Event Horizon Telescope (EHT) in 2019. Not all of these contributors are formally members of the EHT collaboration (whose numbers are in the hundreds) – the vast majority are those who write, maintain and support the free and open-source software tools that the researchers used in their work.

Bouman became the face of the EHT after a photo of her delighted grin at seeing her work in action went viral. Code that she had written was part of the imaging software pipeline that extracted that famous black-hole image from the telescope’s data. But to Bouman, her contribution was only possible courtesy of software that was shared openly. “We would be getting nowhere if we didn’t have these kinds of tools that other people in the community have built up and have made free to use,” she told Physics World from her home in California, US. “We’re very, very thankful for everything that other people have done.”

Across the Atlantic, Suchita Kulkarni, a particle physicist at the University of Graz in Austria, who specializes in phenomenology of dark matter, agrees. “The reason phenomenology works, the reason people can look at exciting things at the Large Hadron Collider,” she says, “is because we have open-source software that is freely available under creative licences.”

Free and open-source software (FOSS) allows users to inspect the code, modify it and redistribute it with few or no restrictions. The “free” in FOSS thus refers to these freedoms, not to monetary cost. This makes FOSS particularly powerful in research, enabling collaboration between scientists working on code that they have modified, and today it is seen as an integral part of the wider open-science movement.

Code contributor

The term “software” can refer either to general user-facing tools and applications or to programs and analysis code used for specific tasks. Seen through this prism, there are at least four kinds of people who contribute to open-source software, with each group having its own motivations and reasons for doing so. First, you have the authors and maintainers – both paid and volunteer – of general-purpose software tools, including programming languages such as Python, Julia and R. Then there are those who write specialized software for certain domains of research, such as the Astropy library for astrophysics, or ROOT and Pythia for particle physics.

Third, you have scientists who write analysis code for their research that they then share openly, enabling others to learn from their work and apply the same code to different analyses. And finally, you have hobbyists, who contribute to open-source code on a one-off or regular basis, to develop their skills or to help maintainers of software they use. The boundaries between these are often fluid, and many maintainers of important software tools started out as hobbyist contributors.

Switzerland-based particle physicist turned software engineer Tim Head began using the open-source Python language as a physics student. According to him, a large part of the appeal of open-source tools is that they are usually distributed at no cost to the user. “The one thing everybody agrees with is that the easier it is for people to start using your software the better. With free and open-source software, you can just start using it today. You don’t have to ask your boss and then the purchasing department and then wait six months for them to negotiate a deal.”

After learning to program in Python, Head began to use a greater number of open-source tools during his PhD, including “scikit-learn”, a Python library for machine learning. Following a Twitter exchange with one of the developers of scikit-learn, he attended a workshop in Paris, aiming to make a contribution of his own. “I spent all afternoon fixing two typos in the documentation,” he laughs. But it sparked something in him.

Head got together with his fellow junior researchers on the LHCb collaboration at CERN to set up an open training programme to acquaint new members of the collaboration with some of the internal software tools used for performing physics analyses. Over time, he got involved with Mozilla Open Leadership, a mentorship programme for leading open-source projects, and started open-source projects of his own. Today, Head works as a senior engineer at Skribble, a digital signature platform. He is also a Distinguished Contributor to Project Jupyter, which provides some of the most popular tools for data analysis across domains of research.

How to make your next paper reproducible

A reproducible paper is one in which a reader or a reviewer can remake every table and figure, from your data sets, using software on their own computer or using web-based tools. If you’d like to use a Jupyter notebook to write your next paper and make the entire document reproducible, here’s how to do it:

1. First, check to make sure the programming language you’ve chosen for your analysis is among those languages supported by Jupyter.

2. Create and launch a new Jupyter notebook.

3. Enter your text in a “text cell”. You can also use LaTeX for any equations you want to include.

4. Import your data in a “code cell”. You can store the data in the same folder as your notebook, or fetch data that was uploaded to CERN’s Zenodo open-source platform.

5. Instead of preparing your tables and figures separately and placing them in your paper, include the code for each in a “code cell”. Then, “run” each code cell to generate the table or figure (a minimal example of such a cell appears after this box).

6. When your paper is ready, upload the Jupyter notebook to a code-sharing platform like GitHub. Optionally, upload the file to Zenodo.

7. Follow the instructions for Binder to allow your paper to be recreated in the cloud.

8. Share the Binder link to your reproducible paper.

mybinder.org/v2/gh/RaoOfPhysics/reproducible-paper/main?filepath=index.ipynb
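
As a concrete illustration of step 5, a single figure-producing code cell might look like the sketch below. The file name and column names are placeholders for your own data set, and only widely used open-source Python libraries are assumed:

```python
# Minimal example of a figure-producing "code cell" (step 5 above).
# "measurements.csv" and its column names are placeholders -- substitute your own data.
import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_csv("measurements.csv")             # data stored alongside the notebook
fig, ax = plt.subplots(figsize=(5, 3.5))
ax.errorbar(data["voltage"], data["current"], yerr=data["current_err"], fmt="o")
ax.set_xlabel("Voltage (V)")
ax.set_ylabel("Current (A)")
fig.savefig("figure1.pdf")                         # the same figure that appears in the paper
plt.show()
```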

Open for the sake of openness

Jupyter notebooks are the most well-known product of Project Jupyter. They are interactive computational tools enabling “literate programming”, a concept in which one records descriptive prose in a human language, together with related snippets of code that perform bite-sized tasks. You can mix your narrative – what you did and why, what steps you took that led to dead-ends – with the actual code you used, along with the tables and plots it generated. You can then share the notebook with a colleague – or indeed with the wider research community. As long as they have sufficient computing resources and the same freely available open-source libraries installed, they can reproduce your research at the click of a button or change the parameters you chose and run a modified analysis (see box, above).

While it is one thing to share your analysis code with your collaborators, making your code available publicly is a different matter. After all, many scientists who write code do so without professional-level training in the best practices of software development. Some may therefore be reluctant to put their work out there for experts to scrutinize and criticize.

That’s not just an abstract concern, as Bouman found to her cost. The EHT team released its imaging tools when it announced its observation of the black hole. After Bouman’s photo went viral, Internet trolls used this openness – which included information about individual contributions to the collaboration’s analysis – to disparage her input. Despite being a highly qualified engineer and computer scientist, Bouman found herself on the receiving end of harsh, often misogynistic comments about her coding skills. “It’s nerve-racking to put your code out there,” she says, “especially when people are inspecting it and going down deep into things that you did years and years ago.”

So what made the EHT researchers want to share their work so openly? “I think the main goals were transparency, reproducibility and having other people be able to test our methods and code,” Bouman remarks. “We wanted to be as transparent as possible with such an important result. We wanted our ideas to be improved on and we wanted people to use them in different applications.”

Kirstie Whitaker, director of the Tools, Practices and Systems research programme at the Alan Turing Institute in London, UK, repeats the refrain from the “Public Money, Public Code” campaign. “Let go of the idea that you’re giving away this knowledge for free, because actually you’re giving away knowledge that was in many cases paid for by taxpayers,” she says. “It’s not proprietary work, it’s work that should belong to the people who paid for it.”

Infrastructure as commons

The notion of sharing extends beyond software itself – indeed, it also embraces the services used to support the writing, dissemination and preservation of code-based tools. One such service is Zenodo, an open-source platform developed at CERN. “Zenodo is a platform to which anybody around the world can upload something that they consider a research object worth sharing,” explains Tim Smith, who leads the group in CERN’s IT department that is responsible for user-facing services. These objects could be a figure, a presentation, some data or even code. “The platform publishes it, issues a DOI [digital object identifier] with a guarantee to keep it forever, and makes it available for download to anybody who wants it.” With services like Zenodo, researchers can upload every version of their software libraries or analysis code, knowing that it will be available at the end of a DOI for the foreseeable future.

Another service that is gaining popularity is Binder, run by Project Jupyter. Binder allows you to take a static Jupyter notebook stored on the web, and make it interactive, without having to install any software on your computer. Everything runs in a web browser. In fact, you can even point Binder to properly configured Jupyter notebooks stored on Zenodo. Shortly after announcing the observation of gravitational waves, the LIGO team shared a Jupyter notebook containing a simplified version of their analysis. With a single click, anyone can, in principle, rerun the LIGO analysis in their browser (bit.ly/3iiqso3).

While Zenodo relies on the big-data storage facilities at CERN, Binder requires considerable cloud computing to function. Private cloud providers, such as Google Cloud Platform in the US and OVH in Europe, are the major donors of cloud services to Binder. Increasingly, however, research institutions such as the Alan Turing Institute have joined the fray, by sharing their own cloud infrastructure for Binder. “These institutes use the Binder software internally and also run a public instance of it,” Head explains. “You have to have benefactors like that. The more diverse and smaller the individual contributions are, the more resilient the project is.”

For Whitaker, who established the relationship between Turing and the Binder team, this desire to contribute internal infrastructure to external projects goes back to the idea of the commons. “If everyone plays a small part, you really can do huge and amazing things,” she says. “Where institutions like Turing come in is we just have to do it. By one person making it happen, it makes it more likely for others to do so.”

Citations as currency

Until recently, the role of code and those who write it has not been recognized within academic circles. Kulkarni, who both benefits from open-source software and contributes to it herself, is quite explicit about it. “Traditionally,” she says, “there have been very few positions in academia that have been given to software developers. It’s my opinion that we should have done a bit more to acknowledge the work of tool developers.”

Whitaker offers a simple solution to the problem of acknowledgement. “Cite the freaking software! If everybody recognized all of the shoulders that they were standing upon every time they were doing any part of their work, and they saw what has traditionally been invisible labour, we’d get there immediately.”

Citing software is a relatively new idea, and Whitaker acknowledges that authors of software must make it easy for their work to be cited. One way for software developers to do so is to upload a given version of the code to a repository like Zenodo and then display the DOI that is assigned to it. But scientists may wish to cite work that has been peer-reviewed in some form. The Journal of Open Source Software (JOSS) was established precisely to address this, and conducts a fully open and transparent peer review of research software tools.

“JOSS is a hack on the system,” says Juanjo Bazán, an astrophysicist from the Center for Energy, Environmental and Technological Research in Madrid, Spain. “The founders of the journal noticed that in the research world the role of software developers is not really covered by academia’s credit system; they are not credited as authors of papers.” Anyone who writes a software library that is used in research can send a one- or two-page submission to JOSS, which peer reviews the quality of the software, determines whether it is well written and checks any performance claims made.

The entire process from submission to publication in JOSS happens publicly on GitHub, a web platform for sharing code and collaborating on its development. Submissions that are considered legitimate software papers are not consigned to the discard pile easily. Reviewers, who are not anonymous, openly leave their feedback to the software authors, with the objective of getting the authors to incorporate the suggestions and improve the software. Bazán, who is also a research software developer, was so enamoured by JOSS after submitting a paper to it that he now volunteers as an editor for astrophysics submissions. “JOSS has become popular among research developers,” he says. “We have just published our 1000th paper.”

Selective recognition

Tim Head does not believe that citations of software alone will solve the problem. He notes, first, that scientific software libraries are themselves often built upon existing tools or lower-level libraries – it’s software turtles all the way down. This raises the question of how many levels deep one must go. But on a more practical note, he remarks that having citations for a software paper does not mean recognition – and, more importantly, career progression – will be forthcoming. “A friend of mine is a core contributor to scikit-learn,” he says. “The team wrote a paper about it and it has a very large number of citations. But when he applied for some tenure-track jobs, he was told they would ignore this paper because it’s not ‘real’.”

Smith concurs that flaws in the system persist. “We have a reward system that is based on essentially a 300-year-old process. We have to change that reward system and get it to understand and acknowledge contributions in a digital age.” Those contributions, he explains, are diverse and encompass more than just the analysis process. “Many of our universities and research institutes are already changing their processes to start acknowledging these other contributions and how it all contributes to the advancement of science. All of those contributions should be valued.”

Indeed, the diversity of contributors to open-source software may mean that people seek diverse forms of recognition. Bazán notes that for hobbyists, a simple acknowledgement can be enough. “In April,” he explains, “GitHub talked with NASA and asked the agency to list all the libraries used for the code of the Ingenuity Mars helicopter. Anyone who made a contribution to those libraries now has a small Mars helicopter badge on their GitHub profile page. Maybe that’s enough for some.”

On the other hand, some forms of selective recognition within academia have issues of their own. Take the example of two software developers of the Pythia and Herwig Monte Carlo simulators used in particle physics, who received an award this year from the European Physical Society for their contributions on the software side. Although she appreciates the value in this, Kulkarni is nevertheless critical of what she believes to be the romanticized version of physics as a lone-wolf field, because the award recognized only two people out of dozens of developers.

“You want to have a leader,” Kulkarni says. “One person who is going to advance the field. You see it when people write recommendation letters; they don’t say someone is a great team member, they say they are a great leader. But we have a contradiction between recognizing the value of team work and benefiting from the value of team work.”

Fortunately, a cultural shift is in the making. Research is moving towards greater openness and transparency. With more and more researchers like Bouman publicly acknowledging and thanking the developers of open-source software, the day may not be far off when software receives both recognition and increased institutional support for its foundational role in modern science.

Fluid dynamics study could make medical inhalers more effective

Researchers in India and Australia have simulated the delivery of drugs used to treat pulmonary illnesses. Using a replica of the respiratory system, combined with fluid dynamics simulations, a team led by Suvash Saha at the University of Technology Sydney showed how smaller drug particles tend to reach smaller bronchi in the lungs more easily. The discovery could provide valuable guidance for clinicians in improving designs of both drugs and inhalers.

With levels of air pollution increasing in many cities, particularly in lower-income countries, lung diseases are a growing concern worldwide. Currently, one of the most widely used ways to manage these diseases is the dry powder inhaler (DPI), which disperses microscopic drug particles throughout a user’s lungs as they inhale through the device.

DPIs are particularly useful because they do not require a propellant, deliver more consistent doses, and enable a more widespread deposition of drugs within the bronchi of the lungs than other treatment devices. Yet despite these advantages, fewer than 30% of the drug particles delivered by DPIs actually settle within the lungs. Such a low efficiency is driving up the costs of drug doses, making treatments less accessible to the many millions of people who could benefit from inhalers.

Computational fluid dynamics

In previous studies, researchers have shown how this efficiency can be affected by the sizes of the drug particles within the doses delivered by DPIs. To investigate this effect, Saha’s team developed a replica of the human respiratory tract, based on 3D images taken by computed tomography. By combining their model with computational fluid dynamics, they then simulated the deposition of different-sized drug particles onto different parts of the respiratory tract.

To recreate variations in human breathing, the researchers measured how depositions of the particles varied with different inhalation rates. They discovered that larger particles, as well as those inhaled at faster speeds, were more likely to be deposited in the mouth. Because these particles carry greater inertia, they are less able to change direction at the sharp turn from the mouth into the trachea – causing more of them to settle before reaching the lungs.
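
A particle’s tendency to plough straight on at a bend rather than follow the air flow is conventionally captured by its Stokes number. The relation below is the standard definition, given here for context rather than quoted from the paper:

```latex
% Stokes number: ratio of the particle's response time to the flow's time scale (standard definition)
% \rho_p: particle density, d_p: particle diameter, U: air speed, \mu: air viscosity, L: airway diameter
St = \frac{\rho_p\, d_p^{2}\, U}{18\,\mu\, L}
% Large St (big or fast particles) -> inertial impaction in the mouth and throat;
% small St -> the particle follows the flow into the finer bronchi.
```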

In contrast, finer particles could more easily disperse into the bronchi: the passages in the lungs which branch out into ever finer structures, where gas exchange occurs. In addition, the simulations revealed that more particles were deposited in the bronchi of the right lung than the left – whose shape is distorted to accommodate the heart.

Based on their results, Saha and colleagues now propose that existing treatments for pulmonary diseases could improve through the use of smaller drug particles, which could more readily access the bronchi where breathing problems arise. They hope that their discoveries will help clinicians to design more effective drugs, and better devices for administering them.

The research is described in Physics of Fluids.

Physics for better swimming and judo, solar-flare radiation risk on aircraft

The Paralympic Games in Tokyo will be wrapping up this weekend and to honour the hosts, this edition of the Red Folder is focussing on Japan.

World-class swimmers must work hard for even the smallest advantage in their sport. One physical reality that they are up against is that the resistive force pushing them back in the water is proportional to the cube of their swimming speed – which means that speeding up costs a lot of energy.

Swimmers increase their velocity by boosting their stroke frequency, but now researchers at the Faculty of Health and Sport Sciences at the University of Tsukuba have found that there is a maximum stroke frequency beyond which swimming speed for the front crawl is not increased.

Conflicting evidence

According to Tsukuba’s Hideki Takagi, this limit is “due to a change in the angle of attack of the hand that reduces its propulsive force”. The researchers also found that the balance of forces at the hand was different at different swimming speeds – suggesting that different techniques could be optimal for short- and long-distance swimming. Their study also found conflicting evidence on whether increased kicking frequency boosts speed, and the authors say there is much more work to be done to fully understand the subtleties of the front crawl.

The research is described in Sports Mechanics.

Elsewhere at the University of Tsukuba, Shinichi Yamagiwa and colleagues have analysed video of judo throws from top-flight matches to try to determine the factors that contribute to good technique. The goals of the study were to improve the understanding of the biomechanics of judo and to improve coaching and training techniques. They report their findings in Sensors.

Soon, the world’s Paralympians will be packing up their medals and flying home. But should they be worried about radiation from solar flares that they could be exposed to when flying?

Ground level enhancements

A research team led by Yosuke Yamashiki at Kyoto University has addressed that question by looking at radiation doses experienced in aeroplanes flying eight different routes during five events called “ground level enhancements” (GLEs). These are periods of increased cosmic ray intensity measured at ground level and are normally associated with solar flares.

Yamashiki and colleagues found that there were increases in the detection of solar energetic particles (SEPs) on board the aircraft. However, they found that radiation levels were not high enough to justify current countermeasures such as flying at a lower altitude where radiation levels are lower during a GLE.

“There is no denying the potentially debilitating effects of radiation exposure,” says Yamashiki, “but the data suggest that current measures may be over-compensating for the actual risks”.

The study is described in Science Advances.
