
Once a physicist: Eline van der Velden


What sparked your initial interest in physics?

I’ve always been interested in nature and why things are the way they are. My mother told me that I used to perform experiments as a child, endlessly letting a toy car come down a ramp and studying it as it crashed over and over again, never tiring of seeing the effect of gravity, for hours on end. I attended a performing arts school, ArtsEd Tring, during my A-levels, and decided to pursue physics and maths A-level alongside the musical-theatre course. Fortunately, our maths and physics teachers were both female, and very inspiring, so it never crossed my mind that they may not be “cool” or “girly” subjects to study. I thought it was completely normal until I counted the number of girls in the lecture theatre at university in first year.

You did a Master’s degree in physics at Imperial College – what was it that you studied specifically?

I always thought it was important to contribute something to the world. Considering energy is currently finite, I hoped to help in the field of nuclear fusion. I did the MSci course at Imperial, which includes a year in Europe – my project was on the statistical properties of ion flows in a toroidal plasma and I spent my third year at the École polytechnique fédérale de Lausanne (EPFL).

Did you ever consider a permanent academic career in physics?

I did, but I also had a strong desire to fulfil my performing and writing ambitions. After graduating I went on to act professionally while supporting myself financially by tutoring in maths and physics – an ideal combination, really. Alongside acting in a few TV series, I made a lot of silly comedy videos online, but after a good decade of acting and comedy, I’m being pulled back towards scientific content. There is still part of me that misses the academic side of things. The perfect balance for me is writing and creating content that includes science.

How did your interest in comedy and acting emerge?

From a young age I was writing and performing my own plays, and singing and dancing for my parents’ friends at every opportunity. Not much has changed there. I like to do both my straight comedy, like the BBC Three iPlayer series Miss Holland, as well as my online comedy-with-science series Putting It Out There. With both comedy and science, you’re always trying to educate people. And in the end education is everything. It’s the best thing to empower people. The production company I started, Particle6 Productions, aims to educate and entertain with every video.

What was it like moving from academia to writing, directing and producing TV shows?

I was living in Los Angeles when I first started writing, directing and producing. I did a lot of improv at the UCB theatre, which really helped develop my writing and creativity for making great content. However, when I moved back to the UK, it took a long time to get started. I had no contacts or friends in the TV industry, and everything is about who you know. Very few people from Imperial end up working in the media, so my alumni network didn’t help much. Over the years you make your connections and people start to know your work, but until then it’s all about knocking down doors. It’s amazing to see Alex Mahon at the helm of Channel 4 right now. It’s always great to see a physicist in a position like that in media.

What projects and shows are you working on now, both as a producer and as an actor?

I always have at least 10 projects on the go. Very few come off each year. Miss Holland took five years before it got commissioned. It’s always good to keep some projects in your back pocket for when the time is right. I’m currently working more in the science–comedy space again, and hope to make some very cool science series in the coming years, both scripted and non-scripted. My passion for science hasn’t gone away, I’m just pursuing it in a different medium.

How has your physics background been helpful in your work, if at all?

I am continuously underestimated by people, and the physics degree is like a golden ticket. Once people know I studied physics it’s like they see me as a completely different person. This is why I often encourage women to study science. I get taken seriously. It also gives me permission to make content about science, which I love.

Any advice for today’s students?

I’ve only recently delved into the philosophy of science. I wish I had done that earlier. Getting a physics degree was the best decision I ever made, and I would do it again in a heartbeat.

Fast photon source lights up quantum technologies

Researchers at the University of Sheffield in the UK have built a nanoscale chip that can emit rapid pulses of single, mostly indistinguishable photons. The research team, led by Feng Liu, exploited the physics behind the Purcell effect to design the system, helping them to reduce losses and achieve increased photon production rates.

Physicists have been keen to develop on-chip sources of single photons with indistinguishable quantum states for several applications, such as secure data transmission and photonic quantum technologies. However, previous designs have suffered from high losses of single photons, mainly due to imperfect geometries in the chips. Currently, the most advanced technologies can only efficiently create pulses containing no more than three to five photons.

To solve the issue, Liu’s team made use of the Purcell effect, which describes how the spontaneous emission rates of quantum systems can be enhanced by their surrounding environments. In their design, the necessary conditions are created by incorporating a quantum dot – a nanoscale crystal of semiconducting material – into the resonant cavity of a larger photonic crystal. When a rapid laser pulse is fired at the dot, one of its electrons becomes excited and then releases a single photon as it relaxes back to its ground state. The photons created in this process resonate inside the cavity, before being emitted in rapid succession.

Liu and colleagues coupled the quantum dot to a waveguide that funnelled the emitted photons away from the cavity, ensuring they did not interfere with the laser pulse. The technique enabled the cavity to produce one photon every 22.7 ps – around 50 times faster than would be achievable without the Purcell effect. This may not be the fastest photon production rate yet developed but, unlike previous systems, more than 90% of the photons remained indistinguishable from each other on sufficiently long timescales for 20 photons to be emitted.
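
As a rough consistency check using only the figures quoted above, the Purcell factor can be read as the ratio of the cavity-enhanced emission rate to the free-space rate, so an emission time of 22.7 ps together with a roughly 50-fold enhancement implies an unenhanced radiative lifetime of the order of a nanosecond – typical for a quantum dot:

```latex
% Back-of-the-envelope check using the figures quoted in the article
F_P = \frac{\Gamma_{\mathrm{cav}}}{\Gamma_{\mathrm{free}}}
    = \frac{\tau_{\mathrm{free}}}{\tau_{\mathrm{cav}}}
\quad\Rightarrow\quad
\tau_{\mathrm{free}} \approx 50 \times 22.7\ \mathrm{ps} \approx 1.1\ \mathrm{ns}
```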

Using the insights gathered by Liu and colleagues, chips containing rapid single-photon sources could soon be used in a variety of applications. Since a single photon cannot be intercepted or measured without the disturbance being detectable, such chips would be highly desirable for government or security organizations wishing to transmit large amounts of data confidentially. They would also be advantageous in photonic quantum technologies, with applications including boson sampling and improving the sensitivity of interferometers.

The research is described in Nature Nanotechnology.

CT-based biomarkers quantify radiation-induced lung damage

Radiation therapy of lung cancer can cause toxic side-effects in healthy lung tissue, known as radiation-induced lung damage (RILD). RILD can significantly impact patients’ quality of life after treatment, but because the prognosis for lung cancer has historically been poor, long-term RILD reporting is piecemeal, with only a few subjective and unreliable scoring systems available. Recently, however, improvements in lung cancer survival have triggered interest in more rigorous RILD assessment.

To establish a standardized methodology for RILD scoring, medical physicists and engineers from University College London (UCL) teamed up with clinical oncologists and thoracic radiologists from Guy’s and St. Thomas’s NHS Trust and Royal Brompton Hospital.

The multidisciplinary team analysed CT scans of cancer patients before and after treatment, and developed software to semi-automate quantification of 12 identified biomarkers of RILD. This is the first time that RILD has been quantified using a broad spectrum of radiological findings (Int. J. Radiat. Oncol. Biol. Phys. 10.1016).

“As we start to get long-term survival, looking at the lasting damage to the patients becomes more important,” says first author Catarina Veiga, research associate at UCL. “We wanted to develop methods to quantify this in a very objective manner.”

Spot the difference

In the first stage of the study, clinicians carefully examined CT scans of 27 non-small cell lung cancer patients who had taken part in a phase I/II clinical trial of isotoxic chemoradiation. By aligning the baseline images with those taken 12 months post-treatment, they identified common changes.

“All patients at 12 months had some type of lung damage, but the degree of change could be different,” explains Veiga. “Some just have a bit of lung volume loss, while others have extensive parenchymal damage, pleural effusion and anatomical distortions.”

Prior to this study, the main recognised marker of RILD was parenchymal change – scarring or inflammation that can be easily seen on CT imaging as “bright” regions within the lung. Here, the clinicians identified three main categories of RILD – parenchymal, pleural and anatomical.

In the pleura – the membrane between the lungs and the chest wall – inflammatory changes were observed. The clinicians also noted that as treatment scars shrank, they distorted the lung’s anatomy, reducing lung volume. From these observations, the team settled on 12 measurable biomarkers of RILD.

Automating for objectivity

At this point, the engineers stepped in to quantify the biomarkers, building a modular, semi-automated image-analysis pipeline in MATLAB. The pipeline was validated by comparing its results to the original visual observations.
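
The paper’s exact implementation is not reproduced here, but a minimal sketch of the kind of calculation such a pipeline performs – a hypothetical lung-volume-change biomarker computed from binary lung segmentations, with illustrative function and parameter names – might look like this (the team’s actual pipeline was written in MATLAB; Python is used here purely for illustration):

```python
import numpy as np

def lung_volume_ml(mask, voxel_size_mm=(1.0, 1.0, 1.0)):
    """Volume of a binary lung segmentation in millilitres.

    mask          : 3D boolean array, True inside the segmented lung
    voxel_size_mm : CT voxel spacing in mm (illustrative default)
    """
    voxel_volume_mm3 = float(np.prod(voxel_size_mm))
    return mask.sum() * voxel_volume_mm3 / 1000.0  # mm^3 -> ml

def volume_change_biomarker(baseline_mask, followup_mask, voxel_size_mm):
    """Fractional lung-volume change between baseline and follow-up scans."""
    v_baseline = lung_volume_ml(baseline_mask, voxel_size_mm)
    v_followup = lung_volume_ml(followup_mask, voxel_size_mm)
    return (v_followup - v_baseline) / v_baseline  # negative = volume loss
```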

Of the 12 biomarkers, 10 showed a significant change, and statistical analysis highlighted that biomarkers were independent of one another. This made the authors confident that they have a system that can accurately report RILD, and they plan to make it fully automated and freely available to the community in the future.

“By automating all of this, we can speedily process the results from trials and clinical practice, and have evidence of how the different treatments we are using may or may not impact the long-term outcome,” says Veiga.

Technical challenges

The scientists note that there are some technical limitations to their methodology. Imaging protocols are of key importance, with inconsistency easily introduced between scanning time points. For example, variations in a patient’s breathing during imaging could influence biomarker measurement.

“For this particular clinical trial, the majority of patients had consistent imaging, but in routine clinical practice it is likely that this will become more of a challenge,” says Veiga.

Another technical consideration that the team is trying to address is the use of manual segmentation methods. Although most stages of the pipeline are automated, images were sometimes manually segmented to calculate the biomarkers. This is a laborious process that could introduce subjectivity to the results, and so Veiga and colleagues are currently running a pilot study that fully automates segmentation.

Improving life quality for survivors

“Once we are able to fully automate the system then we want to start looking at how the damage evolves over time,” notes Veiga, who is planning to examine the three-, six- and 24-month images that they have from the same clinical trial. “We are also interested in looking at the relationship between the identified damage, radiation dose delivered and clinical outcomes.”

Ultimately, Veiga hopes that the multidisciplinary team has created a tool that can improve clinical practice and be used in future trials to find the critical radiation dosage that clears tumour cells with minimum toxicity.

Organic ferroelectrics finally stick in the memory

Inorganic ferroelectrics have promised to change the face of semiconductor electronics for almost a century, but high processing costs have so far limited development. Now, researchers at Southeast University in Nanjing, China, have paved the way for progress by fabricating the first metal-free perovskite ferroelectrics. They present a set of materials that can match the performance of inorganic ferroelectrics while offering the versatility, low cost and low toxicity inherent in organic materials.

To induce the directional switching of polarization characteristic of ferroelectricity, a material must contain a spontaneous dipole that can respond to an electric field. In other words, the centres of positive and negative charge within a crystal must not coincide. For metal-free perovskites, this should theoretically happen when a highly symmetric non-ferroelectric state is ‘frozen’ into a state with polar symmetry.

The MDABCO molecule (bottom-left) and the perovskite structure with the central “A” site highlighted. Credit: Yu-Meng You and Xiong Ren-Gen

From database to device

With this in mind, Ren-Gen Xiong and Yu-Meng You instructed their students to scour the hundreds of thousands of entries in the Cambridge Structural Database for molecules of suitable size and symmetry. Such candidates could then be incorporated into the traditionally metallic “A” site of the perovskite structure, yielding an all-organic perovskite ferroelectric.

The result of their efforts is the discovery of 23 metal-free perovskites, including MDABCO-NH4I3 (MDABCO is N-methyl-N’-diazabicyclo[2.2.2]octonium). This particular crystal displayed a spontaneous polarization of 22 microcoulombs per square centimetre, close to that of the state-of-the-art perovskite ferroelectric BaTiO3 (BTO). In addition, the crystals form readily at room temperature, avoiding the excessive heat (>1000 °C) required to make inorganic ferroelectrics. This will lower fabrication costs and open the door to more delicate applications such as flexible electronics, soft robotics and biomedical devices.

The MDABCO molecule is crucial to the large spontaneous polarization that the researchers observed. At high temperatures, excessive thermal energy leaves the MDABCO molecule in a state of free rotation within the crystal. Here, the average centres of positive and negative charge at the molecule site are the same and ferroelectricity is forbidden. However, when cooled below the phase transition temperature of 448 K, the MDABCO molecule becomes locked in place revealing a significant dipole with eight possible polarization directions.

Beyond binary

Ferroelectric random access memory (Fe-RAM) works on the principle that individual cells are set to states “0” and “1”, represented by different polarization directions of the active material. As ferroelectric crystals tend to have two polarization states, we obtain the well-known binary system. The eight possible polarization directions in MDABCO-NH4I3, then, will pique the interest of those looking to make next-generation memory devices.

“In principle, eight polarization directions could be used to make an octonary device with eight different logic states,” explains Yu-Meng You. “This is a potential strategy for increasing the density of future RAM devices.” While You expresses concern over increased architectural complexity in such a device, the potential for storing three bits – eight distinct states – in a single cell could add to the commercial prospects of this set of materials.
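
For context – simple information-theory arithmetic rather than a claim from the paper – a memory cell with m distinguishable polarization states stores log2(m) bits, so eight states correspond to three bits per cell, against one bit for a conventional binary cell:

```latex
\text{bits per cell} = \log_2 m
\qquad\Rightarrow\qquad
\log_2 8 = 3 \ \text{bits}, \quad \text{versus} \quad \log_2 2 = 1 \ \text{bit}
```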

Prospects for perovskites

But the opportunities for advancement don’t stop at memory applications. “We have demonstrated a new system of perovskites with compositional flexibility, adjustable functionalization and low toxicity. We expect the metal-free perovskite system will attract great attention in the near future.”

Full details are reported in Science.

 

Reality of ‘pristine’ cloud forest revealed

The cloud forest of Ecuador today harbours more biodiversity than almost anywhere else in the world. But it has a long record of human exploitation, as a new study has detailed.

In the mid-19th century, Western visitors described the cloud forest as a region that “has remained unpeopled by the human race” harbouring “a dense forest, impenetrable save by trails”. The study shows, however, that this was more than 30 years after human alteration of the forest resumed; the explorers weren’t seeing unspoilt forest at all.

These results could feed into the modern conservation of this cloud forest and other at-risk habitats. Historical records of ecosystems before modern damage, often used as targets for conservation efforts, may instead show habitats already altered by centuries of human impact.

The study also revealed that although indigenous people in the past carried out more deforestation than occurs there today, the forest was able to recover in only around 130 years.

Nicholas Loughlin of the Open University, UK, and colleagues from Ecuador, the Netherlands and Spain analysed pollen records preserved in lake sediment for the past 700 years to discover the plant species history of the cloud forest in unprecedented detail. The researchers distinguished four distinct plant communities over time, indicating changing human land use.

Loughlin and the team took cores from the bottom of Lake Huila, more than 2.5 km above sea level. They examined over 2 metres of sediment, providing a radiocarbon-dated record back to around the year 1300. Pollen arrived at the site of the lake from the local area and was preserved with the sediment, giving an excellent window into the species making up past communities.
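
For a sense of scale – simple arithmetic from the figures above, not a number reported by the authors – just over 2 metres of sediment spanning roughly 700 years corresponds to an average accumulation rate of about 3 mm per year:

```latex
\text{accumulation rate} \approx \frac{2\ \mathrm{m}}{700\ \mathrm{yr}} \approx 2.9\ \mathrm{mm\,yr^{-1}}
```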

The first phase of plant life recorded in the sediment contains maize, cultivated by the local Quijos people, and other plants that favour open spaces, indicating significant deforestation by humans.

The year 1588, according to the radiocarbon dating, saw an abrupt change in the pollen record, with maize disappearing and huge increases in grasses and forest plants. Agriculture had ceased and the forest had begun to encroach back. The lake sediment from this time also contains large amounts of charcoal; this point coincides with the height of a rebellion by indigenous peoples against the Spanish conquistadors, who had arrived around 1560.

Following the rebellion, the population declined catastrophically. The cloud forest continued to re-establish itself through this period and into the next. After 1718, grass pollen is almost absent from the core and the pollen record is dominated by the plants you’d expect to find in a cloud forest. This period is the one most like the forest before humans first arrived in the region, demonstrating the ecosystem’s ability to recover to a near-natural state once human impacts cease. Following 1819, the flora in the sediment changes again, with less forest pollen, more grasses and the appearance of fungi that live on dung; this marks the beginning of cattle farming and the resumption of deforestation.

Loughlin and colleagues published their findings in Nature Ecology & Evolution.

A quantum leap for industry

In the July edition of Physics World Stories, Andrew Glester looks at the latest developments in technologies based on quantum mechanics. While quantum computing often steals the headlines, there is a whole world of other quantum-based devices in the pipeline for a range of applications.

Glester speaks first with Raphael Clifford and Ashley Montanaro at the University of Bristol about quantum computing. They are interested in the prospects of achieving “quantum supremacy” – the point at which quantum computers can outperform classical computers at specific tasks.

Next, Glester hands the reins over to Physics World’s Margaret Harris, who recently attended the 2018 Photonics West conference in San Francisco. At that event, Harris caught up with Anke Lohmann, the director of ESP Central Ltd, which supports the transfer of technology from academic settings to the marketplace. Lohmann gives her opinion on the quantum innovations likely to have the most significant impact in the coming years; among them is quantum key distribution for secure communication.

Finally, Glester heads to the University of Birmingham, the site of one of the UK Quantum Technology Hubs. He is given a tour of the lab by Kai Bongs, who explains how the goal is to transform scientific concepts into practical applications that are economically viable. The focus at the Birmingham hub is on developing sensors and metrology techniques. Targeted applications include gravity mapping beneath the Earth’s surface and highly precise optical clocks.

Co-ordinated action required to boost ‘open-science’ initiatives

Making research papers, data and methodologies freely accessible for anyone to read and use has gained significant ground in recent years, but several challenges to widespread implementation remain. That’s the conclusion of a new report into “open science” by the National Academies of Sciences, Engineering, and Medicine (NASEM), which calls on universities and publishers to develop new processes to improve how science can be freely accessed.

Open science aims to make research papers and data, as well as methodologies such as code or algorithms, freely available. NASEM’s report, Open Science by Design: Realizing a Vision for 21st Century Research, identifies several recent initiatives that have advanced open science – such as citizen-science projects – and highlights the increase in research funded by organizations that require the outputs to be open for everyone to read and use.


While the 190-page report notes that the use of open-access publishing and open data is the “norm” in some areas of physics, notably high-energy physics and astronomy, other areas lag behind. The report points out several issues stopping open science from becoming more widespread, such as researchers refusing to share their data, as well as the high number of journals that are only available via subscription.

A critical point

To overcome such challenges, the report calls on universities to develop training programmes that focus on open science for their researchers. It also recommends that professional societies change their journal publication strategies from a subscription-based model to approaches that support open science.

“We are at a critical point where new information technology tools and services hold the potential to revolutionize scientific practice,” says Alexa McCray of Harvard Medical School, who chaired the 10-strong committee that produced the report. “Automated search and analysis of open publications and data can make the research process more efficient and more effective, working best within an open science ecosystem that spans the institutional, national, and disciplinary boundaries.”

The report has already gained some support, notably from the Texas Republican Lamar Smith, who heads the US House of Representatives science, space, and technology committee. He sees the report as confirming a rule that, if implemented, would require the Environmental Protection Agency to base its decisions only on information that is publicly available to scientists and the general public.

Yet critics of the move argue that such a rule prevents the use of confidential or proprietary information – particularly in medical research – in scientific decisions. “Environmental regulations, which are ultimately funded by taxpayers, should be based on open and replicable studies,” Smith noted in a statement.

Build it and they will have fun

Educational physics project

Bold, bright and easy to decipher, Build It! 25 Creative STEM Projects for Budding Engineers by award-winning educator, engineer and author Caroline Alliston is the perfect book for any burgeoning engineer, as well as their teachers and parents. The glossy book features 25 STEM projects for children, varying from constructing a marble maze to building a clock. The book’s main aims – to teach children to think scientifically and creatively, while also showing them the applications of science in the real world – are well achieved, thanks to the clear directions and explanations provided throughout. Alliston provides a list of tools and materials that most of the projects require, as well as a detailed section on how to put together a circuit board for some of the projects. While most of the materials should be pretty straightforward to source, the claim that they are “easy-to-find objects from around the home” is an overstatement – I don’t know about you, but I don’t have a spare toggle switch or 13 V motor lying around. Despite this, Alliston does provide a handy list of websites from which you can purchase the necessary electrical parts.

Putting aside this small grievance, Build It! is a great project guide. The projects cover the spectrum of physical forces and concepts including light, air, water and electricity, as you build models that fly, zip around the floor and light up. Each project has a specified difficulty level from one (the easiest) to three (the hardest), and includes a “How it works” box that explains the basic principles behind the build.

Another good feature of the book is that the projects vary a lot in terms of complexity, time required to make them and the skills necessary – not to mention the final product. For example, “the glider” (a more solid version of a paper plane) would be a quick project as it requires only printing out a template and pasting together sections from the polystyrene discs that come with supermarket pizza. The “motorized buggy” on the other hand, if executed perfectly, will give you a driverless vehicle that you can programme to move, turn and even park. With the detailed cut-outs and circuitry necessary, you could easily spend a few days working on this. For the same reasons, the book has projects that could be done with very young children as well as teenagers; just pick the right project. All in all, Build It! is an excellent companion for parents and teachers looking for fun and engrossing projects to keep young hands and minds busy.

  • QED Publishing, Quarto Kids £10.99hb 120pp

Jovian giant

Jupiter

Ask anyone what their favourite planet in our solar system is, and it’s usually a 50:50 split between Saturn and Jupiter (all jokes about Uranus notwithstanding). But there seems to be something particularly captivating about the biggest planet in our system, with its swirling surface, Great Red Spot and battalion of satellites (67 to be precise). Having fascinated humans for millennia, it’s no surprise, then, that new books on the fifth planet from the Sun keep popping up, despite the many tomes already available on the subject.

Jupiter by William Sheehan and Thomas Hockey is the latest book in the Kosmos series by publisher Reaktion Books. Sheehan is a psychiatrist by training, but has long been an amateur astronomer and historian, and has written many books on the subject. Hockey is professor of astronomy at the University of Northern Iowa. Although A5 in size, this book is a glossy coffee-table title, packed with more than 100 images and illustrations. The opening chapters do a good job of tackling the birth of the solar system and all the Jovian planets, describing how they formed, before delving into Jupiter itself, layer by layer, from atmosphere to core. The book does contain a substantial amount of historical background, both observational and theoretical, but this is interspersed throughout the text rather than being clumped together at the start, which might otherwise have slowed down readers.

Apart from discussing very early observations of the planet, the book covers all the many missions and probes that have visited Jupiter from Pioneer onwards, slowly peeling back the layers and mysteries of our favourite gas giant. Sheehan and Hockey’s language is clear and mostly free of jargon, if occasionally effusive, and the book is well paced, if a bit clogged with facts and figures. The final chapter, “Juno to Jupiter”, is particularly interesting as it details some of our most recent discoveries thanks to the NASA mission, ending the book on a good note. While not revelatory, Jupiter is a useful and practical planetary-science primer.

  • Reaktion Books £25hb 192pp

Towards renewable life-support systems in space

The atmosphere on Mars doesn’t support human life, so if humanity is going to travel to – and survive on – the red planet, we’ll need to pack our space rockets with everything needed to support life. That will be a heavy load, so it’s no surprise that space agencies are looking to develop life-support technologies and renewable fuel production that are lightweight and regenerative.

This makes photoelectrochemical cells, which use solar energy to split water, producing oxygen while simultaneously releasing hydrogen for fuel, an attractive option. Now, solar-fuel researchers in California and Germany have demonstrated a semiconductor half-cell that, unlike conventional designs, efficiently releases hydrogen in microgravity conditions.

“We wanted to figure out if we can actually do photochemistry in microgravity and produce fuels,” says first author Katharina Brinkert from the California Institute of Technology. But testing and designing a cell that’s capable of working under zero-gravity conditions proved quite the challenge.

Dropping into microgravity

Experiments in zero-gravity aren’t easily conducted here on Earth. But at the Center of Applied Space Technology and Microgravity (ZARM) in Bremen, Germany, there is a specially designed drop tower that enables short experiments under weightless conditions.

Photo of the ZARM drop tower

In the experiments at ZARM, all the equipment must be installed into a single capsule that is launched up the 120 m tower by a pneumatic piston, reaching 168 km/h. The capsule is in free flight – and hence weightless – on the way up as well as on the way back down, giving 9.3 seconds in which the experiments must be conducted. “We weren’t sure the experiment would work because we only had 9.3 seconds of microgravity to perform difficult electrochemistry,” says Brinkert.
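
Those two figures are mutually consistent: treating the catapulted capsule as a projectile in free flight inside the evacuated tower, the weightless time is roughly twice the launch speed divided by g (a simple estimate that ignores any residual drag):

```latex
v = 168\ \mathrm{km/h} \approx 46.7\ \mathrm{m/s},
\qquad
t_{\mathrm{weightless}} \approx \frac{2v}{g}
  = \frac{2 \times 46.7\ \mathrm{m/s}}{9.81\ \mathrm{m/s^2}} \approx 9.5\ \mathrm{s}
```

which is close to the 9.3 s quoted by the ZARM team.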

Brinkert says that the research team relied on the expertise of the scientists at ZARM to set up and automate the experiment. And to make it easier, the team chose to test the simplest half of the photoelectrochemical cell, where hydrogen is released at the photocathode.

The researchers already knew that the efficiency of the electrodes in traditional solar-fuel cells falls in microgravity. “In microgravity conditions you have an absence of buoyancy and that makes the gas bubbles produced stick to the electrode surface,” says Brinkert. These hydrogen bubbles gather together to coat electrodes in a “froth layer” that increases resistance and reduces current and voltage through the electrode.

As expected, the team’s traditionally designed indium-phosphide photocathodes with flat rhodium deposits experienced up to a 70% reduction in circuit voltage, with froth formation evident in the camera footage. To prevent the formation of the froth layer, the California-based solar-fuel scientists sought help from Michael Giersig at the Freie Universität Berlin, an expert in creating nanostructured surfaces that can alter the properties of component materials. While nanostructuring catalysts is nothing new in the solar-fuel community, Giersig specializes in shadow nanosphere lithography (SNL), which had not previously been used with solar-fuel cells in microgravity.

The nanostructured fuel cells were composed of the same indium phosphide, but the rhodium was deposited using SNL. Employing latex spheres as a type of template, the scientists were able to form the rhodium into 3D hexagonal nanostructures.

Tests in the drop tower revealed that the nanostructured cells performed much better than the traditional cells in microgravity, with only a 25% drop in voltage. Experiments under terrestrial conditions yielded the same efficiencies for both catalyst designs, confirming that the difference in voltage generated in microgravity was related to surface topology.

Images of bubble formation in microgravity combined with theoretical analyses helped the researchers to understand how nanostructuring improved performance. “Our theory is that the bubbles are produced along the tips (of the 3D structures) and that these tips are so small that this limits the gas bubble growth,” says Brinkert.

Completing the cell

The researchers’ findings suggest a new design principle that could help realize simple and lightweight life-support systems for future space travel. Shaowei Chen, a professor of electrochemistry at the University of California, Santa Cruz, thinks the technique could also improve the performance of terrestrial water-splitting technologies.

Brinkert and colleagues are now keen to advance their studies to look at the other half of the cell, which releases oxygen for life support. “We’d like to have two half-cells working together, splitting water at the photoanode and feeding electrons to the photocathode to reduce the species you want to produce for fuel,” she explains. “Ultimately we’d like to take such a device up to the International Space Station and do experiments there.”

The research is described in Nature Communications.
