
Diamond films cool down electronics precisely where needed

A new technique for directly growing diamond layers in selected areas on technologically relevant substrates could help remove heat precisely where it is needed in electronic devices, improving their performance. The scalable technique, which relies on microwave plasma chemical vapour deposition, can create diamond patterns on silicon and gallium nitride across length scales ranging from microns to full 2-inch wafers.

Unwanted heat is a major problem in electronics, and the issue only gets worse as devices become smaller. Synthetic polycrystalline diamond could come into its own here, thanks to the material’s high thermal conductivity, which allows it to efficiently dissipate heat. The problem, however, is that diamond is very hard and chemically resistant. This makes it difficult to shape using the conventional “top-down” techniques employed to carve fully-grown diamond layers to the sizes required.

In the new work, a team of researchers led by materials scientists Xiang Zhang and Pulickel Ajayan and electrical and computer engineer Yuji Zhao of Rice University in the US turned to a bottom-up approach in which they build up diamond layer-by-layer using a plasma chemical vapour deposition technique. Their process, which is detailed in Applied Physics Letters, involves using microwave energy to ionize methane gas (CH4) so that it breaks down into its constituent carbon and hydrogen atoms. The carbon atoms then settle onto the substrate and assemble via a process that begins with nucleation. “Here, individual carbon atoms act as ‘seeds’ that other carbon atoms can latch on to,” explains Zhao.

Under these conditions, the researchers are able to control the thickness of the diamond by varying the growth time.

Controlling the seed location

To control the precise location of the carbon seeds, the team employed two techniques. The first was photolithography – a routine method in microelectronics that involves passing a light beam through a transmission mask to project an image of the mask’s light-absorption pattern onto a (usually silicon) wafer. The wafer itself is covered with a photosensitive polymer called a resist. Changing the intensity of the light leads to different exposure levels in the resist-covered material, making it possible to create small, finely detailed structures.

The approach, explains Zhao, is akin to using light to create a precise stencil, with the resulting structure acting as a mould for the diamond seeds. “Once the substrate wafers have been prepped, we spread a liquid containing nanodiamonds over their surface. These tiny specks then act as the starters for the diamond growth.”

The particle size of the nanodiamond seeds was 5–10 nm, which ensured a high nucleation density (estimated to be around 10¹¹–10¹² cm⁻²) for subsequent diamond growth, Zhao adds. High-magnification scanning electron microscopy revealed that the diamond films consisted of densely packed grains that were smaller than a micron and that the patterned diamond films were around 2.5–3.5 µm thick. Raman spectroscopy confirmed that a diamond film had formed across the entire patterned region and that it was highly crystalline.
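To get a feel for what such a seeding density means, a quick back-of-envelope estimate shows the nanodiamond seeds sit only tens of nanometres apart, consistent with sub-micron grains coalescing into a continuous film. The short Python sketch below is an illustrative estimate based on the quoted figures, not a calculation from the paper.

```python
# Back-of-envelope estimate (illustrative only): for a nucleation density n
# (seeds per cm^2), the mean seed-to-seed spacing is roughly 1/sqrt(n).
for n_per_cm2 in (1e11, 1e12):
    spacing_cm = 1.0 / n_per_cm2 ** 0.5
    print(f"n = {n_per_cm2:.0e} cm^-2  ->  mean spacing ~ {spacing_cm * 1e7:.0f} nm")
# n = 1e+11 cm^-2  ->  mean spacing ~ 32 nm
# n = 1e+12 cm^-2  ->  mean spacing ~ 10 nm
```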

To prove how versatile this approach was, the team decided to selectively fabricate complex geometries – for example, a diamond structure in the shape of an owl, which is the mascot of Rice University – on a gallium nitride substrate.

A different technique for larger wafers

This technique worked well for small-area patterns, but for larger wafers, a different approach was required, explains Zhao. Instead of conventional photoresist lithography, the team laminated a commercially available lapping film onto a silicon wafer that served as a removable masking layer. A standard laser cutter was then used to define the boundaries of the desired pattern by selectively cutting through the film.

Next, the engraved regions were peeled off, exposing the underlying substrate only in the predefined areas. “We then carried out nanodiamond seeding by spin-coating a nanodiamond suspension over the entire wafer,” says Zhao. “After solvent evaporation, we mechanically lifted off the remaining lapping film, removing the nanodiamond seeds from the masked regions to leave a patterned seed layer on the exposed substrate that diamond can then grow on.”

This approach allowed the researchers to scale up to a full 2-inch wafer.

“The key result is that we can grow diamond on selected, predefined areas on technologically relevant substrates,” Zhao tells Physics World. “This will allow diamond – the best bulk thermal conductor known – to be placed precisely where heat removal is needed in a device, making practical integration much more feasible. Indeed, we showed that our films, when employed as heat spreaders on a silicon substrate, can reduce the operating temperature by more than 23 °C compared to bare silicon.”

The team also discovered that smaller diamond islands were better at dissipating heat than a continuous diamond coating. “We found that the 50-micron diamond patterns achieved the most effective cooling because of their higher perimeter-to-area ratio,” Zhang explains. “These geometric features increase the density of the edge regions and help the heat dissipate more efficiently in three dimensions down into the silicon substrate.”
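The geometric argument can be made concrete with a simple square-island toy model: for an island of side L, the perimeter-to-area ratio is 4/L, so smaller islands expose proportionally more edge. The comparison sizes in the Python sketch below (other than 50 µm) are assumptions chosen for illustration, not values from the study.

```python
# Toy geometry: for a square diamond island of side L, perimeter/area = 4/L,
# so smaller islands have a higher edge density for three-dimensional heat
# spreading into the substrate.  Sizes other than 50 um are assumed.
for side_um in (50, 200, 1000):
    ratio = 4.0 / side_um          # per micron
    print(f"L = {side_um:4d} um  ->  P/A = {ratio:.3f} um^-1")
# L =   50 um  ->  P/A = 0.080 um^-1
# L =  200 um  ->  P/A = 0.020 um^-1
# L = 1000 um  ->  P/A = 0.004 um^-1
```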

Thermal management is now a universal challenge – and is needed everywhere from AI GPUs and advanced logic (for example, FinFET technologies) to power electronics and photonics, Zhang adds. “As the global demand for AI accelerates, the associated power consumption and heat generation are becoming critical limits. Selective diamond integration offers a pathway to more efficient heat spreading across a broad range of technologies.”

Looking ahead, the researchers say they will now be working on direct device-level integration and making quantitative thermal measurements. They will also further optimize the material quality and interface engineering.

Superconductivity’s new contender

Researchers have experimentally observed a new kind of particle in transition‑metal dichalcogenide bilayers called doubly charged excitons, or quaternions. A single exciton is an electron bound to a hole, and combining an even number of fermions can create a boson with integer spin. In this system, one electron and three holes (or one hole and three electrons) bind together into a stable, doubly charged bosonic complex. Because bosons can occupy the same quantum state, these quaternions could in principle form a Bose-Einstein condensate, a collective phase in which all particles share a single macroscopic wavefunction. For charged bosons, such a condensate could carry electrical current with zero resistance, opening a pathway to a new kind of superconductivity.

The researchers confirmed the existence of quaternions through two key measurements. By continuously tuning the electron and hole densities, they observed the expected population behaviour of the bound state, and by applying magnetic fields, they identified the complex as a spin‑triplet. These signatures match theoretical predictions for a doubly charged exciton.

Unlike exciton or polariton condensates, a quaternion condensate is not expected to emit coherent light, and the experiments indeed show no signs of spectral narrowing or other coherence effects. Achieving condensation will require overcoming practical challenges, including heating from the optical pump and nonradiative Auger recombination at high densities, both of which raise the critical density for condensation. Better cooling and possible lateral confinement could help reach the required regime.

Although true Bose-Einstein condensation is not possible in an infinite two‑dimensional system, finite 2D systems can still undergo a transition that is effectively indistinguishable from condensation if the coherence length exceeds the system size. This makes it reasonable to search for superfluidity, and potentially superconductivity, in this platform. The strong long‑range Coulomb repulsion between quaternions also raises the possibility of entirely different quantum phases, such as a bosonic Wigner crystal or even a supersolid.

The establishment of these doubly charged exciton complexes in screened transition‑metal dichalcogenide bilayers opens a promising new direction in quantum materials research, with the real prospect of discovering a non‑BCS form of superconductivity (one that does not rely on the conventional Cooper‑pair mechanism) and other exotic states of matter.

Read the full article

Light-induced electron pairing in a bilayer structure

Qiaochu Wan et al 2026 Rep. Prog. Phys. 89 018003

Do you want to learn more about this topic?

Bose–Einstein condensation and indirect excitons: a review by Monique Combescot, Roland Combescot and François Dubin (2017)

A single theory for complicated quantum systems

Open quantum systems appear in quantum computers, quantum magnets and spintronics, but their behaviour is extremely difficult to model. The environment introduces memory effects (non‑Markovian dynamics) and strong system-bath interactions (non‑perturbative regimes), where most existing methods fail or require switching between entirely different techniques depending on the parameters. This research presents a single unified framework that can handle all these regimes for interacting quantum spins coupled to bosonic environments.

The approach combines Schwinger-Keldysh field theory with the two‑particle‑irreducible (2PI) effective action and crucially uses a 1/N expansion of Schwinger bosons rather than a perturbative expansion in the system-bath coupling. This allows the method to remain accurate even in strongly non‑perturbative regimes. The framework can compute advanced quantities such as multitime spin correlations, which are essential for understanding quantum phase transitions and nonequilibrium transport in quantum materials.

The authors benchmark their method against quasi‑exact tensor‑network simulations of the spin‑boson model, showing excellent agreement in the regimes where tensor‑network methods are applicable, and then apply it to more complex spin‑chain models with multiple baths where no other method currently works. Because it supports arbitrary spin value, geometry, dimensionality, and bath spectral function, the framework offers a general and computationally tractable route to simulating many‑body open quantum systems.
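For readers unfamiliar with the benchmark, the spin-boson model couples a single two-level system linearly to a bath of harmonic oscillators. The display below uses standard textbook notation (with ħ = 1), not the paper’s own conventions; J(ω) is the bath spectral function referred to above.

```latex
% Standard spin-boson Hamiltonian (textbook notation, not taken from the paper):
% a two-level system with bias \epsilon and tunnelling \Delta, coupled linearly
% to bath oscillators with frequencies \omega_k and couplings g_k.
\[
H = \frac{\epsilon}{2}\sigma_z + \frac{\Delta}{2}\sigma_x
    + \sigma_z \sum_k g_k \left( b_k^{\dagger} + b_k \right)
    + \sum_k \omega_k\, b_k^{\dagger} b_k ,
\qquad
J(\omega) = \pi \sum_k g_k^{2}\, \delta(\omega - \omega_k)
\]
```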

Overall, this work provides a powerful field‑theoretic tool for studying driven‑dissipative quantum systems, with applications ranging from quantum computing to quantum magnonics and spintronics.

Do you want to learn more about this topic?

Keldysh field theory for driven open quantum systems by L M Sieberer, M Buchhold and S Diehl (2016)

Sunken nuclear submarine is leaking radioactive material intermittently

In April 1989 the Soviet Navy’s nuclear submarine Komsomolets caught fire while cruising 335 m beneath the surface of the Norwegian Sea. It was able to surface and 27 of 69 crew members survived the ordeal. The vessel then sank and now lies in 1680 m of water about 180 km off the coast of Norway’s Bear Island.

As well as being powered by a nuclear reactor, the Komsomolets is believed to contain two torpedo-mounted nuclear warheads. Not surprisingly, people are very concerned about the wreck and the possibility of radioactive materials leaking from the vessel.

Indeed, a Russian expedition in 1994 revealed that plutonium was leaking from one of the warheads. The following year, fractures in the hull and the torpedo tubes were sealed. Since then, measurements taken near the Komsomolets suggest that any radioactive leakage is rapidly diluted by the surrounding water.

Now, scientists in Norway led by Justin Gwynn and Hilde Elise Heldal have completed a comprehensive analysis of data taken during a 2019 survey of the Komsomolets. The wreck’s marine environment was explored using Ægir 6000, a remote-controlled vehicle that is equipped with an array of cameras and other instruments and is capable of diving to 6000 m.

Writing in the Proceedings of the National Academy of Sciences, the team says analysis of seawater and sediment samples collected near the torpedo compartment reveals no evidence of plutonium being released from the warheads. However, analysis of samples from near a ventilation pipe show that radioactive material is being released intermittently from the nuclear reactor. By measuring the ratio of plutonium to uranium in the region, the team concluded that the fuel in the reactor is corroding.

Despite releases over the past three decades, Ægir 6000 found little evidence that radionuclides were accumulating in the region of the wreck – most likely because of the diluting effect of seawater.

The research is described in PNAS, where the team concludes, “Considering the global increase in military activities and geopolitical tensions, the fate of Komsomolets and the nuclear material within it can provide us with important insights as to impacts of any future accident involving nuclear powered vessels and nuclear weapons at sea”.

Electrosolvation force can act over long distances

Electrosolvation experiment

Two particles carrying electrical charge with the same sign should not attract each other, but in recent years, researchers have found that they can do this when they are dispersed in a liquid. A team at the University of Oxford in the UK has now discovered that the distance over which this counterintuitive “electrosolvation” force acts is much longer than theoretical models currently predict. They have also shown that the range of the force can depend on particle properties such as size and surface chemistry.

“The new finding reveals a missing piece in our understanding of electrostatic forces in liquids,” says physical chemist Madhavi Krishnan, who led this research study. “It is likely to reshape our understanding of how biological matter may self-organize and how molecules like DNA, RNA and proteins may naturally condense and cluster inside cells.”

In their work, Krishnan and colleagues used optical imaging to observe how pairs of charged micron-sized spheres with various surface coatings, such as DNA, polypeptides and anionic lipid bilayers (which make up cell membranes), interact in water.

Not a uniform medium

“Conventional electrostatic models treat the solvent as a uniform medium with a dielectric constant, but real liquids (such as water) cannot be described in this way because they form hydrogen bond networks and orient themselves around surfaces. Liquids also exhibit long-range correlations. All these properties may play a role in giving rise to an additional force which we call the electrosolvation force,” explains Krishnan.

To come up with a comprehensive understanding of the electrosolvation interaction, the team had to dissect and carefully examine the phenomenology in question, she explains. A key feature of an interaction is its range. To measure the range of the attractive electrosolvation force accurately, Krishnan says that the students who carried out the experiments – her graduate student Sida Wang in particular – performed careful microscopy measurements on particles interacting with each other, observing individual pairs for periods of up to an hour and sometimes longer.

“We also performed exhaustive computer simulations to vet the measurements and estimate their accuracy,” Krishnan adds.

The researchers observed that DNA-coated particles exhibit particularly long-range attraction, which implies that the interaction depends not only on the solvent but also on the chemical and physical structure of the particles’ surface. This contrasts with the long-held view that the (Debye) screening length governing the interaction of charged particles in solutions depends only on the properties of the solvent medium.
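For context, the conventional Debye screening length is set entirely by the solvent and its dissolved ions, which is why a surface-dependent interaction range is so surprising. The Python sketch below evaluates the textbook expression for a simple 1:1 salt in water at 25 °C; the concentrations are assumed values for illustration and are not taken from the Oxford experiments.

```python
import math

# Textbook Debye length for a symmetric 1:1 electrolyte in water at 25 °C.
# Illustrative only: concentrations are assumed, not from the Oxford study.
eps0 = 8.854e-12   # vacuum permittivity, F/m
eps_r = 78.4       # relative permittivity of water at 25 °C
kB = 1.381e-23     # Boltzmann constant, J/K
T = 298.15         # temperature, K
e = 1.602e-19      # elementary charge, C
NA = 6.022e23      # Avogadro constant, 1/mol

def debye_length_nm(salt_mol_per_L):
    """Debye length in nm; depends only on the solvent and its ion content."""
    n = salt_mol_per_L * 1e3 * NA                 # ions per m^3, each species
    return math.sqrt(eps_r * eps0 * kB * T / (2 * n * e**2)) * 1e9

for c in (1e-4, 1e-3, 1e-2):                      # mol/L
    print(f"{c:.0e} M salt  ->  Debye length ~ {debye_length_nm(c):.1f} nm")
# 1e-04 M salt  ->  Debye length ~ 30.4 nm
# 1e-03 M salt  ->  Debye length ~ 9.6 nm
# 1e-02 M salt  ->  Debye length ~ 3.0 nm
```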

Krishnan explains that the finding that the measured range of the attractive electrosolvation force can significantly exceed the nominal Debye length is, to the team’s knowledge, not readily accounted for by any existing theory, and points to major gaps in our understanding of the basic question of how two charged particles interact in a liquid. Indeed, it highlights the need for a more sophisticated view of the intervening medium than that offered by standard continuum electrostatics models.

“Current electrostatic models are incomplete”

“In short, anionic matter seems poised to attract; and the ability to either attract or repel in water, depending on the conditions, appears to be an intrinsic feature of negatively charged matter,” she tells Physics World. “It is entirely possible that the underlying mechanisms behind this process are broadly exploited in biology.”

The new work, which is detailed in Reports on Progress in Physics, is the most recent in a series of investigations on the physics of interparticle interactions in the fluid phase, she says, and once again shows that current electrostatic models are incomplete – even under conditions in which they are expected to work well.

Looking ahead, the researchers say they would now like to examine the same interactions in bulk solution and compare those observations with the ones made on the sedimented colloids studied in the present work.

Spectroscopic OCT plus AI detects high-risk plaque in coronary arteries

AI-based OCT detection of lipid-rich plaque

Identifying lipid-rich plaques inside coronary arteries is critical to assess a patient’s risk of having a heart attack. These fatty deposits adhere to the walls of blood vessels and, if they rupture, can trigger adverse cardiovascular events.

Currently, physicians use near-infrared spectroscopy and intravascular ultrasound (NIRS-IVUS) to quantitatively assess plaque lipid burden. Optical coherence tomography (OCT) is another intravascular imaging modality used during catheter-based procedures and provides micrometre-resolution visualization of plaque structure. But its diagnostic accuracy is limited by imaging artefacts and signals originating from non-lipid plaque components.

Researchers in Korea are developing a different approach: combining the biochemical specificity of spectroscopic OCT (S-OCT) with artificial intelligence (AI). This combination enables automated, composition-aware tissue characterization, and offers interpretable and annotation-efficient lipid mapping.

“By enabling efficient lipid screening and spatial interpretation, [the technique] establishes a scalable foundation for downstream assessment of lipid burden and clinically relevant plaque characterization, with potential utilization for automated risk stratification,” the researchers explain in Biomedical Optics Express.

Model training and validation

The AI-enhanced S-OCT system, which utilizes existing OCT systems without requiring hardware modification, incorporates an AI model developed by researchers at the Korea Advanced Institute of Science and Technology (KAIST) and the Multimodal Imaging and Theranostic Lab of the Korea University Guro Hospital. The AI model receives wavelength-dependent information from the OCT images and, by recognizing signal patterns associated with lipid-rich tissue, automatically highlights any suspicious regions in the image.

Team leader Hyeong Soo Nam from KAIST and collaborators created a dataset to train the AI model, using 848 lipid-positive and 622 non-lipid frames acquired from images of five rabbits with induced atherosclerotic plaques. They manually annotated each OCT frame to indicate lipid presence, and employed complex calibrated interferometric signals obtained through standard OCT processing to extract depth-resolved spectroscopic information for S-OCT. Finally, they applied a vessel region selection procedure with a depth range selected to focus the analysis on biologically relevant vessel regions with potential lipid content.

After training the deep-learning model, the researchers evaluated its performance in classifying lipid presence and spatially localizing lipid-associated regions, on both lipid-positive and non-lipid image frames. They also performed histopathological analyses to validate the predictions against ground truth, and compared the performance of the trained S-OCT model with an identically trained greyscale OCT-only model to assess the benefit of incorporating spectroscopic information.

When assessing the relative importance of distinct spectral regions for lipid detection, the researchers discovered that training the model on a short-wavelength spectral subset (below 1300 nm) resulted in a higher lipid localization Dice score (a similarity metric) than a model trained on the long-wavelength band (above 1300 nm).
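The Dice score measures the overlap between a predicted region and a reference region, running from 0 (no overlap) to 1 (perfect agreement). The toy Python sketch below shows the standard definition applied to made-up binary masks; it is not code from the study.

```python
import numpy as np

# Standard Dice similarity coefficient between two binary masks.
# The masks below are made-up toy data, not OCT frames from the study.
def dice_score(pred, ref):
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    total = pred.sum() + ref.sum()
    return 1.0 if total == 0 else 2.0 * intersection / total

pred = np.array([[0, 1, 1], [0, 1, 0]])
ref  = np.array([[0, 1, 1], [1, 1, 0]])
print(f"Dice = {dice_score(pred, ref):.2f}")   # Dice = 0.86
```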

“This performance difference suggests the network relies more on spectroscopic features in the short-wavelength region, where lipid absorption shows a more pronounced spectral gradient,” they write. “The superior performance with short-wavelength data implies that the model effectively utilizes this spectral gradient, rather than relying on shared morphological features, to enhance lipid detection.”

The researchers validated their approach by imaging two rabbits with atherosclerotic plaques, and comparing the AI-generated predictions against histopathology results using lipid-specific tissue staining. The proposed model accurately localized lipid regions with strong spatial correspondence to histology, achieving a lipid localization Dice score of 83.9%.

“The results showed strong classification performance along with good spatial agreement with the pathological findings,” says Nam in a press statement. “By analysing wavelength-dependent information hidden in the OCT signal and combining it with AI, we were able to identify the presence and distribution of lipids within the vessel wall.”

“During a coronary intervention, this method could provide clinicians with additional information to support risk assessment, procedural planning and evaluation of treatment response,” Nam emphasizes. “Ultimately it has the potential to contribute to safer clinical decision making, more individualized treatment strategies and improved long-term management of patients with coronary artery disease.”

The team is currently working to improve the processing speed and robustness of the approach to make it more practical for real-time clinical use, and plans to perform validation studies using data from human coronary arteries. In addition, the researchers aim to create a seamless method for integrating data reporting (the presence or absence of lipid plaque) into the clinical workflow.

Rocket re-entry pollutes the upper atmosphere

Thanks to new resonance lidar measurements, researchers in Germany, the UK and Peru have successfully measured and traced a lithium plume created by a rocket stage as it uncontrollably re-entered and broke up in the upper atmosphere. The work represents the first time that upper-atmospheric pollution from space debris re-entry has been directly detected, they say. Such pollution is a growing concern and is only likely to worsen as more and more satellites are being launched into space, and in particular into low-Earth orbit.

The number of satellite and rocket launches has increased dramatically over the last decade and this number is set to increase as ever more commercial mega-constellations are deployed. For example, the Starlink constellation is planned to consist of over 40 000 satellites, each with a mass of between 305 and 960 kg. Given their typical operational lifetimes of five years, these satellites are expected to re-enter Earth’s atmosphere through uncontrolled decay within the next several years.
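Taken together, the figures quoted above already imply a substantial, continuous flow of material back into the atmosphere once such a constellation reaches steady-state replacement. The Python sketch below is an order-of-magnitude illustration using only those quoted numbers, not a result from the paper.

```python
# Order-of-magnitude illustration using the figures quoted above:
# ~40,000 satellites, 305-960 kg each, replaced on a five-year cycle.
n_sats = 40_000
lifetime_days = 5 * 365
reentries_per_day = n_sats / lifetime_days
for mass_kg in (305, 960):
    tonnes_per_day = reentries_per_day * mass_kg / 1000
    print(f"{mass_kg} kg satellites: ~{reentries_per_day:.0f} re-entries/day, "
          f"~{tonnes_per_day:.1f} t of material/day")
# 305 kg satellites: ~22 re-entries/day, ~6.7 t of material/day
# 960 kg satellites: ~22 re-entries/day, ~21.0 t of material/day
```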

Previous studies in this domain have mainly focused on the dangers of space debris falling to the ground, but we still know little about the environmental effects that the debris can have on our atmosphere. We do know, however, that the upper atmosphere is today host to many exotic atomic and molecular species that cannot be explained as having naturally come from meteors. This is worrying since the upper atmosphere is crucial for shielding life on Earth from meteoroids and UV radiation.

An intense fireball

At roughly 03:42 UTC on 19 February 2025, the upper stage of a SpaceX Falcon 9 rocket uncontrollably re-entered the atmosphere at an altitude of around 100 km, off the western coast of Ireland. The event produced an intense fireball that was witnessed by many people and picked up by radar systems, as well as a persistent high-altitude plume of lithium vapour. It also made headline news when fragments of the debris, including a fuel tank, were recovered near Poznań in Poland.

A team of researchers led by Robin Wing of the Leibniz Institute of Atmospheric Physics in Germany measured the concentration of lithium atoms in the mesosphere (which lies between 50 and 85 km in altitude) and the lower thermosphere (between around 85 and 120 km in altitude). They detected the lithium plume in the latter, using a resonance fluorescence lidar in Kühlungsborn in Germany. They also used locally measured winds from the SIMONe Germany meteor radar and global winds from the Upper Atmosphere ICON (UA-ICON) model to determine the path the lithium plume took and where it had originated.

Lidar is a laser-based remote sensing instrument that can be used to measure conditions in the atmosphere. The researchers chose to focus on lithium because it is routinely employed in spacecraft components, such as lithium-ion batteries and lithium-aluminium (Li-Al) alloy hull plating, but it is only naturally present in trace amounts at the altitudes studied. The flux of natural lithium (which comes from meteoric sources) is estimated to be around 80 g per day, while the amount of lithium contained in a single rocket stage is about 30 kg. This large disparity therefore makes lithium a sensitive tracer of man-made input from space debris re-entries, explain the researchers.
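That disparity is easy to quantify: by the figures quoted above, a single rocket stage carries roughly a year’s worth of the natural meteoric lithium input. The one-line Python check below simply restates that arithmetic.

```python
# Arithmetic check using the quoted figures: 30 kg of lithium per rocket stage
# versus a natural meteoric input of about 80 g per day.
natural_flux_g_per_day = 80.0
stage_li_g = 30_000.0
print(f"One stage ~ {stage_li_g / natural_flux_g_per_day:.0f} days of natural lithium input")
# One stage ~ 375 days of natural lithium input
```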

Vaporization of lithium begins at approximately 98 km altitude

Scientists already know that lithium rapidly vaporizes when a Li-Al structure ablates and it appears in the atmosphere as the aluminium matrix melts at 933 K. In their work, Wing and colleagues estimated the altitudes at which a Li-Al hull will begin to melt using the Leeds Chemical Ablation Model. For the hull thickness of the Falcon 9, they expect melting and vaporization of lithium to begin at approximately 98 km.

The strong atomic resonance fluorescence line of lithium at 670.7926 nm allows lidar to detect trace amounts of lithium in both the mesosphere and the lower thermosphere. This enabled the researchers to perform altitude- and time-resolved measurements of the amount of lithium during and after re-entry events. Thanks to six hours of measurements on the night of 19–20 February, they detected a sudden increase in the signal at about 96 km altitude, by a factor of 10 from the baseline value, just after midnight UTC on 20 February.

The results from this work, which is detailed in Communications Earth & Environment, also back up a recent study of the lower stratosphere conducted by Daniel Murphy at NOAA and colleagues, which attributed significant middle-atmospheric pollution to space debris.

Potential harm to the ozone layer

Analysing the impact of re-entering space debris on the atmosphere is quite new, says Wing. “The paper by Murphy and colleagues, which was published in 2023, showed that 10% of stratospheric aerosols are already contaminated by materials from space debris. This previous work really motivated us to build a lidar capable of measuring what is left behind when rockets or satellites disintegrate in the atmosphere.”

The primary concern surrounding how space debris impacts the atmosphere is currently potential harm to the ozone layer, he tells Physics World. “Our work shows that we can now measure emissions from re-entering space objects and can use winds from radar observations and models to identify potential sources. By applying similar or improved setups to ours around the globe, the scientific community could provide the space industry with solid findings so we can all optimize the use of space.”

The researchers say they are now working on building a new and improved system to measure lithium and sodium. “We would also like to conduct the first survey of various metals such as copper, titanium and lead in the atmosphere that could be connected to space debris,” says Wing.

Academic collaboration with industry is no longer optional – it is now essential

Anyone paying even cursory attention to the research landscape in recent months would have noticed the growing turbulence in public science funding on both sides of the Atlantic. In the UK, the research community has been shaken not by a single dramatic cut, but by a prolonged period of budgetary tightening at UK Research and Innovation (UKRI), driven by flat-cash settlements, rising inflation and increasing pressure to redirect funding towards government-defined missions.

Although government ministers continue to emphasize “record” overall R&D spending, UKRI has been forced to make difficult reprioritization decisions, leading to pauses and closures of several schemes across the research councils. The effects are already being felt, with competition for remaining funding intensifying. Success rates are coming under strain and many researchers are facing heightened uncertainty about the viability of pursuing curiosity-driven research.

Globally, the picture is similar. In the US, the National Science Foundation has become a focal point of intense budgetary uncertainty, with proposed reductions and flat-cash congressional settlements placing growing strain on its ability to sustain investigator-led research. In Europe the €95.5bn Horizon Europe programme faces mounting political pressure to demonstrate impact and value for money amid economic uncertainty and competing fiscal priorities.

For academics, these dynamics translate into tougher competition for grants, longer odds of success and an increasing reliance on short-term, project-specific funding rather than stable, long-horizon research support.

Academic science has always been under pressure to deliver more with less. But the current climate feels different. The combination of shrinking government budgets, rising operational costs and increasing competition for limited grants has created a perfect storm. For early-career researchers and established labs alike, the traditional model of securing public funding is becoming unsustainable.

The implications are profound. Without adequate resources, research groups risk losing momentum, emptying talent pipelines and stalling innovation. For many the question is no longer “how do we grow?” but “how do we survive?” Yet amid these challenges lies an opportunity: forging deeper, more strategic partnerships with industry.

The path ahead

You may ask the question “why would companies invest in academic research?” The answer is simply innovation. Industry thrives on differentiation, and academic partnerships offer a cost-effective way to access cutting-edge science without bearing the full burden of in-house R&D.

Consider the pharmaceutical sector. Drug discovery is notoriously expensive and time-consuming but collaborating with academic labs allows companies to tap into specialized expertise, advanced facilities and novel methodologies. Similarly in energy and materials science, universities often lead the way in developing next-generation technologies that can redefine markets.

Beyond innovation, partnerships also offer credibility. Peer-reviewed publications and independent validation enhance a company’s reputation and can accelerate regulatory approval. For industries facing complex challenges, such as sustainability, cybersecurity or quantum computing, academic collaboration is not a luxury; it’s a necessity.

So, what can be done to strengthen academic collaboration with industry? The first step is a subtle but important mindset shift. For many researchers, academia has traditionally operated with a strong internal focus, where industry engagement is seen less as undesirable and more as additional – something that sits alongside core research rather than at its centre.

This isn’t about viewing collaboration as secondary or compromising, but about recognizing that aligning fundamental research with industry priorities takes time and sustained effort. It introduces new constraints, different timelines and added complexity into already demanding research programmes.

The challenge, then, is not one of principle, but of practicality. Collaboration is not about box-ticking or “selling out”; it’s about creating the conditions in which fundamental research can remain connected, impactful and resilient in an increasingly complex research ecosystem.

Academics should look for companies with long-term goals that align with their research expertise – creating shared value, not just chasing sponsorships. Another aspect to remember is that industry mandates tangible outcomes. While fundamental research remains vital, framing projects in terms of applied benefits can unlock funding.

It is also important that academics learn to communicate impact. Industry leaders speak the language of “minimum viable product”, “return on investment” and “risk mitigation”. Academics must learn to articulate how their work translates into competitive advantage.

This mindset shift requires effort, but the payoff can be significant in sustained funding streams, access to real-world data, as well as opportunities to test theories in practical settings. When done right, such collaborations also create a virtuous cycle. Academics secure funding and maintain research momentum, while industry gains competitive advantage, joint publications, shared intellectual property and co-developed technologies that strengthen both ecosystems.

Such partnerships can also foster talent development. Graduate students and postdocs gain exposure to realistic problems, enhancing employability and bridging the gap between theory and practice – a critical outcome, given the current bleak outlook for graduate employment worldwide.

For industry, this means access to a pipeline of skilled professionals who understand both scientific rigor and commercial realities. The benefits also extend beyond economics. Collaborative projects often tackle grand challenges – climate change, healthcare, digital security – that no single entity can solve alone. By pooling resources and expertise, academia and industry can drive progress at a scale that matters.

An industrial collaboration ‘playbook’

Mark Procter outlines his five principles for building successful partnerships between academia and industry.

1 Align on impact, not just intellectual property

Focus on creating measurable outcomes rather than solely on rigid intellectual property battles. Impact drives funding and reputation for both sides.

2 Define mutual gains early

Establish clear objectives that benefit both academic advancement and industrial innovation. Document these in a collaboration charter before work begins.

3 Streamline governance

Simplify legal frameworks and reduce administrative friction. Negotiating non-disclosure and intellectual property agreements should not take longer than the research itself.

4 Embed talent exchange

Include opportunities for student placements, joint supervision and secondments. This builds trust and creates a pipeline of skilled professionals. Reciprocally, universities should structure their own professional development opportunities in collaboration with industry.

5 Measure success beyond publications

Track metrics such as technology readiness progression, prototype development and demonstrable economic impact, not just journal citations.

To make this vision a reality, collaboration must be incentivized. Funding agencies can play a pivotal role by enabling grants that include industrial partners, while tax incentives for collaborative R&D could further accelerate uptake.

At the same time, universities must embrace cultural change. Academics must move beyond the notion that collaboration dilutes scientific integrity. Transparency and clear governance can safeguard independence while enabling impact.

The future of academic science may well depend on its ability to align with industry. The current rhetoric from UKRI focuses on return on investment from publicly funded research to meet the UK’s industrial strategy.

In a world where resources are scarce and challenges are complex, working together is the only way forward. The coming decade will test the resilience of academic research. Those who cling to old models risk obsolescence. Those who adapt by embracing industry partnerships will not only survive but thrive. The question is not whether collaboration is necessary, but how quickly we can make it happen.

Could lightning occur on Mars?

Researchers in the Czech Republic say they may have observed the signature of a “whistler” in a one-second snapshot captured by the MAVEN probe orbiting Mars. The event, observed in the ionosphere of the planet, would be the first lightning-like electric discharge activity ever to be seen there and the finding will be important for understanding atmospheric processes in the Martian atmosphere.

“Whistlers are well known on Earth and are associated with lightning,” explains space physicist František Němec at Charles University, who led this research effort. “Our result implies that this phenomenon also occurs on our planetary neighbour.”

Unlike Earth, Mars does not have a global magnetic field, but only localized fields created by magnetized materials in the planet’s crust. And because its atmosphere is thin, lightning on this planet does not originate in water clouds but instead in dust storms, similar to those observed in volcanic eruptions here on Earth, and in dust devils.

During dust storms, dust grains become electrically charged as they collide with each other and generate an electric field. On Mars, previous studies have predicted that this field can discharge when its value exceeds the breakdown threshold in the low-pressure Martian atmosphere, which is around 15 kV/m.

Dust devils, for their part, can produce ultralow-frequency radiation on Earth thanks to the electrical charges that fluctuate as the dust swirls around. Since both dust devils and dust storms are much stronger on Mars, theory suggests that they could generate wideband radiation that we could detect on Earth. Despite recent measurements by the Allen Telescope Array, the Mars Global Surveyor (MGS) and Mars Atmosphere and Volatile Evolution (MAVEN) missions and the Mars Express spacecraft, conclusive evidence for Martian lightning has yet to be found.

Analysing electromagnetic radiation

Another way to detect these electric discharges, says Němec, is to analyse the electromagnetic radiation that accompanies them. This radiation lies in the extremely low frequency/very low frequency range and, under some conditions, can reach the ionosphere of a planet. The phenomenon was first identified and observed on Earth shortly before the space era and such electromagnetic waves have successfully been used to provide evidence for lightning on Jupiter, Saturn and Neptune since then.

These waves are known as whistlers, he explains, because of their characteristic spectral pattern in the plasma medium of the ionosphere. Here, higher frequency waves propagate faster and arrive at the observation point sooner than lower frequency ones, resulting in a characteristic “whistling” spectral shape.

The observational challenge is that these waves can penetrate the Martian ionosphere only on the nightside of the planet and when the magnetic field is pointing in the vertical direction. This largely restricts the areas over Mars where whistlers can be observed by spacecraft – namely, to the relatively small crustal field regions in the southern hemisphere of the planet.

Němec says he has now identified the electromagnetic wave signature of a whistler on Mars in a snapshot captured by the MAVEN probe on 21 June 2015. “I first identified it at night in a region with a strong and nearly vertical magnetic field, something that is crucial for the wave to be able to propagate to the altitude at which the probe is orbiting without its signal attenuating too much.”

Out of the many wave snapshots analysed (108,418 in total), only this single event contained a whistler signature, he tells Physics World. “This likely reflects the rarity of the phenomenon itself, as well as the specific ionospheric and magnetic field conditions required for the wave to propagate all the way to the spacecraft.”

The MAVEN probe has been orbiting Mars since 2014 and sent back data to Earth until we lost communication with it last year. While no large-scale dust storms were recorded on the planet at the moment at which the probe captured the whistler, Němec and colleagues say the effect might have come from a local dust event.

Different propagation speeds

“Whistlers are formed because, in the ionized plasma of the ionosphere, different signal frequencies propagate at different speeds,” explains Němec. “As a result, although all frequencies are generated simultaneously during a lightning discharge, the higher frequencies – which propagate faster – arrive at the spacecraft first, followed later by the lower frequencies.”
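In the simplest textbook treatment, this frequency-dependent delay follows Eckersley’s dispersion law, t(f) ≈ D/√f, where the dispersion constant D is set by the plasma along the propagation path. The Python sketch below uses an assumed value of D purely to illustrate the characteristic downward frequency sweep; it is not fitted to the MAVEN observation.

```python
import math

# Eckersley approximation for whistler dispersion: group delay t(f) ~ D / sqrt(f),
# so higher frequencies arrive first.  D is an assumed illustrative value,
# not one measured for the Martian event.
D = 10.0  # s * Hz^0.5, assumed for illustration
for f_hz in (4000, 1000, 250):
    delay = D / math.sqrt(f_hz)
    print(f"f = {f_hz:5d} Hz  ->  relative delay ~ {delay:.2f} s")
# f =  4000 Hz  ->  relative delay ~ 0.16 s
# f =  1000 Hz  ->  relative delay ~ 0.32 s
# f =   250 Hz  ->  relative delay ~ 0.63 s
```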

The researchers, who detail their work in Science Advances, calculated these corresponding time delays and say that their observations agree “very well” with theoretical predictions. They also calculated how the waves attenuate by adapting methods used for Earth to the assumed composition of the Martian ionosphere. The results revealed that higher frequencies are more strongly attenuated, which explains why only the lower-frequency portion of the whistler is observed, says Němec.

The existence of strong lightning-like electrical discharges in the Martian atmosphere highlights the need to better understand the relevant atmospheric processes on the Red Planet, with a particular focus on dust storms and dust devils, he adds. The sudden energy release accompanying such discharges also clearly has the potential to locally alter the atmospheric chemistry.

The Charles University researchers together with their colleagues at the Institute of Atmospheric Physics say they are now working on a detailed analysis of how waves attenuate as they propagate through Mars’ ionosphere. “Importantly, we are actively involved in the design and development of the European Space Agency’s M7 mission candidate M-MATISSE,” reveals Němec. “This two-spacecraft mission, scheduled for launch in 2037, would feature advanced instrumentation for, among other things, wave measurements, and would allow for more detailed investigations of the relevant phenomena.

“We are very excited about this opportunity and hope that the mission will ultimately be adopted.”

Is ‘vibe physics’ the future?

At the American Physical Society’s Global Physics Summit in Denver, a session on “Navigating the AI revolution: future-proofing your science career” drew a crowd of early-career physicists searching for practical career advice. What they received was much more philosophical in nature.

Malachi Schram of the Pacific Northwest National Lab and Hilary Egan of the National Laboratory of the Rockies delivered back-to-back talks full of similar rhetoric, emphasizing the fast-paced development of AI used for specialized tasks in science, such as detecting equipment failure or identifying ways of retrofitting older buildings.

But the third speaker, Matthew Schwartz, a theoretical physicist from Harvard University, took his optimism about AI far further. In a punchy presentation, he predicted that large language models (LLMs) will surpass human intelligence in five years.

“There’s definitely exponential growth of the intellectual capacity of these [large language] models as a function of time,” Schwartz told the audience, using the number of model parameters as a proxy for intelligence. “The machines are still growing by roughly 10 times each year, and we” – he paused for dramatic effect – “are not growing much smarter.” This drew a wave of laughter from the crowd.

Unlike humans, machines can visualize higher dimensional spaces, hold far more information in memory and process more complex equations. “We are not the endpoint of intelligence. We are only the smartest things to evolve on Earth so far,” Schwartz argued. He went on to suggest that humans may simply be incapable of understanding long-standing physics problems such as a theory of everything. He compared it to cats, which he suggested will never understand chess.

If the talent of physicists exists on a bell curve, Schwartz claimed we can push the bell curve higher on the talent axis: “If we use AI augmentation, we can get 10,000 Einsteins a century instead of one Einstein.”

Opposing views

The next speaker after Schwartz was Matthew Ginsberg from Google DeepMind. Speaking in a personal capacity, he expressed strong disagreement over AI’s ability to advance physics. “Asking questions is the essence of being a good physicist, and this is, at least so far, 100% our domain,” he argued. In a direct response to Schwartz, Ginsberg added, “We aren’t being eaten away at exponentially. No, asking good questions is us. It’s what we’re good at.” He concluded, “I remain hopeful that we have a role to play.”

In his talk, Ginsberg emphasized the importance of human creativity. “LLMs generate the consensus response to hard questions,” he said. “You are the best physicist when you give the non-consensus answer, which is what AI is incapable of doing.”

In a concluding panel discussion, the four experts seemed to converge on the idea that the human contribution to physics has to do with what some called “taste,” others “creativity,” or “asking good questions” (seemingly, questions that humans find interesting). However, over the three-hour session, Schwartz and Ginsberg independently predicted that AI may develop the ability to ask good questions in the next decade.

If so, this could undermine the main argument for the value of humans in science. So, does there exist a deeply human role in the physics of the future, or is “vibe physics” on the 10-year horizon? Perhaps only time will tell.
